ICS-2026-ET-001 · The EdTech Capture · Saga IX

How EdTech Entered the Classroom

Educational technology entered through the trust channel. Parents trust schools. Schools trust the software. The software was never evaluated for what it was actually doing.

Named condition: The Trust Arbitrage · Saga IX · 16 min read · Open Access · CC BY-SA 4.0
1,400+
EdTech applications used by the average US school district
$35B
US EdTech market size, growing at 16% annually
73%
of teachers who adopted EdTech tools without formal district procurement review

The Trust Channel

Schools occupy a position in the lives of children that no other institution replicates. Attendance is compulsory. The state requires it. Parents comply not merely because of legal obligation but because they believe — correctly, in the aggregate — that schools act in the interest of their children. This belief is the foundation of the trust channel.

The trust channel operates through a specific legal and social architecture. Parents grant schools in loco parentis authority: the legal doctrine that the school stands in the place of the parent during school hours and for school-related activities. This authority extends to decisions about curriculum, classroom materials, disciplinary practices, and — critically — the tools and technologies deployed in the educational environment. When a school selects a textbook, a parent does not review the publisher's contract. When a school installs laboratory equipment, a parent does not audit the manufacturer's data practices. The school's institutional judgment serves as the parent's proxy.

This proxy function is not irrational. Schools employ credentialed educators, operate under district oversight, and are subject to school board governance that includes elected parental representatives. The institutional layers between a classroom technology decision and a child's exposure to that technology are real. They exist precisely to ensure that the decisions made within schools reflect professional judgment and institutional accountability. The trust that parents place in this system is earned trust — earned over generations of institutional operation in which the school's interests and the child's interests were broadly aligned.

The trust channel becomes a vulnerability when the technology deployed through institutional authority operates according to a logic that the institutional review process does not evaluate. A textbook's function is legible: it contains information; it can be reviewed; its content is static and inspectable. A software platform's function is not equivalently legible. It collects data. It deploys engagement architectures. It modifies behavior through feedback mechanisms. It transmits information about its users to servers controlled by entities the school has no oversight relationship with. None of these functions are visible to the institutional review process that approved the software for classroom use — because the review process was designed for a different category of educational tool.

The Procurement Bypass

The standard procurement process for educational materials — district-level review, committee evaluation, board approval, contract negotiation — exists to ensure that products entering the classroom meet educational standards, comply with applicable regulations, and represent responsible use of public funds. This process, whatever its imperfections, interposes institutional review between a commercial product and a child. EdTech entered schools through multiple pathways that circumvented this review entirely.

Direct teacher adoption. Individual teachers discovered applications through professional networks, conferences, online recommendations, and direct marketing. A teacher found a quiz application that made test review more engaging. A teacher adopted a reading platform that tracked student progress in real time. A teacher deployed a classroom management app that gamified behavior. Each adoption decision was made by a single educator for a single classroom, below the threshold that triggers district procurement review. The applications entered the classroom on the teacher's professional judgment — judgment that extended to pedagogy and classroom effectiveness but not to data architecture, behavioral modification design, or third-party data sharing agreements.

Freemium models. EdTech companies offered free versions of their products to individual teachers and schools, with paid tiers available for additional features. The free tier bypassed procurement entirely: there was no expenditure to review, no contract to negotiate, no purchase order to approve. The free tier also served as an adoption mechanism — once students and teachers were habituated to the platform, the upgrade to paid tiers followed institutional momentum rather than institutional review. The procurement process, when it finally engaged, was evaluating whether to continue using a product already embedded in classroom practice, not whether to adopt it in the first place.

Grant-funded pilots. External funding from foundations, federal programs, and EdTech companies themselves supported pilot deployments that operated outside the district's normal purchasing authority. A grant covering the cost of a technology pilot created no charge against the district budget and therefore triggered no procurement review. The pilot's evaluation criteria were defined by the grant's terms — typically engagement metrics and teacher satisfaction — rather than by the district's standards for classroom materials. When the pilot concluded and the district faced the choice of continuing or discontinuing a technology already in use, institutional inertia favored continuation.

Emergency purchasing. Crisis conditions — most dramatically the COVID-19 pandemic, but also natural disasters, school safety incidents, and infrastructure failures — activated emergency procurement provisions that suspended normal review timelines. Products adopted under emergency provisions entered the classroom with minimal evaluation and remained after the emergency ended. The emergency created the installed base; institutional inertia maintained it.

Each of these pathways shares a structural feature: the institutional review process that would evaluate the product's data practices, engagement architecture, and behavioral modification design was either bypassed entirely or engaged only after adoption had already occurred. The product entered the classroom first. The evaluation, if it happened at all, followed.

The COVID-19 Acceleration

The COVID-19 pandemic compressed a decade of gradual EdTech adoption into approximately ninety days. Between March and June 2020, the overwhelming majority of American K-12 schools transitioned to remote or hybrid instruction. This transition was not planned. It was not piloted. It was not evaluated. It was an emergency response to an unprecedented public health crisis, and it required the immediate deployment of digital platforms capable of delivering instruction to students at home.

The scale of the deployment is documented. Google Classroom roughly doubled from approximately 40 million users to more than 100 million between early 2020 and April 2020. Zoom's daily meeting participants grew from 10 million in December 2019 to over 300 million by April 2020, with a substantial fraction representing educational use. Microsoft Teams, Canvas, Schoology, and dozens of smaller platforms experienced comparable growth. The educational technology stack that had been a supplement to classroom instruction became, overnight, the entire delivery mechanism for public education.

Emergency procurement provisions suspended normal review timelines. Districts that would ordinarily require months of committee review, pilot evaluation, and board approval for a new technology platform adopted platforms in days. The Federal Emergency Management Agency and the Department of Education issued guidance encouraging rapid technology adoption. State procurement offices issued blanket waivers. The institutional safeguards that normally mediate between a commercial product and a child were suspended — not out of negligence but out of necessity. Children needed to continue their education. Digital platforms were the only available mechanism. Speed was the overriding priority.

The emergency adoption created an installed base of unprecedented scale. When in-person instruction resumed, the EdTech platforms adopted during the emergency did not leave. They had become embedded in instructional practice, administrative workflows, and student habits. Teachers had built curricula around them. Students had accounts and usage histories on them. Administrators had reporting dashboards that depended on them. The emergency was temporary. The technology deployment was permanent.

The permanent deployment inherited the evaluation deficit of the emergency adoption. Products adopted in days, under crisis conditions, with no procurement review, continued operating in schools with no subsequent comprehensive evaluation of their data practices, engagement architectures, or behavioral modification effects. The evaluation that was skipped during the emergency was never performed after it.

What the Evaluation Framework Measured

When EdTech products were evaluated — in the minority of cases where formal evaluation occurred — the evaluation framework measured four categories of outcome: engagement metrics (are students using the product? how frequently? for how long?), test score correlation (do students using the product show improvement on standardized assessments?), teacher satisfaction (do educators find the product useful and easy to integrate?), and technical reliability (does the product function without excessive downtime or technical failures?).

These four categories share a structural characteristic: they measure what the product does for the school. They do not measure what the product does to the student. Engagement metrics measure the behavior the EdTech revenue model requires — time on platform, interaction frequency, return visits — not the cognitive or developmental effects of that engagement. Test score correlation measures the educational outcome the school is accountable for, not the attentional or behavioral consequences of the delivery mechanism. Teacher satisfaction measures the product's integration into existing pedagogical practice, not its effects on student data privacy or cognitive architecture. Technical reliability measures whether the product works, not what it does when it is working.

What is absent from the evaluation framework is equally specific and documentable. No standard EdTech evaluation framework in common use assesses: the scope and destination of student data collection; the presence and design of behavioral modification architectures (gamification, variable reward schedules, social comparison mechanics); the attentional effects of the product's interface design; the third-party data sharing practices of the vendor; or the long-term developmental consequences of sustained use. These are not obscure or technically inaccessible measurements. They are excluded because the framework was designed to assess educational utility, and the harms of EdTech products do not fall within that category. They fall within data privacy, cognitive development, and behavioral modification — categories that no one in the procurement process was tasked with evaluating.
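The asymmetry between what the rubric measures and what it omits can be sketched as a simple data structure. The dimension names below are illustrative, not drawn from any actual district rubric:

```python
# Hypothetical rubric: the four categories districts typically score,
# versus the dimensions the text notes no standard framework assesses.
MEASURED: dict[str, str] = {
    "engagement": "time on platform, interaction frequency, return visits",
    "test_score_correlation": "improvement on standardized assessments",
    "teacher_satisfaction": "usefulness and ease of integration",
    "technical_reliability": "uptime and failure rate",
}

UNMEASURED: dict[str, str] = {
    "data_collection_scope": "what is collected and where it is sent",
    "behavioral_modification": "gamification, variable rewards, social comparison",
    "attentional_effects": "interface design effects on attention",
    "third_party_sharing": "vendor data-sharing practices",
    "developmental_consequences": "long-term effects of sustained use",
}

def coverage_gap(evaluated: dict[str, str], required: dict[str, str]) -> list[str]:
    """Return the dimensions a review under this rubric would never examine."""
    return [dim for dim in required if dim not in evaluated]
```

Running `coverage_gap(MEASURED, UNMEASURED)` returns every harm-relevant dimension, which is the structural point: the gap is not partial overlap but complete disjunction between what is scored and what matters.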

Standard Objection

Schools have always used commercial products — textbooks, calculators, laboratory equipment — without extensive procurement review of each item. EdTech is a tool like any other educational resource.

Textbooks do not collect behavioral data on their readers. Calculators do not track usage patterns and sell them to third parties. Laboratory equipment does not deploy engagement architectures designed to modify student behavior. The comparison fails because the prior generation of educational tools did not have the capacity to surveil, modify behavior, or monetize student data. EdTech is not a tool like any other — it is a data collection and engagement architecture deployed through the trust channel. The fact that schools have historically adopted commercial products without extensive review is precisely the institutional habit that EdTech companies exploit. The precedent was established with products that could not harm students through data extraction. The precedent is now applied to products that can.

The Trust Arbitrage in Practice

The Trust Arbitrage is not a theoretical construct. It operates in documented, specific ways across the American educational system.

Google Classroom and the educational data ecosystem. Google provides Google Workspace for Education to schools at no cost. The product is functional, reliable, and well-integrated. Schools adopt it because it solves real administrative and instructional problems at zero direct cost. Parents are informed that the school uses Google products. What parents are not informed of — because the school itself may not fully understand it — is the scope of behavioral data that Google collects through the educational platform, the relationship between that educational data and Google's broader advertising and data infrastructure, and the terms under which student data may be retained, processed, or used to train machine learning models after the student leaves the school system. The agreement governing data use is between Google and the school district. Parents are not parties to it. The Trust Arbitrage operates through this gap: parents trust the school; the school trusts Google; Google operates under terms the parents never reviewed and the school may not fully comprehend.

Gamified learning platforms and engagement architecture. Platforms such as Kahoot, Prodigy, and similar gamified educational applications deploy engagement mechanics — points, streaks, leaderboards, variable reward schedules, time pressure, social comparison — that the behavioral science and engagement literature identifies as compulsion-forming. These are the same mechanics deployed by social media platforms and mobile games to maximize time on platform. They are deployed in educational software because they increase the engagement metrics that schools evaluate. The Trust Arbitrage operates because parents assume that software deployed in a school has been evaluated for developmental appropriateness. The engagement architecture has not been evaluated at all — because the evaluation framework measures engagement quantity, not engagement quality or developmental consequence.
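The variable reward schedule named above is the simplest of these mechanics to make concrete. The sketch below is a generic illustration of a variable-ratio schedule, not the implementation of any named platform:

```python
import random

def session_rewards(n_answers: int, mean_ratio: int = 4, seed: int = 0) -> list[bool]:
    """Variable-ratio schedule: each response has a 1/mean_ratio chance of
    triggering a bonus, so rewards arrive unpredictably while averaging one
    per mean_ratio responses. Unpredictable reward timing is the property
    the conditioning literature associates with persistent, compulsive
    responding."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(n_answers)]

# Contrast with a fixed-ratio schedule (a bonus on every 4th answer),
# which is predictable and far weaker at sustaining the behavior.
rewards = session_rewards(20)
```

The design choice that matters is the randomness itself: a student who cannot predict which answer produces the bonus keeps answering, which is exactly the "time on platform" metric the evaluation framework rewards.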

Reading apps and attention data. Digital reading platforms deployed in elementary schools measure reading behavior at a granularity that no physical book permits: time per page, re-reading patterns, reading speed, abandonment points, distraction frequency. This data has educational utility — it can identify struggling readers and inform instructional decisions. It also constitutes a detailed behavioral record of the child's cognitive patterns, collected under conditions where the child has no ability to opt out (the reading is assigned) and the parent has no visibility into the data's collection, storage, or use. Some platforms share this data with third parties under the educational purpose exemption. The Trust Arbitrage converts the school's authority to assign reading into a data collection mechanism that operates without meaningful parental consent.
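The granularity described above can be sketched as a hypothetical event record. Every field name here is an assumption for illustration, not any real platform's schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class PageReadEvent:
    """Hypothetical telemetry record at the granularity a digital reading
    platform can capture and a physical book cannot. Illustrative only."""
    student_id: str       # persistent identifier, often shared across apps
    book_id: str
    page: int
    seconds_on_page: float
    rereads: int          # times the student returned to this page
    abandoned: bool       # whether reading stopped at this page
    focus_losses: int     # app backgrounded or window unfocused

event = PageReadEvent(student_id="s-102938", book_id="b-44", page=17,
                      seconds_on_page=42.5, rereads=1, abandoned=False,
                      focus_losses=3)
record = asdict(event)    # the dict that leaves the device for the vendor
```

Each assigned chapter generates dozens of such records, which is what makes the resulting dataset a behavioral profile of the child's cognition rather than a simple reading log.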

Why Parents Did Not Know

The disclosure asymmetry that enables the Trust Arbitrage operates through three structural layers.

The terms-of-service gap. EdTech companies disclose their data practices in terms of service and privacy policies. These disclosures are made to the contracting party — the school or district — not to the parents of the students whose data is collected. Parents do not sign the terms of service. In most cases, parents do not see the terms of service. The disclosure exists in a legal sense — the company has stated what it does — but the disclosure does not reach the people whose children are affected. The company discloses to the institution. The institution does not relay the disclosure to the parent. The parent does not know what the company disclosed.

The stack opacity. The average school district uses over 1,400 educational technology applications. No district has the administrative capacity to review, monitor, and communicate the data practices of 1,400 vendors to parents. The sheer scale of the EdTech stack makes comprehensive disclosure practically impossible. Even a district with the best intentions and the most robust data governance practices cannot maintain parental awareness of what every application in its technology stack does with student data. The stack is too large, the practices are too varied, and the administrative resources available for technology governance are too limited.

The trust default. Parents operate on a trust default: the assumption that the school has evaluated the technologies it deploys and determined them to be safe and appropriate for student use. This assumption is rational — it is the same assumption parents make about every other aspect of the school environment, from building safety to food quality to curriculum content. The assumption fails in the EdTech context because the school's evaluation process does not assess the dimensions of EdTech products that produce the harms: data collection scope, behavioral modification design, and third-party data sharing. The parent trusts the school's judgment. The school exercised judgment on the wrong criteria. The Trust Arbitrage closes.

The result is a structural condition in which the most comprehensive surveillance and behavioral modification infrastructure ever deployed on children operates through the institution that parents trust most to protect them. The access channel is the school. The trust is the arbitrage. The children are the product.

Named Condition · ICS-2026-ET-001
The Trust Arbitrage
"The mechanism by which educational technology companies exploit the institutional trust that parents place in schools to deploy data collection, behavioral modification, and engagement architectures that would not survive direct parental scrutiny. The Trust Arbitrage operates through the school's intermediary position: parents grant schools authority over the educational environment; schools grant EdTech access to students on the assumption of educational benefit; EdTech operates within that access without the disclosure and consent requirements that would apply if they approached families directly. The school's institutional authority is the arbitrage — the gap between what the school represents to parents and what the EdTech product actually does inside the institution the parent trusts."
Series Hub · ET
The EdTech Capture
Series overview and all five papers in the EdTech Capture.
Next · ET-002
What Schools Gave Away
The Data Collection Event: the most comprehensive behavioral record ever assembled on children in institutional contexts.

References

Internal: This paper is part of The EdTech Capture (ET series), Saga IX. It draws on and contributes to the argument documented across 22 papers in 5 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.