“There is no constitutional right to psychologically manipulate children for profit. The question is whether Congress is willing to say so clearly enough that courts will agree.”
— Composite framing from legal scholars on child protection legislation and the First Amendment, 2023–2024
The Legislative Record
On July 30, 2024, the United States Senate passed the Kids Online Safety Act by a vote of 91 to 3. The margin was not a procedural artifact: nearly every senator in both parties who cast a vote supported the bill. The three dissenting votes came from senators whose objections ranged from the bill's insufficient strength to civil liberties concerns. The bill had the support of the American Psychological Association, the American Academy of Pediatrics, bipartisan state attorneys general, and the parents of children who had died by suicide in circumstances linked to social media use, several of whom testified before Congress in sessions that produced genuine, visible, cross-partisan emotional response.
The House of Representatives never voted on it.
This is the central fact of the Kids Online Safety Act record and the fact from which the analysis in this paper proceeds. A bill that achieved near-unanimous bipartisan support in one chamber — a chamber not typically known for near-unanimity on anything — died because the other chamber declined to bring it to a floor vote before the 118th Congress ended in January 2025. To understand what this means for the architecture of child protection law requires understanding what the bill actually said, what legal arguments were mobilized against it, who mobilized them, and what the gap between those arguments and the underlying legal record reveals about the structural problem of legislating digital child protection in the United States.
KOSA's legislative history began in 2022 when Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) introduced the first version. The original bill included a duty of care provision that would have required platforms to act in the best interest of minors — language modeled loosely on existing duties in other consumer protection contexts. After significant criticism from civil liberties organizations and an unusual lobbying alignment that will be examined in Section IV, the bill was substantially revised in 2023. The duty of care language was narrowed. The enforcement mechanism was modified. The 2024 version that achieved the 91–3 Senate vote was a more targeted instrument than the 2022 original, focused on specific design features and data practices rather than a general best-interest standard. This revision did not satisfy the bill's opponents.
What KOSA Actually Provided
The 2024 version of the Kids Online Safety Act contained three primary operative components. Understanding each is necessary to evaluate the constitutional arguments made against the bill, because the arguments were frequently directed at the 2022 version, or at a characterization of the bill that did not accurately describe the text the Senate actually passed.
The Duty to Act in the Best Interest of Minors
The revised bill applied to covered platforms: social media services, video streaming services, online games, and messaging applications with more than a threshold number of users, where the platform knew or reasonably should have known it was accessed by minors. Those platforms were required to exercise reasonable care in the design and operation of the platform, including its recommendation systems, to prevent and mitigate specified harms to minors. The specified harms were enumerated: anxiety, depression, and suicidal behaviors; use of illegal substances; online bullying and harassment; sexual exploitation and abuse; eating disorders; and content that facilitates these outcomes.
The duty was not a general best-interest standard. It was a duty to take reasonable care with respect to a specific and enumerated list of harms for which substantial independent evidence of algorithmic contribution existed in the published scientific literature. The distinction matters for First Amendment analysis, as will be discussed in Section III.
Default Protective Settings for Minors
KOSA required platforms to enable the most privacy-protective settings by default for users they knew or should have known to be minors. This included: disabling algorithmic content recommendations that were not directly related to content the minor had explicitly searched for or chosen to follow; defaulting direct messaging to contacts only; disabling location sharing; and limiting the hours during which notifications could be pushed to minor users. These settings could be changed by the minor user but had to start in the protective configuration.
This provision is design regulation. It does not prohibit any content. It does not require any speech to be removed. It specifies a default interface configuration for a specific user class. Its constitutional status is therefore distinct from content regulation, a distinction that opponents of the bill frequently obscured.
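The design-regulation point is easiest to see in code. The following is a minimal sketch, not the statutory text: every field name and value is hypothetical, since the bill specified protective outcomes rather than an implementation. What it illustrates is that the provision regulates a starting configuration, not content:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class MinorAccountDefaults:
    """Hypothetical illustration of KOSA-style protective defaults.

    All names here are invented for illustration; the bill specified
    the protective outcomes, not any particular implementation.
    """
    # Recommendations limited to sources the minor explicitly chose
    recommendations: str = "followed_and_searched_only"
    # Direct messaging restricted to existing contacts by default
    direct_messages: str = "contacts_only"
    # Location sharing disabled by default
    location_sharing: bool = False
    # Push notifications suppressed overnight by default
    notification_quiet_hours: tuple[time, time] = (time(21, 0), time(7, 0))

def configure_new_account(is_minor: bool) -> MinorAccountDefaults | None:
    # The statute constrains only the STARTING state for known minors;
    # the minor user may later change any of these settings.
    return MinorAccountDefaults() if is_minor else None
```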
Data and Research Access
The bill required platforms to provide researchers access to data necessary to study the effects of platform design on minor users, subject to privacy protections. It also prohibited platforms from using minors' data for behavioral advertising. The Federal Trade Commission was designated as the primary enforcement authority, with state attorneys general given concurrent enforcement authority. The bill did not create a private right of action.
The First Amendment Argument
The constitutional argument against KOSA was that it violated the First Amendment. This argument was made in several forms that require separate analysis, because they have different relationships to the actual constitutional record.
The Content Moderation Argument
The strongest version of the First Amendment argument held that KOSA would pressure platforms to over-remove content to avoid liability — that a duty of care with respect to specified harmful content categories would create an incentive for platforms to err on the side of removal, producing a chilling effect on lawful speech. This is a cognizable First Amendment concern. It is the argument that was most seriously engaged by the bill's supporters, who revised the enumerated harm categories multiple times and added explicit safe harbor language to address it.
The argument has genuine force when applied to the duty of care provision's content-adjacent elements. It has less force when applied to the default settings provisions, which do not involve content at all. The ACLU's primary public communications against KOSA frequently conflated the two, treating the design regulation provisions as if they were content regulation provisions. They are not. A requirement that push notifications default to off during nighttime hours for minor users is not a speech restriction. A requirement that algorithmic recommendations default to content the user explicitly chose is not a prohibition on any content.
The Moody v. NetChoice Argument
In 2024, the Supreme Court decided Moody v. NetChoice and NetChoice v. Paxton, cases concerning Texas and Florida laws that restricted platforms' ability to moderate certain content. The Court's opinion, written by Justice Kagan, held that platforms' content curation activities constitute editorial discretion protected by the First Amendment. The Court remanded the cases for further proceedings but established that platform editorial choices receive First Amendment protection.
Moody was immediately invoked against KOSA by bill opponents. The invocation requires scrutiny. Moody addressed laws that restricted platforms' ability to make content moderation decisions — laws that told platforms they could not remove certain speech. KOSA does not restrict platforms' ability to remove or moderate content. KOSA imposes a duty of care with respect to design practices and requires certain default settings. These are not the same thing. The Supreme Court in Moody explicitly preserved the government's ability to regulate platform design practices that do not implicate editorial discretion. The mapping of Moody onto KOSA was a legal argument, not a settled legal conclusion.
The Prior Restraint Argument
A weaker version of the argument held that KOSA amounted to a prior restraint on speech because it would cause platforms to preemptively restrict content before it could be challenged. Prior restraint doctrine addresses government systems that require advance approval before speech can occur. Platform content moderation decisions, made in response to private liability concerns, are not prior restraints in any doctrinally recognized sense. This argument has not succeeded in any court reviewing analogous state legislation.
The civil liberties concerns raised about KOSA were not invented by industry lobbyists, even if industry lobbyists amplified them strategically. The duty of care provision in the 2022 version was genuinely vague, and vague statutes with significant liability exposure do create documented chilling effects on speech. The ACLU's concerns about LGBTQ+ youth — that platforms, to avoid liability, might remove information about sexuality and gender identity that serves a protective function for vulnerable minors — were substantive concerns, not pretextual ones.
The analytical error was not in raising these concerns. The error was in treating them as unanswerable rather than as problems to be solved through legislative refinement. The 2024 KOSA significantly addressed many of the 2022 bill's genuine First Amendment vulnerabilities. The opposition that persisted after those revisions was increasingly directed at a description of the bill that no longer matched the text.
The ACLU–Industry Coalition
The most politically consequential feature of the KOSA fight was the alignment between civil liberties organizations — primarily the ACLU and the Electronic Frontier Foundation — and the technology industry. This alignment is worth examining carefully because it shaped the political dynamics of the bill and because understanding it is necessary for designing legislation that can succeed.
The ACLU's opposition to KOSA was not strategically orchestrated by the industry. The ACLU has genuine institutional commitments to expansive First Amendment protection, has a historical pattern of opposing internet content regulation dating to the 1990s, and has a significant interest in protecting LGBTQ+ youth's access to information. These are real organizational values, not industry-funded positions. The EFF's concerns about surveillance infrastructure embedded in age verification requirements are also genuine — age verification at scale creates real privacy problems, as the European experience with GDPR's age-gating requirements has demonstrated.
The industry's lobbying operation was a separate and massive undertaking. Meta, Google, TikTok, and Snap collectively employed hundreds of lobbyists and spent tens of millions of dollars opposing KOSA and related legislation. The industry's constitutional arguments closely tracked the ACLU's public positions while being deployed in contexts — private meetings with House members and their staffs, campaign contribution decisions — that civil liberties organizations do not control and in some cases were not aware of.
| Actor | Position on KOSA | Primary Argument | Resources Deployed |
|---|---|---|---|
| Meta / Facebook | Opposed | First Amendment; chilling effect on speech | Estimated $8M+ lobbying spend; 40+ registered lobbyists |
| Google / YouTube | Opposed | Technical infeasibility of age detection; Section 230 | Estimated $10M+ lobbying; targeted House district advertising |
| TikTok / ByteDance | Nominally supportive of "child safety" principles; opposed specific provisions | Operational burden; age verification privacy concerns | Creator-facing advocacy campaigns; lobbyists in 40+ states |
| Snap | Publicly supportive in principle; opposed specific design mandates | Design regulation exceeds Congressional authority | Creator partnerships; letters to Congressional offices |
| ACLU | Opposed | First Amendment; LGBTQ+ youth information access; surveillance infrastructure | Advocacy campaigns; Congressional testimony; public letters |
| Electronic Frontier Foundation | Opposed | Age verification privacy; censorship risk; technical implementation concerns | Public advocacy; Congressional testimony |
The strategic problem for KOSA's supporters was that the industry's First Amendment arguments arrived pre-legitimized by the ACLU's endorsement of the underlying legal theory. A House member uncertain about the bill's constitutionality could cite ACLU opposition without appearing to be carrying water for Silicon Valley. The alignment — whatever its separate origins — functioned in practice as a legitimization mechanism for industry opposition. This is not a conspiracy. It is a structural feature of the legislative environment for platform regulation in the United States, and it is a feature that future legislation must account for.
The result was that House leadership declined to bring any version of the bill to a floor vote. The 118th Congress ended. The 91–3 Senate vote produced no law.
What Died With the Bill
The death of KOSA in the 118th Congress did not merely mean that one particular legislative text did not become law. It meant that a specific set of child protection mechanisms — for which no equivalent exists in current federal law — remain absent from the regulatory landscape. Understanding what was lost requires specificity about what the bill would have done.
The algorithmic recommendation default provision would have required platforms to disable non-chosen algorithmic amplification for minor users. This is the mechanism through which research most consistently identifies harm: not the availability of particular content, but the active promotion of increasingly extreme or emotionally dysregulating content to users who did not seek it. No current federal law addresses algorithmic recommendation architecture for minor users.
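A hedged sketch of what disabling non-chosen algorithmic amplification means operationally may be useful. The data structures and names below are hypothetical; the bill prescribed the default behavior, not this logic:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    source_id: str
    engagement_score: float  # the platform's engagement prediction

def default_feed_for_minor(candidates: list[FeedItem],
                           chosen_sources: set[str]) -> list[FeedItem]:
    """Hypothetical sketch: by default, a minor's feed contains only
    items from sources the minor explicitly followed or searched for.
    No item is removed from the platform; unchosen items are simply
    not pushed into the feed."""
    return [c for c in candidates if c.source_id in chosen_sources]

def default_feed_for_adult(candidates: list[FeedItem]) -> list[FeedItem]:
    # The engagement-maximizing ranking that remains untouched for adults
    return sorted(candidates, key=lambda c: c.engagement_score, reverse=True)
```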
The behavioral advertising prohibition for minors would have removed the financial incentive structure that drives engagement-maximizing design for the minor user population. Platforms monetize minors through behavioral advertising; removing that revenue stream would have changed the economic calculation underlying design choices. No current federal law prohibits behavioral advertising targeting minors on social media.
The research access provision would have created a legal basis for independent researchers to obtain the data necessary to study platform effects — data that platforms have systematically restricted, as documented in the Frances Haugen whistleblower disclosures and subsequent congressional testimony. Research published in peer-reviewed journals on platform effects predominantly relies on self-reported survey data, behavioral experiment data, or data made available through platform partnership programs that the platforms control. No current federal law requires platforms to provide independent research access to their internal behavioral data.
What a Constitutionally Sound Version Requires
The lesson of the KOSA record is not that child protection legislation is unconstitutional. The lesson is that legislation structured to create maximum First Amendment surface area — through broad duty of care standards with significant liability exposure — will be opposed with First Amendment arguments that attract genuine civil liberties support and provide cover for industry opposition. A structurally different approach can achieve the same protective goals with substantially reduced constitutional vulnerability.
Target Design Architecture, Not Content
The strongest constitutionally available mechanism targets design features that have no First Amendment status: recommendation algorithm defaults, notification timing, sleep mode features, engagement measurement metrics, and session duration prompts. None of these are speech. None of them restrict platforms from hosting any content. Regulating them receives intermediate scrutiny at most, not strict scrutiny. The duty of care provision that attracted the most sustained constitutional criticism can be replaced by specific design mandates that achieve the same protective function without creating a general liability standard susceptible to the chilling effect argument.
Structure Age Verification as Privacy Regulation
Age verification was one of the ACLU's central concerns because age verification at scale requires data collection. The EFF's concern that age verification creates surveillance infrastructure is a real operational concern, not a pretextual one. A constitutionally sound version of this legislation structures age verification as privacy regulation: any age verification system used must be anonymous at the point of verification, must not allow the platform to retain verification data, and must use technical means that do not create a linkable identity record. This does not solve the age verification problem, but it addresses the surveillance concern that provided civil liberties cover for industry opposition to age-gating requirements.
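One way such a requirement could be met is a third-party attestation that carries an age bracket and nothing else. The sketch below is hypothetical in every particular; a production system would use asymmetric signatures or zero-knowledge proofs, and an HMAC shared between verifier and platform stands in here only for brevity:

```python
import hashlib
import hmac
import json
import secrets

SHARED_KEY = secrets.token_bytes(32)  # held by verifier and platform

def issue_attestation(over_13: bool, over_17: bool) -> dict:
    """Verifier side: attest to an age bracket and nothing else.
    No name, birthdate, or document identifier is included, and a
    random nonce keeps the token from becoming a stable identifier."""
    claim = {"over_13": over_13, "over_17": over_17,
             "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(attestation: dict) -> bool:
    """Platform side: check the tag and discard the token. Nothing in
    it is retainable as a linkable identity record, which is the
    statutory point."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])
```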
Separate the Harm Categories
The enumerated harm categories in KOSA included categories — such as “content that facilitates” specified harms — that did implicate content decisions. A revised bill can separate these. Design regulations (algorithmic defaults, notification settings, monetization restrictions) can be enacted separately from any provision touching content, ensuring that the content provisions, if challenged, do not bring down the design provisions. Legislative severability is a standard drafting tool; using it strategically to insulate design regulations from content regulation challenges is constitutionally sound legislative architecture.
Build the Research Access Provision as a Standalone
The research access provision attracted the least opposition and has the clearest constitutional foundation. Commercial enterprises that receive Section 230 immunity for their hosting decisions are not constitutionally immune from transparency requirements about those decisions' effects. Research access legislation modeled on the financial sector's data provision requirements to the Consumer Financial Protection Bureau would be constitutionally distinct from the First Amendment questions raised by KOSA's other provisions. It should be pursued as a standalone bill that can achieve passage without being blocked by the design regulation controversy.
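The disclosure-control side of such a provision is also concrete. Here is a minimal sketch under the assumption of a simple minimum-cell-size rule, one standard technique; neither KOSA nor the CFPB model prescribes this particular mechanism, and all names are hypothetical:

```python
from collections import Counter

MIN_CELL_SIZE = 50  # suppress any aggregate covering fewer than 50 users

def aggregate_exposure(records: list[dict], feature: str) -> dict:
    """Hypothetical researcher-facing query: counts of users exposed
    to each value of a design feature, with small cells suppressed so
    no individual user can be singled out from the output."""
    counts = Counter(r[feature] for r in records)
    return {value: n for value, n in counts.items() if n >= MIN_CELL_SIZE}
```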
| Provision Type | Constitutional Status | First Amendment Risk | Recommended Approach |
|---|---|---|---|
| Algorithm default settings | Design regulation; not speech | Low — no content restriction | Specific mandates; no general duty |
| Notification timing restrictions | Design regulation; not speech | Low — time/place/manner analog | Specific nighttime/school-hour defaults |
| Behavioral advertising ban (minors) | Commercial speech regulation | Moderate — Central Hudson analysis | Narrow to under-18 with documented harm nexus |
| Age verification requirement | Conditional access; anonymized | Moderate — anonymization addresses surveillance concern | Technical anonymized verification; no retention |
| General duty of care | Liability standard | High — vagueness + chilling effect | Replace with specific design mandates |
| Research data access | Transparency requirement | Low — no speech restriction | Standalone bill; CFPB model |
| Content harm enumeration | Content-adjacent | High — chilling effect argument strongest here | Narrow to algorithmically amplified content; add safe harbors |
What the Record Demands
The KOSA record is a record of near-success that illuminates the precise structural obstacles to digital child protection legislation in the United States. The 91–3 vote demonstrates that political will exists. The House inaction demonstrates that political will is insufficient when it can be neutralized by a lobbying operation that has been pre-legitimized by civil liberties organizations that share its constitutional conclusions, for different reasons, without sharing its interests.
The record demands a different legislative strategy, not a different legislative goal. The goal — preventing algorithmic systems from causing systematic harm to developing minds for the purpose of monetizing their attention — is both constitutionally available and morally necessary. The strategy must be redesigned to achieve that goal without creating constitutional surface area that can be captured by the arbitrage described in this paper.
Design regulation first. The provisions with the lowest constitutional risk — algorithm default requirements, notification controls, monetization restrictions — should be the primary legislative vehicle. They are the most powerful provisions because they address the mechanism of harm directly. They are also the provisions for which the First Amendment argument has the least traction. Legislation built around design mandates rather than a content liability standard is constitutionally stronger and politically more defensible.
Engage the civil liberties community before introduction. The ACLU's concerns about LGBTQ+ youth information access were addressable. The EFF's concerns about age verification surveillance were addressable. They were not addressed in the 2022 bill, and the organizations' opposition was established before the 2023 revisions could change the political landscape. Future legislation should be drafted in active consultation with these organizations, not in reaction to their opposition.
Separate the research access question. The documentation gap — the absence of independent research access to platform data — is the prerequisite problem for all other regulatory questions. Without it, platforms can always dispute the evidentiary basis for any regulation. Research access legislation is constitutionally clean, politically viable, and strategically foundational. It should not be bundled with provisions that attract constitutional controversy.
Recognize the arbitrage explicitly. The constitutional arguments deployed against KOSA were not wrong about First Amendment doctrine in the abstract. They were wrong in their application to the specific provisions being challenged. The response requires not abandoning First Amendment sensitivity, but being precise about what the First Amendment actually protects — and writing legislation that makes that precision legible to courts and to civil liberties organizations whose support is necessary for the political mathematics to work.
The 91–3 vote was not an accident. It was the product of a decade of evidence accumulation, of parents testifying about their children's deaths, of whistleblowers documenting internal research that platforms suppressed, of the accumulated record that this series has been examining. That political consensus exists. The task is to convert it into law in a form that survives the institutions designed to prevent it from doing so.
The NetChoice Constraint
Texas and Florida enacted laws in 2021 restricting social media platforms from removing or demoting content based on political viewpoint. In July 2024, the Supreme Court vacated both lower court decisions, holding that the courts had not adequately applied First Amendment scrutiny to the laws. The Court’s reasoning established a significant constraint on cognitive sovereignty legislation: platforms’ decisions about which content to amplify, demote, sequence, or recommend are analogous to the editorial discretion of newspapers and other media entities, which the First Amendment protects from government compulsion. A statute that mandates how a platform designs its recommendation algorithm — whether it must show content chronologically, or must not use engagement-maximizing ranking, or must display content the user has not requested — must satisfy First Amendment scrutiny under this framework.
The NetChoice cases do not prohibit cognitive sovereignty legislation. They establish the specific constitutional constraint that any effective legislation must be designed around: the distinction between regulating data collection and processing practices (on which First Amendment scrutiny is lower) versus regulating the editorial decisions platforms make about content presentation (on which scrutiny is higher). The Legal Architecture series’ framework — which focuses on consent, design requirements, and enforcement mechanisms rather than mandated content curation — is designed with this constraint in mind. The NetChoice record establishes why.
Sources and References
- Kids Online Safety Act, S. 1409, 118th Congress (2023–2024). Senate vote July 30, 2024: 91–3.
- Kids Online Safety and Privacy Act, H.R. 7891, 118th Congress (2024). Passed House Energy and Commerce Committee; no floor vote.
- Moody v. NetChoice, LLC, 603 U.S. ___ (2024), consolidated with NetChoice, LLC v. Paxton. Platform content curation held to be First Amendment-protected editorial discretion; lower court decisions on Texas HB 20 and Florida SB 7072 vacated and remanded.
- ACLU. "KOSA Threatens Free Speech Online." Letters to Senate and House, 2023–2024.
- Electronic Frontier Foundation. "Analysis of the Kids Online Safety Act." Multiple versions, 2022–2024.
- Senate Judiciary Committee. Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis." January 31, 2024. Testimony of Meta CEO Mark Zuckerberg and other platform executives.
- Haugen, Frances. Whistleblower disclosures, October 2021. Internal Facebook research on teen mental health; presented to Senate Commerce Subcommittee.
- American Academy of Pediatrics. Letter in support of KOSA. July 2024.
- American Psychological Association. "Resolution on Online Safety for Adolescents." 2023.
- Brennan Center for Justice. "First Amendment and Social Media Platform Regulation." 2024.
- Gonzalez v. Google LLC, 598 U.S. 617 (2023). Section 230 and algorithmic recommendation.
- 47 U.S.C. § 230. Communications Decency Act, Section 230.
- Children's Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501–6506.
- Woodrow Hartzog and Evan Selinger. "The Problem with Consent in Digital Contexts." Harvard Law Review Forum, 2019.
- OpenSecrets. "Big Tech Lobbying Expenditures, 118th Congress." Compiled from lobbyingdisclosure.house.gov data, 2024.
- Surgeon General of the United States. "Social Media and Youth Mental Health: The U.S. Surgeon General's Advisory." 2023.
- Bipartisan Policy Center. "What Congress Can Do to Make Social Media Safer for Kids." 2024.
The Institute for Cognitive Sovereignty. (2026). The Kids Online Safety Act Record [ICS-2026-LA-003]. https://cognitivesovereignty.institute/legal-architecture/the-kids-online-safety-act-record