
The AI Governance Capture

The companies building the most powerful technology in history are writing the rules that govern it.

6 papers · Series GC · Saga II: The Collapse · Published 2026

$1.76M · OpenAI lobbying spend, 2024
577% · Increase in OpenAI lobbying, 2023–2024
0 · Comprehensive US federal AI laws, 2026
48% · Industry share of NTIA AI comments
3 years · EU AI Act: proposal to enactment
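The two OpenAI figures are mutually consistent. Treating 577% as year-over-year growth, the 2024 spend implies a 2023 baseline of roughly $0.26M; the baseline is inferred from the stated figures, not reported above:

\[
\text{2023 spend} \approx \frac{\$1.76\text{M}}{1 + 5.77} \approx \$0.26\text{M},
\qquad
\frac{\$1.76\text{M} - \$0.26\text{M}}{\$0.26\text{M}} \approx 5.77 = 577\%
\]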
Series Thesis

The regulatory capture of AI governance by AI companies follows the same structural pattern documented across tobacco, pharma, and financial services in Saga VII — but in the highest-stakes domain. The companies with the most to lose from regulation are the primary authors of the regulatory frameworks. This is not corruption. It is the documented structural product of expertise asymmetry, revolving door employment, and industry-funded policy research.

The United States has no comprehensive federal AI legislation as of March 2026. The EU AI Act — proposed in April 2021, not enacted until August 2024 — is the first major regulatory framework, and its open-source exemptions were shaped by industry lobbying. The White House's primary governance mechanism was voluntary commitments from the companies being governed — non-binding, self-reported, with no enforcement mechanism. The NIST AI Safety Institute, the closest thing to an independent technical evaluator, was gutted in early 2025. At every level, the structural pattern is identical: the entities that should be regulated are the primary source of regulatory expertise, regulatory personnel, and regulatory proposals.

The Papers
01
The Regulatory Vacuum
ICS-2026-GC-001 · The Governance Lag
No comprehensive federal AI legislation in the United States as of 2026. The EU AI Act took three years from proposal to enactment. Congressional hearings featured the CEOs of the companies to be regulated testifying about how they should be regulated. The gap between capability advancement and regulatory response is not accidental — it is structural, and this paper documents its architecture.
02
The Industry Chair
ICS-2026-GC-002 · The Expertise Capture
The NIST AI Safety Institute appointed a former OpenAI researcher as head of AI safety. The AI Safety Summit attendee lists read like industry conferences. NTIA public comments on AI accountability were 48% industry submissions. The expertise asymmetry is structural: only the companies building AI have the technical knowledge to evaluate it, and every governance body reflects this dependency.
03
The Safety Theater
ICS-2026-GC-003 · The Voluntary Commitment
In July 2023, the White House secured voluntary AI commitments from seven companies — non-binding, self-reported, with no enforcement mechanism. California's SB 1047 AI safety bill was vetoed after industry opposition. The pattern: voluntary pledges substitute for binding regulation, and binding regulation is blocked when attempted. The companies being governed define the terms of their own governance.
04
The Open Source Weapon
ICS-2026-GC-004 · The Openness Inversion
Meta's Llama models do not meet the Open Source Initiative's definition of open source. Mistral lobbied against EU AI Act provisions using open-source rhetoric. "Open source" is deployed simultaneously as a genuine safety mechanism, a competitive strategy, and a regulatory shield — and the ambiguity is not accidental. This paper documents the strategic deployment of openness language in AI governance.
05
The Governance Gap
ICS-2026-GC-005 · The Structural Asymmetry
Expertise asymmetry plus revolving door plus industry-funded research plus voluntary frameworks equals structural capture — the identical pattern documented for tobacco, pharma, and financial services in Saga VII. But at higher stakes: AI capabilities may exceed human oversight capacity before governance catches up. This paper names the Structural Asymmetry and draws the cross-domain parallel that completes the series argument.
06
The Recursive Blind Spot
ICS-2026-GC-006 · The Recursive Blind Spot
When 100% of Claude Code is written by Claude Code, the humans responsible for oversight lack the generative understanding required to catch failure modes the system introduced. A twenty-day-old Bun bug ships source maps in production. Anthropic calls it human error. This paper names the structural condition: the gap is not between what was written and what was understood, but between what was generated and what any human ever fully authored.
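The source-map incident in GC-006 invites a concrete illustration. The sketch below is a hypothetical CI guard of the kind the paper suggests no human fully authored: it walks a build output directory and fails the release if any .map file is about to ship. The script name and the dist default are illustrative assumptions, not Anthropic's or Bun's actual tooling.

```ts
// check-no-sourcemaps.ts — hypothetical CI guard (illustrative, not real tooling).
// Fails the build if any .map file made it into the production bundle directory.
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Output directory to scan; "dist" is an assumed default.
const OUT_DIR = process.argv[2] ?? "dist";

// Recursively collect every *.map file under dir.
function findMapFiles(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      hits.push(...findMapFiles(full));
    } else if (entry.endsWith(".map")) {
      hits.push(full);
    }
  }
  return hits;
}

const leaked = findMapFiles(OUT_DIR);
if (leaked.length > 0) {
  console.error(`source maps found in production output:\n${leaked.join("\n")}`);
  process.exit(1); // non-zero exit fails the pipeline
}
console.log(`ok: no .map files under ${OUT_DIR}`);
```

Run as the last step of a release pipeline, e.g. `bun check-no-sourcemaps.ts dist` (Bun executes TypeScript directly).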
Series Named Condition
The Structural Asymmetry

The condition in which the entities that should be subject to regulation are the primary source of the technical expertise required to design, implement, and evaluate that regulation. In AI governance, this manifests as a four-element structure: expertise asymmetry (only AI companies have the technical knowledge to assess AI risk), revolving door employment (personnel move between AI companies and government AI roles), industry-funded research (the policy research that informs regulation is funded by the entities to be regulated), and voluntary frameworks (the primary governance mechanisms are non-binding commitments designed and reported by the governed entities). The pattern is structurally identical to regulatory capture in tobacco, pharmaceutical, and financial services regulation — but operates in a domain where the capability trajectory may outpace the governance trajectory permanently.

Series Navigation
← Saga II: The Collapse
GC-001: The Regulatory Vacuum →
Related: Autonomous Weapons Record →
Related: Saga VII — The Archive →