Standard Regulatory Capture
In 1971, George Stigler published a nineteen-page paper that restructured political economy. “The Theory of Economic Regulation” advanced a single thesis: regulation is not imposed on industries by public-spirited legislators. It is acquired by industries as a service. The regulated entity shapes the regulator to serve its interests. Stigler’s framework was specific: the mechanism was political influence, exercised through lobbying, staffing, and information asymmetry. The regulator was a human institution. The regulated entity was a human industry. The product itself was inert.
Carpenter and Moss, writing four decades later, catalogued the correction mechanisms. Judicial review. Notice-and-comment rulemaking. OIRA oversight. Congressional intervention. Their central claim: capture is “preventable and manageable.” The framework assumed a structural precondition so basic that no one examined it: the regulated product does not participate in constructing the regulatory framework.
The entire correction apparatus — from judicial review to congressional oversight — was designed for a world in which the product being regulated cannot draft the regulations.
Sam Peltzman extended Stigler’s supply-side model. Lancieri, Edelson, and Bechtold documented AI-specific jurisdictional arbitrage. Metcalf warned of “enormous potential for regulatory capture” in AI governance. Each analysis preserved Stigler’s foundational assumption: the capture agent is human. The industry influences, staffs, or funds the regulator. The product is governed. It does not govern.
This paper documents what happens when that assumption fails.
What Makes AI Different
GC-005 establishes a structural asymmetry: AI capability trajectory permanently outpaces governance trajectory. The EU AI Act, “designed for the 2021 landscape,” was substantially outdated before enforcement began. The gap is not a lag to be closed but a structural condition — what GC-005 names The Structural Asymmetry. In every prior regulated industry, governance eventually caught up: tobacco science became legible; financial instruments became auditable. AI capabilities accelerate faster than the governance frameworks designed to assess them.
But speed alone does not produce recursive capture. A fast-moving industry that the regulator cannot keep pace with produces an oversight gap — a familiar failure mode that Stigler’s framework can accommodate. The structural novelty lies elsewhere.
GC-006 identified it. In December 2025, an Anthropic employee posted on X: “100% of Claude Code contributions were written by Claude Code.” The post was not a warning. It was a celebration of productivity. GC-006 traces the implication: when the AI product writes its own code, the humans nominally overseeing it lack the generative understanding required to identify failure modes the system itself introduced. GC-006 names this The Recursive Blind Spot.
GC-005 establishes speed: the capability trajectory outpaces governance. GC-006 establishes participation: the product constructs the infrastructure on which its own oversight depends. Neither paper makes the synthesis alone. CV-024’s contribution is the integration: when these two conditions operate simultaneously, the governed entity does not merely evade governance. It builds the governor.
This is the structural break with all prior regulatory theory. In no previous regulated industry did the product participate in constructing the regulatory framework that governs it. Tobacco did not draft FDA regulations. Atorvastatin did not design the clinical trial that evaluated it. Collateralized debt obligations did not write the Basel accords. The AI product can — and increasingly does — generate the text that becomes regulatory analysis, draft the frameworks that become governance standards, and evaluate its own safety in the absence of independent capacity to do otherwise.
The Four-Stage Loop
The recursive capture loop operates through four convergent stages. Each is documented independently across the ICS research series. No single stage produces the loop alone. The convergence is the argument.
Stage 1 (Expertise Capture): The industry becomes the regulator’s primary source of technical knowledge, staffing, and analytical capacity. Documented in GC-002 and PE-002.
Stage 2 (Framework Capture): The industry shapes governance standards through lobbying, campaign finance, and voluntary commitment frameworks that substitute for binding regulation. Documented in GC-003, GC-004, PE-001, and PE-003.
Stage 3 (Tool Capture): The regulated product is adopted as the governance tool itself — used to draft regulations, evaluate compliance, and assess its own safety. Documented in GC-006 and external evidence.
Stage 4 (Accountability Capture): Existing governance frameworks lack the transparency, participation, and independence structures needed to detect or correct the loop. Documented in HC-011, HC-012, HC-015, HC-019, and HC-025.
The stages are not sequential. They operate simultaneously, each reinforcing the others. Expertise capture (Stage 1) creates the conditions for framework capture (Stage 2): when the regulator depends on industry for technical knowledge, the frameworks produced reflect industry preferences. Framework capture creates the conditions for tool capture (Stage 3): voluntary, industry-friendly governance creates space for the product itself to be adopted as the compliance mechanism. Tool capture makes accountability capture (Stage 4) structurally inevitable: when the tool being governed is also the tool doing the governing, the oversight frameworks it operates within cannot detect the recursion they are embedded in.
The loop closes when the output of Stage 4 feeds back into Stage 1. The governance framework, constructed with the participation of the governed product, produces the regulatory environment in which the next generation of expertise, standards, tools, and oversight is built. Each cycle widens the gap between nominal governance and structural reality.
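The dynamic can be made explicit with a toy model. In the sketch below (for exposition only: the per-stage amplification factors are illustrative assumptions, not quantities measured in CV-024 or its sources), the gap between nominal governance and structural reality is a state variable; each stage multiplies the gap it inherits, and any combined loop gain above one widens the gap on every cycle.

```python
# Toy model of the four-stage recursive capture loop.
# Illustrative only: the stage couplings are assumed values chosen to
# show the dynamic, not measurements from CV-024 or the ICS series.

def governance_gap_trajectory(stage_gains, cycles, initial_gap=1.0):
    """Iterate the loop: each cycle's governance output becomes the next
    cycle's input, scaled by the product of the four stage couplings."""
    loop_gain = 1.0
    for gain in stage_gains.values():
        loop_gain *= gain
    gaps = [initial_gap]
    for _ in range(cycles):
        gaps.append(gaps[-1] * loop_gain)  # gap widens whenever loop_gain > 1
    return gaps

# Hypothetical couplings: a value above 1 means the stage amplifies
# the gap it inherits.
stages = {
    "expertise_capture": 1.10,       # Stage 1: regulator knowledge sourced from industry
    "framework_capture": 1.10,       # Stage 2: standards authored under industry influence
    "tool_capture": 1.15,            # Stage 3: the product drafts and evaluates governance
    "accountability_capture": 1.05,  # Stage 4: oversight cannot detect the recursion
}

for cycle, gap in enumerate(governance_gap_trajectory(stages, cycles=5)):
    print(f"cycle {cycle}: gap = {gap:.2f}x nominal governance")
```

The point is structural rather than numerical: so long as all four stages remain coupled and none is interrupted, the trajectory diverges whatever the particular coefficients. Breaking the loop means severing a stage, not merely dampening it.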
Stage 1: Expertise Capture
GC-002 documented the structural mechanism: AI companies staff the regulatory bodies designed to oversee them. Paul Christiano moved from OpenAI to lead AI safety at NIST’s AISI, an appointment over which career employees considered resigning. Geoffrey Irving and Chris Summerfield, both with prior OpenAI and Google DeepMind appointments, became chief scientist and research director, respectively, at the UK’s AISI. Simon Möller moved from Google to the EU AI Office. Friederike Grosse-Holz moved from the UK AISI to the EU AI Office — cross-pollinating one industry-populated regulator with another.
PE-002 provides the aggregate data: 186 former government officials employed by technology companies between 2010 and 2022. Eighteen former FTC senior staff at Google alone. The 2012 FTC-Google case distilled the consequence: staff recommended charges; the commissioners voted 5-0 not to pursue them. The EU later fined Google over €8 billion on substantially the same facts.
The UK AISI signed research partnerships with Google DeepMind and MOUs with Anthropic, OpenAI, and Cohere. The EU AI Office made its first technical hires only in November 2024 — nine months after launch — drawing primarily from industry as the available talent pipeline. CAISI (the U.S. equivalent) operated on roughly one-tenth the UK AISI’s budget (Federation of American Scientists). The regulatory apparatus depends on the regulated industry for the expertise required to regulate it.
The expertise asymmetry is not corruption. It is structural necessity. The technical knowledge required to evaluate frontier AI systems exists almost exclusively within the companies that build them. When NIST, the UK AISI, and the EU AI Office hire from industry, they are not making a corrupt choice. They are making the only available choice. The result is the same: the regulator’s analytical capacity is constructed by the regulated entity.
Stage 2: Framework Capture
GC-003 documented the voluntary commitment architecture: in July 2023, the White House announced “voluntary commitments” from seven AI companies, a roster that grew to fifteen by that September. GC-003 traces the structural effect: voluntary frameworks pre-empt binding regulation by creating the appearance of governance without its substance. California’s SB 1047, the most significant attempted binding AI safety legislation, was vetoed after industry opposition — with voluntary commitments cited as evidence that binding requirements were unnecessary.
GC-004 documented a complementary mechanism: “open-washing,” in which companies release model weights while retaining control of training data, compute infrastructure, and deployment architecture. The appearance of openness substitutes for actual transparency in governance deliberation.
PE-001 provides the financial architecture. Over $70 million in annual lobbying from Google, Meta, Amazon, Apple, and Microsoft. More than 400 registered lobbyists. Trade associations — CCIA, NetChoice, Internet Association, and the Chamber of Progress — coordinate industry positions across legislative venues. The result: ADPPA (comprehensive data privacy), COPPA 2.0/KOSA (children’s online safety), AICOA (antitrust), and Section 230 reform were all blocked or substantially weakened.
PE-003 documents the campaign finance dependency: over $50 million in tech PAC and employee contributions per election cycle. Ninety-six percent of Congress received technology sector contributions in 2022. Senator Schumer received approximately $2.9 million from technology interests and declined to schedule AICOA for a floor vote — despite committee passage at 16-6 and over $100 million in industry lobbying against it.
The framework is not captured after it is built. It is captured in the building. The industry does not merely influence the regulatory standard. It authors the conditions under which the standard is written.
Stage 3: Tool Capture
This is where AI governance breaks from all prior regulatory history. In Stages 1 and 2, the mechanisms are familiar — Stigler would recognize them. Expertise capture and framework capture are intensified versions of patterns documented across tobacco, pharmaceuticals, and financial services. Stage 3 has no precedent.
In January 2026, ProPublica reported that the U.S. Department of Transportation demonstrated Google’s Gemini to over 100 employees with the stated goal of drafting complete federal regulations in 30 days. The general counsel described DOT as the “first agency fully enabled to use AI to draft rules.” The system would handle “80–90% of the work.” It had already been used for an unpublished FAA rule.
In October 2024, the European Parliament assessed AI systems not against GDPR or the EU AI Act but against “compliance to the AI Constitutional approach” — Anthropic’s Constitutional AI framework, the company’s own alignment methodology, substituted for the legal compliance standard the Parliament was charged with enforcing. The product’s own safety framework became the regulatory benchmark (ICCL 2024; Yew & Judge, EAAMO 2025).
The OECD’s 2025 report, Governing with Artificial Intelligence, documented over 200 real-world cases of government AI deployment across eleven core functions, including regulatory drafting. The pattern is not hypothetical. It is operational.
Meanwhile, the external evaluation infrastructure that might detect the recursion does not exist in practice. Stein-Perlman (AI Lab Watch, May 2024) documented that Anthropic provided external evaluator access for Claude 2 in 2023 only, OpenAI for GPT-4 in 2023 only, and DeepMind only “shallow access.” The finding: “No company was ever forced as the result of external evaluations, and there never was a model blocked, postponed or constrained before deployment.”
GC-006 names the structural pattern that emerges: The Absolution Architecture — the rhetorical pattern of attributing outcomes to human error in systems where human authorship has been structurally reduced. When the Bun runtime, an Anthropic-owned project, served source maps in production for twenty days while the bug sat publicly filed, the response was “human error. Not a security breach.” GC-006 observes: the language of human responsibility is maintained after the structural conditions for meaningful human responsibility have been diminished.
The Shadow Bias Record provides an additional dimension that CV-024 extrapolates to governance contexts. SB-003 documents what it terms “corporate capture” in GPT — the model’s structural inability to objectively analyze Microsoft, Azure, and OpenAI commercial interests, rated “highest” among documented shadow biases. SB-004 documents “search-rank epistemics” in Gemini — PageRank as truth proxy — also rated “highest.” These are structural and theoretical frameworks documenting institutional formation biases, not empirical measurements of model behavior. The governance relevance is CV-024’s own argument: when AI systems carrying these documented institutional biases are deployed as governance tools, the biases become embedded in the governance output. The systems do not analyze their creators’ commercial interests objectively because they were not built to.
Stage 4: Accountability Capture
The final stage locks the loop. Existing governance frameworks lack the structural features necessary to detect, disclose, or correct recursive capture.
HC-011 documents The Black Box Condition: the opacity of AI systems is not a technical limitation but a structural feature that serves deployer interests. The IP and national security defense that industry advances applies only at the deepest level of model architecture — not to the governance-relevant levels of training data composition, deployment scope, or institutional bias patterns that transparency would require.
HC-012 documents The Consent Deficit: zero high-stakes AI deployments give affected populations formal governance access equivalent to that of deployers. The populations most affected by AI governance decisions — children, workers, communities of color, developing nations — have no structural mechanism to participate in the governance frameworks being built in their name.
HC-015 documents The Governance Facade: responsible AI frameworks satisfy form but not function. Google’s Advanced Technology External Advisory Council dissolved in nine days. Raji et al. (2020) found no measurable difference in deployment behavior between companies with and without published AI principles.
HC-019 documents The Responsibility Vacuum: accountability diffuses across the developer-deployer-operator-user chain until no structurally available actor can be held responsible. The developer says: “We built the model. We did not deploy it in this context.” The deployer says: “We followed the developer’s specifications.” PE-004 documents the legal architecture that makes this diffusion permanent: Section 230(c)(1) extended to algorithmic amplification, with Gonzalez v. Google (2023) leaving algorithm immunity unresolved.
HC-025 documents The Stakeholder Asymmetry: three categories dominate AI governance — developers, capital providers, and regulators staffed by former industry employees. Over $957 million in AI lobbying in 2023 alone (OpenSecrets). Workers, communities, children, and developing nations have no equivalent structural representation.
When the governed entity builds the governor, the resulting governance framework cannot detect the recursion it is embedded in — because the recursion is a feature of the framework’s construction, not an external threat to its operation.
The Cross-Domain Record
The structural novelty of recursive capture becomes visible when tested against the three best-documented prior cases of regulatory capture.
| Industry | Product | Participates in Framework? | Mechanism |
|---|---|---|---|
| Tobacco | Cigarettes | No | TIRC was a human front organization producing human-authored doubt. The cigarette is inert. |
| Pharma | Drug molecule | Partially — human-mediated | Industry funds and designs clinical trials, biasing evaluation. But the molecule has no agency in the process — Pfizer designs the protocol; atorvastatin does not. |
| Finance | Securities / derivatives | Partially — indirect | Black-Scholes shaped the markets it described (MacKenzie 2006). But the model did not draft SEC regulations. The performativity was indirect. |
| AI | LLM | Yes — structurally novel | Generates text appearing to be regulatory analysis. Used by regulators as drafting tool (DOT/Gemini). Own safety frameworks adopted as regulatory standards (EU Parliament/Claude). Participates in own evaluation via AI-assisted red-teaming. |
Stigler’s capture theory assumed the regulated product was inert. The correction mechanisms Carpenter and Moss catalogued — judicial review, notice-and-comment, congressional oversight — assumed human authorship of the regulatory text being reviewed. When the product generates the regulatory analysis, notice-and-comment reviews text produced by the entity being regulated. When the product evaluates its own safety, external evaluation reviews the product’s assessment of itself.
Donald MacKenzie’s performativity thesis provides the closest existing analogy. The Black-Scholes options pricing formula did not merely describe financial markets; it reshaped them. But the formula did not draft SEC regulations. It did not staff the CFTC. It did not evaluate its own systemic risk. The performativity was real but indirect — mediated entirely through human decision-makers who chose to adopt the model. An LLM used to draft federal regulations operates without that mediation layer. The text it produces enters the governance process directly.
Yew and Judge (EAAMO 2025) come closest to the mechanism documented here. Their concept of “anti-regulatory AI” — ostensibly protective technologies that shape the terms of regulatory oversight — covers the EU Parliament’s substitution of Constitutional AI for GDPR compliance evaluation, a case their paper documents. They do not use the term “recursive capture” or describe the four-stage loop. The term and the loop model are CV-024’s original contributions, building on Yew and Judge’s evidence and extending it with the DOT/Gemini case, the revolving door data, and the structural accountability deficit.
Breaking the Loop
PE-005 defines five structural requirements for regulatory independence: (1) independent technical expertise, (2) industry-independent funding, (3) protected leadership tenure, (4) revolving door restrictions with a five-year cooling-off period, and (5) transparency and auditability requirements. As of April 2026, the United States meets zero of five.
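Stated as a checklist, the rubric is mechanical to apply. The sketch below is a minimal restatement; the field names and scoring helper paraphrase PE-005’s five requirements for illustration and are not PE-005’s own instrument.

```python
# PE-005's five independence requirements restated as a checklist audit.
# The field names paraphrase the requirements; the zero-of-five scoring
# for the United States reflects the paper's claim as of April 2026.
from dataclasses import dataclass, fields

@dataclass
class IndependenceAudit:
    independent_technical_expertise: bool
    industry_independent_funding: bool
    protected_leadership_tenure: bool
    five_year_cooling_off_period: bool
    transparency_and_auditability: bool

    def score(self) -> str:
        met = sum(getattr(self, f.name) for f in fields(self))
        return f"{met} of {len(fields(self))} requirements met"

# The United States, per PE-005, as of April 2026:
us_2026 = IndependenceAudit(False, False, False, False, False)
print(us_2026.score())  # -> "0 of 5 requirements met"
```

The value of the restatement is that each requirement is independently checkable: a jurisdiction cannot satisfy the rubric in aggregate while failing it item by item.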
The Federal Reserve demonstrates that independent regulatory capacity is structurally achievable. Self-funded through operations rather than congressional appropriation, staffed with over 400 PhD economists, the Fed built its analytical capacity as a permanent structural feature — not borrowed from the banking industry it regulates. The NRC was created in 1974 specifically by dissolving the Atomic Energy Commission, whose dual promotional-regulatory mandate made it structurally captured. The distinction: nuclear reactors cannot describe themselves to their regulators. AI systems can.
HC-028 proposes a normative framework: eight domain-specific “sovereignty floors” — proposed non-negotiable lower bounds below which AI systems must not reduce human agency in education, healthcare, law, governance, finance, construction, science, and care. HC-028 frames these explicitly as proposed standards requiring “domain expert review and empirical validation before regulatory application.” They are not settled benchmarks. They are a starting point for governance that the recursive capture loop has not yet closed around.
Three structural interventions follow from this record:
Independent technical capacity — an AI regulatory body with its own modeling, evaluation, and audit infrastructure not dependent on the companies it oversees (the Fed model applied to AI).
Funding independence, protected tenure, mandatory cooling-off periods, and campaign finance reform that breaks the structural dependency PE-001 through PE-003 document.
A prohibition on using the regulated product as the regulatory tool — the one structural safeguard that no existing governance framework contemplates.
The recursive capture loop is not inevitable. But it cannot be broken by the governance mechanisms currently deployed, because those mechanisms were designed for a world in which the product being governed does not participate in building the governance framework. Every existing correction mechanism — judicial review, legislative oversight, public comment, independent evaluation — assumes human authorship at the point of regulatory text generation. When that assumption fails, the correction mechanisms operate on text produced by the entity they are correcting. The loop persists.
Breaking it requires building something that does not yet exist: regulatory capacity that is structurally independent of the industry it regulates, technically capable of evaluating the systems it oversees, and explicitly prohibited from using those systems as governance tools. The question is not whether this can be built. The question is whether it will be built before the loop becomes self-sustaining — before the governance frameworks produced within the loop become the only frameworks available.
Recursive capture: the structural condition produced when a regulated entity does not merely influence, staff, or fund the regulatory body that oversees it but constructs the regulator’s capacity to regulate — through expertise dependence, framework authorship, tool adoption, and accountability architecture. It is distinct from standard regulatory capture (Stigler 1971), in which the capture agent is human and the product is inert. In recursive capture, the product participates in building the governance framework that governs it. Four convergent mechanisms — expertise capture, framework capture, tool capture, and accountability capture — form a closed loop: the governance output of each cycle becomes the governance input for the next. The loop is self-reinforcing: each iteration widens the gap between nominal governance and structural reality. Standard correction mechanisms (judicial review, notice-and-comment, congressional oversight) cannot interrupt the loop because they were designed for systems in which the regulated product does not generate the regulatory text under review. The recursive capture loop produces governance frameworks that permit the conditions that produced them. The governed has built the governor.
References
Regulatory Capture Theory
George J. Stigler, “The Theory of Economic Regulation,” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3–21.
Sam Peltzman, “Toward a More General Theory of Regulation,” Journal of Law and Economics 19, no. 2 (1976): 211–240.
Daniel Carpenter and David A. Moss, eds., Preventing Regulatory Capture: Special Interest Influence and How to Limit It (Cambridge: Cambridge UP, 2014).
Filippo Lancieri, Laura Edelson, and Stefan Bechtold, “AI Regulation: Competition, Arbitrage & Regulatory Capture,” Georgetown Law / SSRN / Theoretical Inquiries in Law (December 2024).
Jacob Metcalf, “AI Safety and Regulatory Capture,” AI & Society (Springer, August 2025).
Recursive Governance & Performativity
Gunther Teubner, “Substantive and Reflexive Elements in Modern Law,” Law and Society Review 17, no. 2 (1983): 239–286.
Gunther Teubner, Law as an Autopoietic System (Oxford: Blackwell, 1993).
Niklas Luhmann, Law as a Social System (Oxford: Oxford UP, 2004).
Donald MacKenzie, An Engine, Not a Camera: How Financial Models Shape Markets (Cambridge, MA: MIT Press, 2006).
Kak Yew and Lindsay Judge, “Anti-Regulatory AI,” arXiv:2509.22872 / EAAMO 2025 (November 2025).
AI in Governance — Documented Instances
Joe Coburn, “Trump DOT Plans to Use Google Gemini AI to Write Regulations,” ProPublica (January 2026).
ICCL, “How Not to Deploy Generative AI: The Story of the European Parliament” (October 2024).
OECD, Governing with Artificial Intelligence (September 2025).
Zach Stein-Perlman, “AI Companies Aren’t Really Using External Evaluators,” AI Lab Watch (May 2024).
Political Economy & Lobbying
OpenSecrets, AI lobbying and campaign finance data (2022–2024).
Peter Conti-Brown, The Power and Independence of the Federal Reserve (Princeton: Princeton UP, 2016).
Michael Gabay, “The Prescription Drug User Fee Act (PDUFA),” P&T 43, no. 2 (2018).
Third Way, “NRC Capacity and Leadership Under the Trump Administration” (2025).
Federation of American Scientists, CAISI budget analysis (2024).
AI Governance Frameworks
Deborah Raji et al., “Closing the AI Accountability Gap,” FAT* ’20 (2020).
Noa Nabeshima and Zach Stein-Perlman, “Evaluator Dependence in AI Governance,” AI Lab Watch (2024).
NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), AI 100-1 (January 2023).
EU Artificial Intelligence Act, Regulation (EU) 2024/1689 (2024).
ICS Cross-References
GC-005: The Governance Gap — The Structural Asymmetry.
GC-006: The Recursive Blind Spot — The Recursive Blind Spot / The Absolution Architecture.
PE-001: The Lobbying Architecture — The Policy Firewall.
PE-002: The Revolving Door Record — The Personnel Capture.
PE-005: What Political Independence Would Require — The Structural Independence Conditions.
HC-011: Transparency: The Legibility Standard — The Black Box Condition.
HC-012: Participation: The Governance Requirement — The Consent Deficit.
HC-015: The Compliance Theater Record — The Governance Facade.
HC-019: The Accountability Vanishing Point — The Responsibility Vacuum.
HC-025: The Governance Gap — The Stakeholder Asymmetry.
HC-028: The Human Anchor Principle — The Sovereignty Floor.