The Pattern
Regulatory capture is not a theory. It is a documented institutional phenomenon with a substantial empirical literature. The mechanism was described by economist George Stigler in 1971: regulatory agencies, over time, tend to serve the interests of the industries they regulate rather than the public interest they were created to protect. Capture operates not through corruption — though corruption can accelerate it — but through structural forces: expertise and information asymmetries, revolving door employment, industry-funded research, and the concentrated benefits of regulatory outcomes to industry set against their diffuse costs to the public.
The four papers preceding this one have documented each of these structural forces in AI governance. What remains is to demonstrate that they constitute the same pattern documented in other regulatory domains — and to identify the specific feature that makes the AI instance of the pattern more consequential than its predecessors.
The Tobacco Parallel
The tobacco industry's regulatory capture is the most extensively documented case in the public health literature. The pattern operated over decades: the industry funded research that produced findings favorable to continued tobacco sales, placed industry-friendly scientists on advisory panels, hired former regulators as lobbyists, and maintained a public posture of cooperation with regulatory processes while working systematically to delay, weaken, and block binding regulation.
The tobacco timeline is instructive. The Surgeon General's 1964 report established the causal link between smoking and lung cancer. The Federal Cigarette Labeling and Advertising Act (1965) required warning labels. But comprehensive regulation — the Family Smoking Prevention and Tobacco Control Act, which gave the FDA authority to regulate tobacco products — was not enacted until 2009, forty-five years after the scientific consensus was established. During those forty-five years, the tobacco industry deployed expertise asymmetry, revolving door employment, and industry-funded research to maintain a regulatory environment that permitted the sale of a product known to kill its users.
A Stanford Law School study found that 57% of departing FDA employees took positions in the biopharmaceutical industry. The FDA's Center for Tobacco Products has experienced the same pattern: Matt Holman, director of the Office of Science at CTP, and Keagan Lenihan, associate commissioner for external affairs, both moved to Philip Morris. The revolving door is not a metaphor. It is a documented employment pattern with specific names and specific positions.
The parallel to AI governance is structural, not analogical. The same four elements are present: expertise asymmetry (only tobacco companies had the internal research on health effects; only AI companies have the internal data on model capabilities), revolving door (FDA-to-industry personnel movement; NIST-to-industry and industry-to-NIST personnel movement), industry-funded research (tobacco industry funding of medical research; AI company funding of AI safety and policy research), and voluntary frameworks (industry self-regulation preceding binding legislation in both domains).
The Pharmaceutical Parallel
The pharmaceutical industry's relationship with the FDA illustrates a more subtle form of capture — one in which the regulator's dependence on industry user fees creates structural alignment between the regulator's institutional interests and the industry's commercial interests. The Prescription Drug User Fee Act (PDUFA), first enacted in 1992, allowed the FDA to collect fees from pharmaceutical companies to fund the drug approval process. The fees now constitute a substantial portion of the FDA's operating budget for drug review.
The user fee structure creates a specific incentive: the FDA's institutional capacity depends on the continued operation and profitability of the industry it regulates. This is not corruption. It is institutional architecture — a funding structure that aligns the regulator's survival with the regulated industry's prosperity. The result is a regulatory relationship that is formally adversarial and structurally cooperative.
The AI parallel is visible in the NIST AI Safety Institute's pre-release testing agreements with OpenAI and Anthropic. These agreements gave AISI access to frontier models — but only at the companies' discretion. The Institute's capacity to evaluate the most consequential AI systems depended on the voluntary cooperation of the companies that built them. When the Institute was gutted in 2025, the companies lost nothing. The public lost its only institutional mechanism for independent frontier model evaluation.
The Financial Services Parallel
The Dodd-Frank Wall Street Reform and Consumer Protection Act (2010) provides the most detailed record of industry lobbying against regulatory reform. The act was the legislative response to the 2008 financial crisis — the most severe economic disruption since the Great Depression. It was enacted two years after the crisis, which is fast by legislative standards. And it was immediately subjected to the most intensive lobbying campaign in the history of financial regulation.
During the summer of 2010, banks spent $27.3 million over three months to influence Dodd-Frank's provisions. A total of 2,961 organizations participated in lobbying during the bill's congressional stage, the rulemaking stage, or both. Bank lobbying expenditures scaled by total assets increased by 567% during the post-crisis period. Of these 2,961 organizations, 88 had employed at least one former SEC regulator — and organizations with former SEC employees were more likely to be cited in the final rules.
Nearly three years after Dodd-Frank's passage, the rulemaking process remained incomplete. Regulatory lawyers engaged in what observers described as "hand-to-hand combat over every clause and comma." The act was designed to prevent a recurrence of the 2008 crisis. The lobbying effort was designed to ensure the act's implementation did not constrain the activities that produced the crisis.
The financial services parallel to AI governance is the most precise. In both domains, the regulated entities are large, technically sophisticated, well-funded, and capable of sustaining multi-year lobbying campaigns. In both domains, the regulatory response follows a crisis or a recognition of risk. In both domains, the lobbying effort targets not just the legislative text but the rulemaking process that translates legislative intent into binding requirements. And in both domains, the revolving door provides the regulated entities with personnel who understand the regulatory process from the inside.
The Cross-Domain Structure
The four-element pattern is consistent across all three domains and present in AI governance:
Expertise asymmetry. Tobacco: internal health research. Pharma: proprietary clinical data. Finance: complex instrument knowledge. AI: frontier model capabilities, emergent behaviors, safety evaluations. In each case, the regulated entity possesses information the regulator needs and cannot independently obtain.
Revolving door. Tobacco: FDA-to-Philip Morris. Pharma: 57% of FDA departures to industry. Finance: SEC-to-Wall Street. AI: OpenAI-to-NIST, government-to-OpenAI/Anthropic. Personnel move between regulator and regulated, carrying networks, norms, and assumptions.
Industry-funded research. Tobacco: industry funding of medical research. Pharma: industry-sponsored clinical trials. Finance: industry-commissioned economic impact studies. AI: company funding of AI safety and policy research. In each case, the evidence base available to regulators is shaped by the entities being regulated.
Voluntary frameworks. Tobacco: voluntary marketing codes. Pharma: voluntary safety monitoring. Finance: self-regulatory organizations. AI: White House voluntary commitments. In each case, voluntary frameworks delay binding regulation and are designed by the regulated entities.
The structural identity of the pattern across four domains spanning seven decades is the central finding of this paper. Regulatory capture is not a failure of individual integrity. It is a predictable structural outcome of a specific institutional architecture — one in which the regulated entity has more expertise, more resources, more personnel, and more concentrated interest in regulatory outcomes than the regulating body or the public it serves.
What Makes AI Different
If the regulatory capture pattern is identical across domains, the question is what makes the AI instance different. The answer is speed and scale.
In tobacco, the gap between scientific consensus (1964) and comprehensive regulation (2009) was 45 years. During those decades the product advanced incrementally: cigarettes in 2009 were more addictive and more effectively marketed than cigarettes in 1964, but the difference was one of degree, not of kind. Regulators in 2009 were governing a product they could understand, evaluate, and test independently.
In financial services, the gap between the recognition of derivative risk (the early 2000s) and comprehensive regulation (2010) was approximately a decade. Financial instruments grew more complex during that decade, but the complexity was comprehensible to regulators with sufficient expertise and access. The SEC could, in principle, evaluate the instruments it was regulating — it needed more people and more resources, but the knowledge was accessible.
In AI, the capability trajectory is qualitatively different. The gap between GPT-3 (2020) and frontier models in 2026 represents a capability expansion that has no parallel in tobacco, pharmaceutical, or financial services regulation. The models being deployed in 2026 have emergent capabilities — behaviors not present in training data, not predicted by developers, and not fully characterized by any evaluation methodology — that did not exist when the current governance frameworks were designed.
The structural question is not whether governance will catch up to capability. It is whether the capability curve is steep enough that governance structurally cannot catch up — that the Governance Lag is not a temporary condition but a permanent feature of the domain.
The Governance Lag in AI is not analogous to the delay between scientific consensus and regulation in tobacco. It is a different kind of gap — one in which the object being regulated changes faster than the regulatory process can characterize it. The EU AI Act was designed for the AI landscape of 2021. By the time it entered into force in 2024, the landscape had changed through multiple generational leaps. The regulation governs categories that the technology has already exceeded. This is not a failure of regulatory design. It is a structural feature of attempting to regulate an exponentially advancing technology with linearly advancing institutions.
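The arithmetic behind the exponential-versus-linear claim can be made explicit with a toy model. The sketch below is purely illustrative: the doubling time and the linear growth rate are assumed parameters, not empirical estimates, and the "capability" and "governance" indices are abstractions. What the numbers show is the structural point of this section: a linear process can briefly lead an exponential one, and still be left permanently behind once the curves cross.

```python
# Toy model of the Structural Asymmetry: exponential capability growth
# versus linear institutional response. All parameters are illustrative
# assumptions, not empirical estimates.

def capability(t, doubling_years=2.0):
    """Capability index that doubles every `doubling_years` (assumed)."""
    return 2 ** (t / doubling_years)

def governance(t, units_per_year=1.0):
    """Governance capacity that grows by a fixed amount per year (assumed)."""
    return 1.0 + units_per_year * t

if __name__ == "__main__":
    for t in range(0, 21, 4):
        gap = capability(t) - governance(t)
        print(f"year {t:2d}: capability {capability(t):8.1f}  "
              f"governance {governance(t):5.1f}  gap {gap:8.1f}")
```

Under these assumed parameters, governance is actually ahead at year 4 (5.0 versus 4.0), roughly even by year 8, and behind by three orders of magnitude by year 20. No adjustment to the linear rate changes the shape of the outcome; it only moves the crossover point.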
The Structural Asymmetry — Named
The Structural Asymmetry names the condition in which the capability trajectory of AI development permanently outpaces the governance trajectory, producing a domain where regulatory capture is not a temporary state correctable by legislative action but a structural feature of the relationship between exponential technology and linear institutions. The Structural Asymmetry incorporates and extends all four elements documented in the preceding papers — the Governance Lag, the Expertise Capture, the Voluntary Commitment, and the Openness Inversion — into a single structural diagnosis. In tobacco, pharmaceutical, and financial services regulation, capture was eventually addressed through binding legislation, even if that legislation was delayed and weakened. In AI, the Structural Asymmetry raises the possibility that the capability trajectory will advance beyond the point where any governance framework — however well designed — can evaluate, constrain, or meaningfully oversee the systems being deployed. The asymmetry is structural because it is produced by the difference in kind between the speed of technological development and the speed of institutional response, not by a failure of political will or regulatory design.
What the Record Shows
This series has documented the following, through the factual record:
The United States has no comprehensive federal AI legislation as of March 2026. The EU AI Act took 40 months from proposal to entry into force and was substantially outpaced by the technology it governs before it took effect. Executive orders establishing safety frameworks were partially revoked within 15 months. The NIST AI Safety Institute was functionally dismantled within 15 months of its establishment. California's SB 1047 was vetoed after industry opposition. The White House's primary governance mechanism was voluntary commitments designed and reported by the entities being governed.
AI companies increased lobbying expenditures by 577% (OpenAI) and 157% (Anthropic) between 2023 and 2024. In Q1 2025, the three largest AI companies each spent more on federal lobbying than the entire independent AI safety research field received in grants. NTIA public comments on AI accountability were 48% industry submissions. The NIST AI Safety Institute drew its technical leadership from the AI industry. The AI Safety Summits featured attendee lists dominated by industry representatives. The revolving door between AI companies and government AI roles operates in both directions with documented names and positions.
The companies that signed voluntary commitments at the White House opposed binding legislation. The companies that described their models as "open source" lobbied for regulatory definitions that would exempt their models from obligations. The companies that warned of civilizational AI risk continued deploying new models throughout the governance vacuum their warnings described.
This is not corruption. It is the documented structural product of expertise asymmetry, revolving door employment, industry-funded research, and voluntary governance frameworks — the identical four-element pattern documented across tobacco, pharmaceutical, and financial services regulation, operating in the highest-stakes domain.
The question that the Structural Asymmetry poses is not whether AI governance will eventually catch up. In tobacco, it took 45 years. In financial services, it took a global economic crisis. In AI, the capability trajectory raises the possibility that "eventually" may arrive after the window for meaningful governance has closed — not because regulators failed to act, but because the structural asymmetry between exponential capability growth and linear institutional response is a feature of the domain that no amount of political will can override.
The series documents the pattern. Whether the pattern produces the same outcome as its predecessors — delayed but eventual binding regulation — or a novel outcome — permanent structural capture of a domain too fast for governance to follow — is the open question that the documented record leaves unanswered. The record does not support optimism. But it does not foreclose the possibility that the pattern, once named, can be addressed differently than it was in the domains that came before.