The Expertise Problem
AI governance has a structural dependency that does not appear in most other regulatory domains. The technical knowledge required to understand, evaluate, and regulate frontier AI systems resides almost exclusively within the companies that build them. This is not the case for pharmaceuticals (where academic pharmacology and independent testing labs provide external expertise), nor for financial services (where academic economics and independent auditing firms provide external capacity), nor even for nuclear energy (where national laboratories and academic physics programs maintain independent technical authority).
In AI, the gap between industry technical capacity and government technical capacity is qualitatively different. The frontier models that pose the most significant governance questions are built by a handful of companies — OpenAI, Anthropic, Google DeepMind, Meta AI, and a small number of others. These companies employ a disproportionate share of the world's leading AI researchers. They control the compute infrastructure on which frontier models are trained. They possess proprietary data about model capabilities, failure modes, and emergent behaviors that is not available to regulators, academics, or civil society.
The result is a structural dependency: government bodies tasked with AI governance must rely on the expertise of the entities they are supposed to oversee. This is the expertise asymmetry, and it is the foundation on which every other element of governance capture rests.
The NIST AI Safety Institute
The AI Safety Institute (AISI) was established within NIST in November 2023 as the United States' primary institutional mechanism for AI safety evaluation. Its mandate was to develop standards, guidelines, and testing protocols for AI safety — precisely the kind of independent technical capacity that the expertise asymmetry demands.
The leadership appointments reveal the structural dependency. Director Elizabeth Kelly brought government and policy experience. Chief Technology Officer Elham Tabassi led the development of the NIST AI Risk Management Framework. But the head of AI safety was Paul Christiano — a former OpenAI researcher and founder of the Alignment Research Center. Senior advisor Rob Reich came from Stanford's Institute for Human-Centered AI, which receives substantial funding from AI companies. Head of international engagement Mark Latonero was a former White House OSTP official.
The appointments were individually reasonable. Christiano is widely respected in the AI safety research community. The problem is not the qualifications of the individuals but the structural pattern they illustrate: the government's AI safety body draws its technical leadership from the AI industry, because that is where the technical expertise resides. The dependency is not corruption — it is architecture.
NIST established the AI Safety Institute Consortium (AISIC) in February 2024, bringing together more than 200 organizations for collaborative AI safety research. The consortium's membership includes the AI companies whose models are the primary subject of safety evaluation. The entities being evaluated participate in developing the evaluation methodology.
The AISI also reached pre-release testing agreements with OpenAI and Anthropic — agreements that gave the Institute access to frontier models before public deployment. These agreements were significant precisely because they represented a degree of independent evaluation capacity. When the Institute was gutted in early 2025 — with Kelly's departure, planned NIST layoffs affecting AISI staff, and no AISI representation at the Paris AI Action Summit — that independent capacity was dismantled. The testing agreements, to the extent they remain, now lack the institutional infrastructure to operationalize them.
The Safety Summits
The global AI Safety Summit process illustrates the same expertise dependency at the international level. The Bletchley Park summit in November 2023 produced the Bletchley Declaration — signed by 28 countries, including the United States and China — acknowledging AI risks and the need for international cooperation on safety. The AI Seoul Summit in May 2024 extended these commitments.
The attendee composition at both summits is instructive. At Seoul, the industry attendees included Amazon, Anthropic, Google DeepMind, IBM, Meta, Microsoft, Mistral, OpenAI, Samsung Electronics, Tencent, and xAI. Company representatives — including Tesla CEO Elon Musk and Samsung Chairman Lee Jae-yong — sat at the table alongside heads of state. The summits produced commitments. The commitments were shaped by the companies present.
The summit structure reflects a reality that the expertise asymmetry makes unavoidable: meaningful AI safety governance cannot proceed without the participation of the companies building frontier AI systems, because those companies possess the technical knowledge the governance process requires. The question is not whether industry should participate — it must. The question is whether the governance structure includes sufficient independent technical capacity, independent research capacity, and independent institutional authority to prevent participation from becoming capture.
"These technological advances risk consolidating power into the hands of a limited number of private companies." — AI Seoul Summit observers
The Public Comment Record
In April 2023, the National Telecommunications and Information Administration (NTIA) issued a request for public comments on AI accountability policy. The NTIA received 1,447 written comments. The composition of respondents is itself a data point in the expertise capture analysis.
Industry submissions — including trade associations — accounted for approximately 48% of all comments. Nonprofit advocacy organizations submitted approximately 37%. Academic and other research organizations contributed approximately 15%. The companies and their trade associations constituted nearly half of all public input into the government's AI accountability policy development process.
The asymmetry is compounded by resource disparity. Industry comments are typically produced by well-funded policy teams, legal departments, and hired consultants. They are detailed, technically sophisticated, and oriented toward specific regulatory outcomes. Civil society and academic comments, while substantive, are produced with fewer resources and less access to proprietary technical information about the systems under discussion. The public comment process is formally open and equal. The capacity to participate in it is structurally unequal.
The Revolving Door
The movement of personnel between AI companies and government AI roles follows the same revolving door pattern documented in pharmaceutical, financial services, and defense industry regulation. The mechanism is straightforward: individuals develop expertise in the private sector, bring that expertise to government service, and often return to the private sector with the networks and knowledge gained in government.
OpenAI hired Chris Lehane — a political veteran with extensive government experience — as VP of policy. OpenAI also brought on Meghan Dorn, who worked for five years for Senator Lindsey Graham, as an in-house lobbyist, and Chan Park, former senior director of congressional affairs at Microsoft, to head US and Canada partnerships. Anthropic hired Rachel Appleton, a Department of Justice alumna, as its first in-house lobbyist.
In the other direction, Paul Christiano moved from OpenAI to head AI safety at NIST's AISI. Mark Latonero moved from the White House OSTP to AISI. The movement is bidirectional, and it is not inherently corrupt — it is how expertise flows in a specialized field. But it creates a structural condition: the personnel who staff government AI governance bodies and the personnel who staff AI company policy teams share professional networks, professional norms, and professional assumptions that were developed within industry.
Paul Christiano (OpenAI to NIST AISI). Technical expertise flows from the companies building AI to the bodies evaluating it.
Chris Lehane (government to OpenAI VP of policy). Chan Park (Microsoft congressional affairs to OpenAI). Political and regulatory expertise flows to companies seeking to influence governance.
Shared professional networks, shared assumptions, shared vocabulary. The people on both sides of the regulatory relationship know each other, trained together, and share a professional worldview developed in industry.
The RAND Analysis
In 2024, RAND Corporation published a research brief titled "Managing Industry Influence in U.S. AI Policy," based on interviews with 17 experts across government, industry, civil society, and academia. The findings provide the most systematic analysis of AI governance capture to date.
The study identified six primary channels through which AI companies influence governance: agenda-setting (identified by 15 of 17 interviewees), advocacy activities (13 of 17), influence in academia and research (10 of 17), information management (9 of 17), cultural capture through status (7 of 17), and media capture (7 of 17). The researchers concluded that AI policy is not yet "captured," but warned that capture could come to impede effective regulation in the future.
The agenda-setting channel is the most significant because it operates before formal policy processes begin. When AI companies define the terms of the debate — which risks to prioritize, which governance mechanisms to consider, which metrics to evaluate — they shape the outcome regardless of what happens within the formal process. Sam Altman testifying about what kind of regulation is needed does not merely influence regulation. It defines the regulatory imagination.
OpenAI increased federal lobbying expenditures from $260,000 in 2023 to $1.76 million in 2024 — a 577% increase. Anthropic more than doubled its spend from $280,000 to $720,000. In Q1 2025, OpenAI, Anthropic, and Google each spent more on federal lobbying than the entire independent AI safety research field received in grants during the same period.
The Expertise Capture — Named
The Expertise Capture is the structural condition in which the technical knowledge required to design, staff, and evaluate AI governance resides almost exclusively within the entities to be governed. It operates through four reinforcing channels: staffing (government AI bodies draw technical leadership from industry), advisory (governance processes rely on industry participation for technical input), research (the academic research that informs policy is substantially funded by AI companies), and agenda-setting (the companies being regulated define which risks, mechanisms, and metrics the governance process considers). The Expertise Capture is not corruption. It is the predictable structural consequence of a regulatory domain where the regulated entities possess an overwhelming asymmetric advantage in the technical knowledge the regulatory process requires.
The Expertise Capture creates the condition for the next paper's subject: the Safety Theater. When the entities being governed provide the expertise for governance design, and the governance mechanisms are voluntary, the result is a system that performs the appearance of oversight without the substance of accountability. The voluntary commitments examined in GC-003 are not failures of governance. They are the structural product of governance designed by the entities it is supposed to govern.
The Independent Capacity Deficit
The structural remedy for expertise capture is independent technical capacity — government and civil society institutions with the resources, personnel, and infrastructure to evaluate AI systems without depending on the cooperation of the companies that built them. In the United States, NIST AISI was the closest approximation to this capacity, and it was functionally dismantled within 15 months of its establishment.
The RAND study recommended building robust civil society institutions with independent funding streams, strengthening government ethics policies including conflict-of-interest reviews, building technical capacity in government and civil society through competitive hiring, and increasing transparency regarding AI industry influence activities. These recommendations describe what would be necessary to address the Expertise Capture. They also describe what has not been done.
The deficit is self-reinforcing. Without independent technical capacity, governance bodies depend on industry expertise. Dependence on industry expertise shapes governance toward industry-favorable outcomes. Industry-favorable outcomes include the maintenance of conditions that prevent independent capacity from being built — including the defunding, restructuring, or elimination of the institutions that would provide it. The AISI's trajectory from establishment to dismantlement in 15 months is the deficit made visible.