I

The Gap

The factual record is straightforward. The United States does not have a single comprehensive federal law regulating artificial intelligence. As of March 2026, federal AI governance relies on agency enforcement under existing laws, executive orders that can be revoked by subsequent administrations, voluntary industry commitments, and guidelines from standards bodies. No binding federal statute addresses AI development, deployment, safety testing, or accountability in a comprehensive framework.

This is not for lack of awareness. The 118th Congress saw more than 120 AI-related bills introduced. None produced comprehensive legislation. The Senate AI Insight Forums — convened by Majority Leader Chuck Schumer beginning in September 2023 — brought together more than 60 senators with AI company executives, researchers, and civil society representatives. The forums produced recommendations. The recommendations did not produce legislation.

The pattern is not unique to AI. Complex technical regulation routinely lags the technology it seeks to govern. What is unique to AI is the speed differential. The gap between capability advancement and regulatory response is not merely wide — it is widening, because the capability curve is exponential and the legislative curve is not.
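The arithmetic of that differential can be made concrete with a stylized model: a sketch, not a measurement, with both parameters chosen purely for illustration. If frontier capability doubles every d months while a statute takes T months from proposal to entry into force, then

    C(T) = C_0 \cdot 2^{T/d}, \qquad \text{gap ratio} = \frac{C(T)}{C_0} = 2^{T/d}

With an assumed doubling period of d = 6 months and the EU AI Act's T = 40 months (documented below), the frontier ends the legislative process at roughly 2^{6.7}, about 100 times the capability level the proposal was drafted against. The model also shows why the gap widens rather than stabilizes: halving the doubling period squares the ratio, while no plausible compression of T keeps pace.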

Timeline Comparison

EU AI Act: proposed April 2021, political agreement December 2023, adopted March 2024, entered into force August 2024 — 40 months. During those 40 months, AI capabilities advanced through at least four major generational leaps in language models, image generation, and multimodal reasoning. The regulation that entered into force in August 2024 was designed to govern a technology landscape that no longer existed by the time it became law.

II

The Congressional Record

On May 16, 2023, OpenAI CEO Sam Altman testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. His testimony included a direct call for AI regulation — specifically, the creation of a federal licensing agency for AI systems above a certain capability threshold. The hearing was widely covered. Altman's call for regulation was treated as significant precisely because the CEO of a leading AI company was asking to be regulated.

The structural dynamic of that hearing deserves attention. The primary witness testifying about how AI should be regulated was the chief executive of one of the companies that would be subject to that regulation. His testimony shaped the terms of the discussion. He proposed the regulatory architecture. The committee asked questions within the framework he established.

On September 13, 2023, Senator Schumer convened the first AI Insight Forum. The closed-door session brought together more than 60 senators with a 22-person panel that included Sam Altman (OpenAI), Elon Musk (xAI), Sundar Pichai (Google), Satya Nadella (Microsoft), and Mark Zuckerberg (Meta). Every person in the room raised their hand when Schumer asked whether government needed to play a role in regulating AI.

The unanimous agreement on the need for regulation did not produce regulation. What it produced was a series of forums, a set of bipartisan recommendations, and eventually — nothing that became law. The 118th Congress expired in January 2025 without passing comprehensive AI legislation. The only standalone federal AI law enacted as of early 2026 is the TAKE IT DOWN Act, signed in May 2025, which criminalizes the nonconsensual publication of intimate images, including AI-generated deepfakes — a narrow slice of the governance problem.

The Forum Dynamic

In September 2023, Elon Musk told senators that AI poses a "civilizational risk" to governments and societies, and said a government "referee" was needed. The same companies whose CEOs warned of civilizational risk continued deploying new models throughout 2024 and 2025 without waiting for the referee they had recommended.

III

The EU AI Act: A Case Study in Legislative Timelines

The European Commission published its proposal to regulate artificial intelligence on April 21, 2021. The proposal established a risk-based classification system: unacceptable risk (banned), high risk (subject to compliance requirements), limited risk (transparency obligations), and minimal risk (no specific requirements). The framework was designed before ChatGPT's public release in November 2022 — before the current wave of generative AI capability had become visible to the public.
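The proposal's tier logic is simple enough to sketch in a few lines of code. In the sketch below, the four tier names come from the proposal itself; the example systems and their assignments are hypothetical placeholders chosen for illustration, not the Act's actual annex classifications:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "conformity assessment, documentation, human oversight"
        LIMITED = "transparency obligations"
        MINIMAL = "no specific requirements"

    # Hypothetical examples; the Act's real scoping lives in its annexes
    # and is far more detailed than any lookup table.
    EXAMPLES = {
        "social scoring by public authorities": RiskTier.UNACCEPTABLE,
        "CV-screening tool used in hiring": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} ({tier.value})")

What the sketch cannot express is precisely the problem that surfaced during negotiations: a general-purpose model fits none of these use-specific rows, because the same model can sit in every tier depending on how it is deployed.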

The legislative process that followed is a precise illustration of the Governance Lag. The European Parliament's committees began joint negotiations in December 2021. The Council of the European Union adopted its general approach in December 2022 — the same month ChatGPT demonstrated that the technology being regulated had capabilities the regulation did not anticipate. The emergence of foundation models and general-purpose AI systems required the framework to be substantially revised during negotiations.

Political agreement was reached on December 9, 2023, after three days of marathon talks. The European Parliament adopted the final text on March 13, 2024, by a vote of 523 to 46. The Council formally adopted the Act on May 21, 2024. The Act was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024 — with implementation provisions phasing in over the following 6 to 36 months.

The EU AI Act is the most significant piece of AI regulation enacted anywhere in the world. It is also a document that was designed to govern the AI landscape of 2021, substantially revised to account for the AI landscape of 2023, and brought into force amid the AI landscape of 2024 — by which time the capability frontier had already moved beyond what the Act's categories were built to classify. The Act's provisions on general-purpose AI models were added during negotiations, not present in the original proposal. The regulation is governing capabilities that did not exist when the regulation was designed.

IV

The Executive Order Cycle

In the absence of legislation, executive orders became the primary mechanism for US AI governance. On October 30, 2023, President Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order was the most comprehensive federal AI governance action taken to date. It directed federal agencies to establish safety testing requirements, required developers of powerful AI systems to share safety test results with the government, and tasked NIST with developing standards for red-team testing.

Executive Order 14110 had a structural limitation inherent to all executive orders: it could be revoked by a subsequent president. On January 20, 2025, the incoming Trump administration revoked it outright; three days later it issued Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which substituted a policy orientation prioritizing AI innovation and American competitiveness for the safety-focused framework of the previous order.

The executive order cycle demonstrates a specific failure mode of governance-by-executive-action. The safety testing requirements established in October 2023 were operational for approximately 15 months before being revoked. Companies that had begun building compliance infrastructure found that infrastructure devalued overnight. The governance framework was not merely slow to establish — it was impermanent once established, because it rested on executive authority rather than statutory foundation.

Biden EO 14110

October 2023: Safety testing requirements, red-team standards, NIST framework. Operational for ~15 months before revocation.

Trump EO 14179

January 2025: Issued days after EO 14110's revocation. Reoriented toward innovation promotion and American AI dominance. No replacement safety framework.

March 2026 Framework

Trump administration released legislative recommendations emphasizing federal preemption of state AI laws and an innovation-first approach. No binding law.

V

The NIST Framework

The National Institute of Standards and Technology published its AI Risk Management Framework (AI RMF 1.0) in January 2023. The framework is voluntary. It provides a structured approach for organizations to identify, assess, and manage AI risks through four core functions: Govern, Map, Measure, and Manage. It was developed through an open, transparent, consensus-driven process involving more than 240 organizations.
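The four functions are easiest to see as the fields of a per-system risk record. The sketch below is a minimal illustration under that assumption: the function names are NIST's, but the schema and the example entries are invented for this paper, not drawn from any NIST profile:

    from dataclasses import dataclass, field

    @dataclass
    class RMFRecord:
        """One system's risk record, keyed to the AI RMF core functions."""
        system: str
        govern: list[str] = field(default_factory=list)   # policies, accountability
        map: list[str] = field(default_factory=list)      # context, identified risks
        measure: list[str] = field(default_factory=list)  # metrics, test results
        manage: list[str] = field(default_factory=list)   # mitigations, monitoring

    # Hypothetical entries for a hypothetical hiring model.
    record = RMFRecord(
        system="resume-screening model",
        govern=["named risk owner", "documented review policy"],
        map=["disparate-impact risk in the hiring context"],
        measure=["selection-rate parity across demographic groups"],
        manage=["threshold recalibration", "quarterly re-audit"],
    )

Nothing obliges any organization to keep such a record, which is the substance of the paragraph that follows.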

The AI RMF is widely cited as the most technically rigorous US government contribution to AI governance. It is also non-binding. Organizations may adopt it or ignore it. There is no compliance requirement, no reporting obligation, no enforcement mechanism. The framework's influence depends entirely on voluntary adoption by the entities it seeks to guide.

NIST's AI Safety Institute (AISI), established in November 2023, represented the closest the US government came to building an independent technical capacity for AI safety evaluation. Under Director Elizabeth Kelly, AISI reached agreements with OpenAI and Anthropic to test their models before and after public release and began collaborating with international AI safety bodies. The Institute appointed a leadership team in April 2024 that included Paul Christiano, a former OpenAI researcher, as head of AI safety.

In February 2025, Elizabeth Kelly departed AISI as the Trump administration shifted course on AI policy. The administration announced plans to fire as many as 500 NIST staffers, including AISI personnel. Most AISI workers were still on probation and thus vulnerable to termination. No AISI workers were invited to join the Trump administration delegation at the AI Action Summit in Paris. The closest thing to an independent US government technical capacity for AI safety evaluation was functionally dismantled within 15 months of its establishment.

VI

The State-Level Response

In the absence of federal action, states have moved to fill the regulatory vacuum. More than 700 AI-related bills were introduced across state legislatures in 2024 and 2025. The state-level response is significant precisely because it demonstrates the demand for governance that the federal vacuum has left unmet.

The most instructive state-level case is California's SB 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Introduced by State Senator Scott Wiener, SB 1047 would have required developers of AI models trained at a computing cost exceeding $100 million to establish safety and security protocols, conduct pre-deployment testing, and maintain the capability to enact a full shutdown of models under their control. The bill addressed frontier models specifically — the category of AI system whose capabilities are advancing fastest and whose governance gap is widest.

SB 1047 attracted opposition from Google, Meta, and OpenAI, as well as numerous members of Congress. It also attracted support: at least 113 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter to Governor Newsom supporting the bill. Governor Newsom vetoed SB 1047 on September 29, 2024, citing concerns that the bill's focus on computational size rather than deployment context could burden California's AI industry.

The SB 1047 episode illustrates the complete governance cycle: demand for regulation, legislative response, industry opposition, and executive veto. The bill's death did not eliminate the safety concerns it addressed. It eliminated the governance mechanism that would have addressed them. The March 2026 Trump administration framework includes legislative recommendations for federal preemption of state AI laws — which would eliminate the state-level governance mechanisms that emerged precisely because federal governance did not.

VII

The Governance Lag — Named

Named Condition — GC-001
The Governance Lag

The structural temporal gap between the speed of AI capability advancement and the speed of regulatory response. The Governance Lag is not merely a delay — it is a widening gap, because AI capabilities advance on an exponential curve while legislative processes operate on a linear timeline constrained by deliberation, negotiation, and political cycles. The lag is compounded by three factors: the executive order cycle (governance mechanisms established by one administration can be revoked by the next), the expertise dependency (the technical knowledge required to design regulation resides primarily in the entities to be regulated), and the jurisdictional vacuum (federal inaction leaves governance to states, whose efforts face industry opposition and potential federal preemption). The Governance Lag is not a temporary condition that will resolve as regulators catch up. It is a structural feature of the relationship between exponential capability growth and linear institutional response.

The Governance Lag establishes the structural context for the remaining papers in this series. The regulatory vacuum is not empty — it is filled by industry self-governance, voluntary commitments, and the expertise of the companies being governed. The next paper examines how that expertise dependency operates in practice: how the companies building AI became the primary staffing source, advisory body, and policy author for the governance frameworks that are supposed to oversee them.

VIII

What the Vacuum Contains

A regulatory vacuum is never truly empty. Something fills it. In AI governance, what fills the vacuum is industry self-regulation — the voluntary commitments, the corporate responsibility reports, the safety pledges, the self-imposed testing protocols. These are not nothing. Some represent genuine safety engineering effort. But they share a structural feature: the entity being governed and the entity doing the governing are the same.

The next four papers in this series document what fills the vacuum and how. The Industry Chair examines the expertise dependency that places AI companies at the center of every governance body. The Safety Theater documents the voluntary commitment mechanism and its enforcement gap. The Open Source Weapon traces how openness rhetoric serves simultaneously as safety mechanism, competitive strategy, and regulatory shield. And The Governance Gap synthesizes the pattern into the cross-domain structural analysis that connects AI governance capture to the identical pattern documented in tobacco, pharmaceutical, and financial services regulation.

The Governance Lag is not the problem. It is the condition that makes the problem possible. The problem is what happens in the lag — who fills the vacuum, on what terms, and with what accountability. The documented answer, as the following papers will show, is that the vacuum is filled by the companies that benefit most from the absence of binding regulation, on terms those companies define, with accountability mechanisms those companies designed and report on themselves.