Existing responsible AI frameworks systematically satisfy the form of FTP requirements without the function — and this pattern is documented, not incidental
Organizations create ethics boards, publish principles, commission audits — and the resulting governance satisfies the form without the function. Ethics boards lack authority to override commercial decisions. Principles lack enforcement mechanisms. Audits lack independence from the entities being audited. This is not an occasional failure. It is the dominant pattern in corporate AI governance, and it is documented.
The pattern has a specific structure: every element of governance theater serves a dual function. The ethics board provides reputational cover while having no binding authority. The published principles provide a framework reference while creating no enforceable obligations. The commissioned audit provides a certificate while maintaining the auditor's commercial relationship with the auditee. Each element looks like governance from the outside. None function as governance from the inside.
Google's Advanced Technology External Advisory Council (ATEAC) was announced on March 26, 2019. It was dissolved on April 4, 2019, nine days later. The council was created to provide external oversight of Google's AI ethics commitments. It included members whose appointments were immediately contested by Google employees and external advocates. Google's response was not to address the contestation but to dissolve the council entirely. The message was precise: external governance that creates friction with internal decisions will not be tolerated.
IBM dissolved its AI ethics team in 2021 as part of broader restructuring. The team's function — providing internal guidance on ethical AI development — was redistributed across product teams. The redistribution is the tell: when ethics oversight is embedded in product teams, it becomes subordinate to product objectives. An ethics function that reports to the product leader it is meant to oversee is not oversight. It is a label.
These dissolutions are not failures of the governance system. They are the system working as designed. Advisory structures that cannot override commercial decisions are not governance — they are permission structures. When the advisory structure threatens to create genuine friction, it is removed. When it provides cover without friction, it is maintained. The survival criterion is not ethical effectiveness but organizational compatibility.
The ethics board that has never blocked a deployment is not a governance mechanism. It is a press release.
Raji et al. (2020), presented at FAccT, documented the gap between corporate AI ethics commitments and actual practice. The finding was specific: organizations that publicly committed to responsible AI principles showed no measurable difference in deployment practices compared to organizations that did not. The principles existed as documents. They did not exist as constraints on behavior.
The gap is not a failure of implementation. It is a structural feature. Principles that lack enforcement mechanisms are not weak governance — they are non-governance. A principle that states "we will ensure fairness in AI systems" but provides no definition of fairness, no measurement protocol, no threshold for deployment blocking, and no accountability mechanism for violation is not a principle. It is a sentence.
The standard corporate AI ethics architecture: (1) Published principles — aspirational statements with no enforcement mechanism. (2) Ethics board — advisory, no binding authority, members selected by the entity being governed. (3) Internal review process — conducted by employees whose careers depend on the projects being reviewed. (4) External audit — commissioned and paid for by the auditee, with scope defined by the auditee. Each element satisfies the form of a governance requirement. None satisfy the function.
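The dual-structure of that architecture can be stated as code. What follows is a minimal illustrative sketch, not any HC instrument; every name in it (GovernanceElement, STANDARD_ARCHITECTURE, and the rest) is hypothetical. It records, for each of the four elements, the form presented externally and whether the element carries the function the form implies, namely the ability to constrain a deployment decision.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceElement:
    """One element of the standard corporate AI ethics architecture."""
    name: str
    form: str           # what the element looks like from the outside
    has_function: bool  # can it actually constrain a deployment decision?

# Hypothetical encoding of the four-element architecture described above.
STANDARD_ARCHITECTURE = [
    GovernanceElement("published principles",
                      "aspirational statements, no enforcement", False),
    GovernanceElement("ethics board",
                      "advisory review, members chosen by the governed entity", False),
    GovernanceElement("internal review",
                      "review by employees whose careers depend on the projects", False),
    GovernanceElement("external audit",
                      "certificate commissioned, paid for, and scoped by auditee", False),
]

def satisfies_form(architecture: list[GovernanceElement]) -> bool:
    """Every element presents a governance form."""
    return all(e.form for e in architecture)

def satisfies_function(architecture: list[GovernanceElement]) -> bool:
    """Governance requires at least one element that can block deployment."""
    return any(e.has_function for e in architecture)

assert satisfies_form(STANDARD_ARCHITECTURE)          # form: yes
assert not satisfies_function(STANDARD_ARCHITECTURE)  # function: no
```

The asymmetry between the two assertions is the whole pattern: form is a property every element has on its own, while function would require only one element with binding authority, and none supplies it.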
Metcalf et al. (2021), presented at FAccT, named the structural problem precisely: the ethics owner's dilemma. Ethics functions in technology companies are structurally subordinate to product and revenue functions. The ethics team reports to leadership whose primary metrics are growth, revenue, and competitive position. The ethics team's recommendations must pass through a decision structure that weighs ethical considerations against commercial ones — and in that weighing, the commercial considerations have structural advantage.
This is not a failure of the individuals in ethics roles. It is a structural constraint. The ethics owner who consistently blocks commercially important deployments on ethical grounds will be replaced by one who does not. The career incentive aligns with approval, not with blocking. The ethics owner who survives is the one who finds ways to approve — who identifies mitigations that allow deployment to proceed, who frames concerns as risks to be managed rather than reasons to stop. This is rational behavior within the incentive structure. It is also compliance theater.
Whittaker (2021), published in ACM Interactions, documented the steep cost of capture: how industry funding shapes AI ethics research itself. The finding extends the compliance theater pattern beyond corporate governance into the research ecosystem that is supposed to provide independent oversight. When the primary funders of AI ethics research are the technology companies whose practices the research should scrutinize, the research agenda shifts, not through explicit censorship but through the subtler mechanisms of funding priorities, access dependencies, and career incentives.
The capture pattern produces a specific distortion: AI ethics research increasingly focuses on technical interventions (debiasing algorithms, fairness metrics, interpretability tools) rather than structural interventions (governance reform, regulatory requirements, power redistribution). Technical interventions are compatible with the current deployment model — they can be added to existing systems without changing who decides, who benefits, or who is harmed. Structural interventions threaten the deployment model itself. The funding incentive favors the former.
"Corporate AI ethics has improved significantly since 2018. Companies have invested billions in responsible AI teams, tools, and processes. The ATEAC dissolution was a learning moment, not the current state." The investment is real. The question is whether the investment has produced governance or has produced more sophisticated theater. The FTP diagnostic provides the test: apply the three criteria. If the ethics structures still lack Transparency (is the authority structure public?), Participation (are affected populations represented?), and Fidelity (is capability preservation measured?), then the investment has improved the production values of the theater without changing the script.
For each claimed governance mechanism, apply the three FTP criteria. The diagnostic is simple and produces unambiguous results.
Can an external observer determine: who sits on the ethics board, what authority they have, whether that authority is binding or advisory, what decisions the board has made, and whether any deployment has been blocked or substantially modified by the board's intervention? For the majority of corporate AI ethics structures, the answer to at least one of these questions is no. Advisory authority is not distinguished from binding authority in public communications. Decision records are not published. The Transparency requirement from HC-011 is not met.
Does the governance structure include formal representation of the populations most affected by the AI deployment — not as consultees, not as research subjects, not as user testers, but as governance participants with structural input? For every framework examined, the answer is no. Ethics boards are composed of executives, academics, and occasionally civil society representatives — none with formal accountability to the affected populations they claim to represent. The Participation requirement from HC-012 is not met at even the Threshold level.
Does the governance structure include any mechanism for measuring whether the humans in the collaboration are becoming more or less capable over time in domain-irreducible functions? For every framework examined, the answer is no. No corporate AI ethics framework includes Fidelity measurement. No ethics board charter references capability preservation. No responsible AI audit protocol includes the 30-day test or any equivalent. The Fidelity requirement from HC-013 is not addressed at all.
The diagnostic result is consistent across every framework examined: failure on at least two of three criteria, and in most cases, failure on all three. This is not a harsh grading standard. Transparency, Participation, and Fidelity are minimum requirements for governance that actually governs. The frameworks fail not because the standard is high but because the frameworks were not designed to govern. They were designed to demonstrate governance.
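Read operationally, the three paragraphs above reduce each criterion to yes/no questions, which is what makes the diagnostic mechanical. Below is a minimal sketch of that reduction in code; it is illustrative only, and every name in it (FrameworkAssessment, ftp_diagnostic, the field names) is mine, not drawn from the HC-014 instrument.

```python
from dataclasses import dataclass

@dataclass
class FrameworkAssessment:
    """Yes/no answers for one claimed governance mechanism.

    Each Transparency field asks whether an external observer can
    determine the fact, not whether the fact itself is favorable.
    """
    # Transparency (HC-011)
    membership_determinable: bool           # who sits on the ethics board?
    authority_determinable: bool            # what authority does it have?
    binding_vs_advisory_determinable: bool  # is that authority binding or advisory?
    decisions_determinable: bool            # what decisions has the board made?
    blocking_record_determinable: bool      # was any deployment blocked or modified?
    # Participation (HC-012)
    affected_populations_in_governance: bool  # participants with structural input
    # Fidelity (HC-013)
    capability_preservation_measured: bool    # 30-day test or equivalent

def ftp_diagnostic(a: FrameworkAssessment) -> dict[str, bool]:
    """Apply the three FTP criteria; True means the criterion is met."""
    return {
        "Transparency": all([
            a.membership_determinable,
            a.authority_determinable,
            a.binding_vs_advisory_determinable,
            a.decisions_determinable,
            a.blocking_record_determinable,
        ]),
        "Participation": a.affected_populations_in_governance,
        "Fidelity": a.capability_preservation_measured,
    }

def failed_criteria(a: FrameworkAssessment) -> list[str]:
    """Names of the criteria the framework fails."""
    return [name for name, met in ftp_diagnostic(a).items() if not met]
```

The design choice worth noting is that every field is a boolean. A question an external observer cannot answer yes or no is itself answered no, and counts as a Transparency failure; that is why the sketch has no "partially met" state.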
This paper completes Series 2 of the FTP Framework. The series has established: the three-criteria cascade (Transparency, Participation, Fidelity), the operational audit instrument (HC-014), and the documented pattern of compliance theater that the framework is designed to detect and distinguish from genuine governance.
The compliance theater record is not an indictment of individuals. The people working in corporate AI ethics roles are, in many cases, doing the best work possible within structural constraints that prevent genuine governance. The record is an indictment of the structures — and a demonstration that the FTP framework provides the diagnostic tools to distinguish governance from its performance.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.