AI deployment decisions are structurally made by an unrepresentative subset of stakeholders — while the most affected populations have no equivalent governance access.
Three categories of stakeholder dominate every major AI governance decision: the developers who build the systems, the capital that funds them, and the regulators who oversee them. These three groups have offices, budgets, legal teams, and direct access to legislative processes. They publish position papers. They attend hearings. They shape frameworks.
The populations most affected by AI deployment — workers whose labor is restructured, communities subjected to algorithmic surveillance, children whose developmental experiences are mediated, patients whose diagnoses are automated — have no equivalent access. They are not absent by accident. They are absent because the economic structure of AI development does not produce mechanisms for their inclusion.
This paper documents the gap. Not as a moral failing of any individual actor, but as a structural consequence of how AI governance is currently organized — and why the HEXAD architecture (HC-026) is designed to close it.
Whittaker (2021) documented in ACM Interactions what she termed the steep cost of capture: the systematic pattern by which industry funding shapes AI ethics research. The mechanism is not crude. It is not that companies pay researchers to produce favorable conclusions. It is that the funding structure determines which questions get asked, which methodologies are resourced, and which findings receive institutional amplification.
The result is an AI ethics discourse that is structurally responsive to industry concerns — safety as defined by deployers, fairness as operationalized by developers, accountability as framed by the organizations that would be held accountable. The populations who experience the consequences of AI deployment are subjects of this research, not participants in its design.
Couldry and Mejias (2019) in The Costs of Connection provided the framework that makes the governance gap legible at scale. Data colonialism is not a metaphor. It is a structural analysis: the appropriation of human life data for the benefit of extractive capital, following the same economic logic — and producing the same governance asymmetries — as historical colonialism.
The parallel is precise. In historical colonialism, the governance structures of extraction were designed by the extractors. The populations whose resources were extracted had no structural input into the governance of that extraction. In data colonialism, the governance structures of AI deployment are designed by deployers. The populations whose data and labor are extracted have no structural input into the governance of that deployment.
The gap is not an oversight. It is produced by the economic structure itself. Capital concentration produces governance concentration. When the resources required to build, train, and deploy AI systems are concentrated in a small number of organizations, governance access concentrates in those same organizations. The affected populations are structurally excluded not because anyone decided to exclude them, but because inclusion would require a governance architecture that the economic structure does not produce.
OpenSecrets (2023) documented $957 million in AI-related lobbying expenditure. The number is significant not for its size alone, but for its distribution. The lobbying is conducted by technology companies, industry associations, and allied organizations. The populations most affected by AI deployment — workers, communities, patients, defendants — have no equivalent lobbying infrastructure.
This is the governance gap in its most measurable form. Policy influence is a function of organizational capacity, and organizational capacity is a function of capital. The populations that bear the consequences of AI deployment do not have the capital to build equivalent policy influence. They cannot hire lobbyists. They cannot fund research that shapes the terms of debate. They cannot attend regulatory proceedings as participants rather than subjects.
The FTC (2024) AI Surveillance Economy report documented the specific mechanisms by which AI deployment produces governance asymmetry. The report found that major technology companies collect and process personal data at a scale that precludes meaningful individual consent, deploy algorithmic systems whose decision-making processes are opaque to the affected populations, and operate within governance structures where affected populations have no formal representation.
The FTC findings converge with Zuboff's (2019) surveillance capitalism analysis: the economic logic of behavioral data extraction produces governance structures that are fundamentally incompatible with the interests of the populations whose behavior is being extracted. The governance gap is not a side effect of the surveillance economy. It is a necessary condition for it. If affected populations had governance access equivalent to deployers, the extractive practices documented by the FTC would not be economically viable.
"The market corrects for governance failures. If affected populations are harmed, they will switch to competitors, and the market will punish the deployer." This objection fails on three documented grounds. First, many AI deployments are not market-mediated — criminal justice, welfare, education deployments are imposed, not chosen. Second, information asymmetry prevents affected populations from identifying harms attributable to algorithmic systems. Third, network effects and switching costs eliminate the competitive pressure that the objection assumes. The market correction mechanism requires conditions that AI deployment systematically violates.
The governance gap is not a problem of bad actors. It is a structural feature of how AI development is currently organized. The gap emerges from three converging forces:
Capital concentration. The resources required to develop and deploy AI at scale are concentrated in a small number of organizations. These organizations have governance access proportional to their economic power. Affected populations do not.
Information asymmetry. The technical complexity of AI systems creates a knowledge barrier that excludes affected populations from meaningful governance participation. The deployers understand what the system does. The affected populations experience what the system does to them. These are fundamentally different positions.
Incentive misalignment. The economic incentives of AI deployment reward efficiency, scale, and speed. Governance inclusion of affected populations slows deployment and introduces constraints. The incentive structure punishes inclusion and rewards exclusion.
These three forces — capital concentration, information asymmetry, incentive misalignment — produce the governance gap as a structural necessity, not an incidental failure. Closing the gap requires a governance architecture that counteracts all three. The HEXAD structure (HC-026) is designed to do exactly that.
The governance gap is the problem. HC-026 (The HEXAD Translation) proposes the architecture: six governance nodes, supermajority requirements, veto mechanisms, and structural corrections for the three forces that produce the gap. The translation from diagnosis to architecture is the work of Series 5.
The argument proceeds through four papers. This paper (HC-025) documents the gap. HC-026 proposes the six-node architecture. HC-027 establishes the minority protection standard — the veto mechanism that prevents majority-stakeholder governance from producing minority-population harm. HC-028 establishes the Human Anchor Principle — the non-negotiable floor below which no governance decision can push a collaboration design.
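The decision rule implied by this architecture can be made concrete. The sketch below is illustrative only: the node names, the 5-of-6 supermajority threshold, and the shape of the floor check are assumptions for exposition, not the published HC-026 specification. What it shows is how the three mechanisms compose: the Human Anchor floor (HC-028) is checked first and cannot be overridden, a single veto (HC-027) blocks regardless of vote count, and only then does the supermajority tally apply.

```python
from dataclasses import dataclass

# Illustrative sketch of a HEXAD-style decision rule.
# Node names and the 5-of-6 threshold are assumed, not specified by HC-026.
SUPERMAJORITY = 5  # assumed: 5 of 6 nodes must approve

@dataclass
class Vote:
    node: str          # one of six governance nodes
    approve: bool
    veto: bool = False  # HC-027 minority-protection veto

def hexad_decision(votes: list[Vote], meets_human_anchor_floor: bool) -> bool:
    """Approve a deployment only if the Human Anchor floor (HC-028) holds,
    no node exercises its veto, and the supermajority threshold is met."""
    if not meets_human_anchor_floor:
        return False                        # non-negotiable floor: checked first
    if any(v.veto for v in votes):
        return False                        # any single veto blocks the decision
    return sum(v.approve for v in votes) >= SUPERMAJORITY

# Hypothetical node labels drawn from the stakeholder categories in this paper.
votes = [Vote(n, approve=True) for n in
         ["developers", "capital", "regulators",
          "workers", "communities", "patients"]]
print(hexad_decision(votes, meets_human_anchor_floor=True))   # unanimous, floor holds
votes[3].veto = True
print(hexad_decision(votes, meets_human_anchor_floor=True))   # one veto blocks
```

The ordering of the three checks is the structural point: the floor and the veto are not weighed against the vote count, they precede it, which is what distinguishes this design from simple majority-stakeholder governance.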
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.