Six nodes. Supermajority. Veto. Human Anchor. The governance architecture for AI deployment that no existing framework provides.
HC-025 documented the governance gap: AI deployment decisions are structurally made by an unrepresentative subset of stakeholders. This paper translates the HEXAD governance architecture — originally developed within the CSI research program — into a specific proposal for AI deployment governance. The translation is from principle to structure: from the recognition that governance requires multiple stakeholder categories to the specification of which categories, what decision rules, and what corrections are necessary.
The existing landscape is not empty. Ostrom (1990) established the principles of polycentric commons governance — multiple, overlapping decision-making bodies with distinct authorities. The Federal Advisory Committee Act (FACA) created a structure for stakeholder input in U.S. federal governance, with documented limitations: advisory committees are advisory, not governing, and their composition is determined by the agencies they advise, not by the populations they affect. The EU AI Act creates a regulatory framework, but regulatory frameworks operate on deployers, not within the governance structure of deployment itself.
The HEXAD architecture is not advisory. It is governing. Six nodes, each with structural power, each with veto capability, each with specific accountability to a distinct stakeholder category.
Node 1, the Builders. Developers, engineers, deployers. The people who design, build, and operate the AI system. Their governance interest is legitimate: they understand the technical capabilities and limitations of the system. Their structural accountability is to technical accuracy and operational safety. They answer the question: can the system do what is proposed?
Node 2, Capital. Investors, shareholders, boards. The people who fund the development and expect returns. Their governance interest is legitimate: they bear financial risk. Their structural accountability is to economic viability and fiduciary obligation. They answer the question: is the deployment economically sustainable?
Node 3, the Governed. Workers, communities, children, patients. The people who experience the consequences of AI deployment — whose labor is restructured, whose communities are surveilled, whose development is mediated, whose diagnoses are automated. Their governance interest is existential: they bear the consequences. Their structural accountability is to the populations they represent. They answer the question: does this deployment serve or harm the people it affects?
Node 4. Domain specialists, ethicists, safety researchers. The people with deep knowledge of the domain in which the AI is deployed — not the AI domain, but the human domain. Education experts for education AI, clinicians for healthcare AI, legal scholars for justice AI. Their structural accountability is to domain-specific knowledge. They answer the question: does this deployment respect the irreducible requirements of the domain?
Node 5. Regulators, legislators, courts. The institutional structures with legal authority over the deployment context. Their governance interest is public: they are accountable (at least formally) to the polity. Their structural accountability is to legal compliance and democratic legitimacy. They answer the question: is the deployment lawful and democratically legitimate?
Node 6, the Future. Proxy representation for long-term and non-present stakeholders. The people who cannot represent themselves because they do not yet exist, or because the consequences of current deployment will not be visible for years or decades. Their governance interest is intergenerational: they bear consequences that current stakeholders will not live to experience. Their structural accountability is to documented long-term impact assessment.
Three rules govern the HEXAD architecture. First, approval requires a supermajority of the six nodes — no single node, and no simple majority, can authorize a deployment alone. Second, each node holds a veto, developed in detail in HC-027. Third, the Human Anchor principle, developed in HC-028, sets a sovereignty floor that no vote, however broad, can override.
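The three rules named in the opening — supermajority, veto, Human Anchor — compose into a simple decision procedure. A minimal Python sketch, with two stated assumptions: a threshold of five of six nodes (the paper fixes no number), and placeholder identifiers for the fourth and fifth nodes, which the paper describes but does not name:

```python
from enum import Enum

class Vote(Enum):
    APPROVE = "approve"
    ABSTAIN = "abstain"
    VETO = "veto"

# "domain" and "state" are placeholder names for the fourth and fifth nodes.
NODES = ("builders", "capital", "governed", "domain", "state", "future")
SUPERMAJORITY = 5  # assumed: 5 of 6; the paper does not fix a threshold

def hexad_decision(votes: dict[str, Vote], breaches_human_anchor: bool) -> str:
    """Evaluate a deployment proposal under the three HEXAD rules."""
    # Human Anchor floor: blocks regardless of how the nodes vote.
    if breaches_human_anchor:
        return "blocked: Human Anchor floor"
    # Veto: any single node halts the deployment.
    if any(votes[n] is Vote.VETO for n in NODES):
        return "blocked: veto"
    # Supermajority: approval requires the assumed threshold.
    approvals = sum(votes[n] is Vote.APPROVE for n in NODES)
    return "approved" if approvals >= SUPERMAJORITY else "rejected: no supermajority"
```

The ordering encodes the hierarchy the rules imply: the Human Anchor floor is checked before any counting, and a single veto outweighs any majority.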
The most immediate objection to the HEXAD architecture targets Node 6: how do you operationalize representation for people who do not yet exist? The objection is serious. Proxy representation for future stakeholders has no perfect implementation. But it has documented, imperfect precedents that demonstrate feasibility.
Future Generations Commissioner for Wales (2015). The Well-being of Future Generations (Wales) Act 2015 created a statutory commissioner with a mandate to act as guardian of future generations' interests. The commissioner reviews policy, audits public bodies, and publishes assessments. The role is advisory with structural teeth: public bodies must demonstrate they have considered long-term impact.
Finland's Committee for the Future. A permanent parliamentary committee with a specific mandate for long-term and intergenerational policy analysis. The committee has operated continuously and produces reports that shape legislative debate on technology, demographics, and environmental policy.
UN Secretary-General's Special Adviser on Future Generations (2024). Appointed under the Pact for the Future framework to provide institutional voice for intergenerational interests at the international level.
These precedents are imperfect. The Wales commissioner is advisory, not governing. The Finnish committee operates within parliamentary constraints. The UN role is nascent. None of these institutions has the structural power that the HEXAD Future node requires. But they demonstrate that proxy representation for future stakeholders is not a utopian abstraction — it is a governance innovation with real-world implementations that can be strengthened, not invented from scratch.
The HEXAD architecture without correction reproduces existing power asymmetries. Giving the Governed node a formal seat does nothing if the Governed node lacks the resources to exercise its governance power. Three structural corrections are necessary:
Resourcing parity. The Governed and Future nodes must receive dedicated resourcing equivalent in capacity (not identical in amount) to the Builders and Capital nodes. This means: paid staff, legal counsel, technical advisors, and operational budget. Governance participation without resources is theater. The resourcing requirement is funded by a governance levy on deployment — a small percentage of deployment cost earmarked exclusively for the representation of affected and future populations.
Information parity. All six nodes receive the same technical documentation about the proposed deployment, translated into accessible form for each node's constituency. The Governed node receives the same impact assessments, risk analyses, and capability evaluations as the Builders node — not redacted versions, not summaries, but the same documentation in comprehensible form. Information asymmetry is a governance weapon; this correction disarms it.
Adequate review time before forced votes. The Builders and Capital nodes have teams that analyze deployment proposals continuously. The Governed and Future nodes do not have this luxury. A mandatory review period — proportional to the deployment's scale and impact — ensures that nodes with fewer resources have sufficient time to conduct independent analysis before being asked to vote. No forced votes before the review period expires.
Six nodes without asymmetry correction is a governance Potemkin village — the appearance of inclusion with the structure of exclusion.
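The three corrections can be read as preconditions: no vote is schedulable until resourcing, documentation, and review time are all in place. A minimal sketch under stated assumptions — the field names are hypothetical, and "domain" and "state" are placeholder identifiers for the two nodes the paper does not name:

```python
from dataclasses import dataclass

ALL_NODES = frozenset({"builders", "capital", "governed", "domain", "state", "future"})

@dataclass
class ProposalStatus:
    levy_funded: bool              # correction 1: dedicated resourcing in place
    docs_delivered_to: frozenset   # correction 2: nodes holding the full documentation
    review_days_elapsed: int       # correction 3: days since documentation delivery

def vote_may_proceed(status: ProposalStatus, required_review_days: int) -> bool:
    """True only once all three asymmetry corrections hold.

    The paper makes the review period proportional to deployment scale and
    impact but fixes no formula, so it is passed in as a parameter here.
    """
    return (
        status.levy_funded
        and ALL_NODES <= status.docs_delivered_to
        and status.review_days_elapsed >= required_review_days
    )
```

Failing any single check keeps the proposal off the agenda entirely; the decision rules themselves apply only afterward.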
The HEXAD architecture creates a governance structure. But structures without protections produce majoritarian harm. HC-027 (The Minority Protection Standard) develops the veto mechanism in detail: why it exists, how it operates, and what happens when it is invoked. HC-028 (The Human Anchor Principle) establishes the non-negotiable floor — the sovereignty threshold below which no governance decision, however broadly supported, can push a collaboration design.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.