HC-012 · The FTP Framework · Saga XI: The Collaboration

Participation: The Governance Requirement

The second prerequisite in the FTP cascade — you cannot design for Fidelity without the structured input of the populations whose capability is at stake.

The Consent Deficit · Open Access · CC BY-SA 4.0
2 tiers of Participation — Threshold (minimum for deployment) and Full (design aspiration) — most current deployments fail Threshold

0 high-stakes AI deployments in which affected populations have formal governance access equivalent to deployers

100% of K–12 AI education deployments where children — the primary affected population — cannot consent and are not represented

The Cascade Position

Participation is the second prerequisite in the FTP cascade. Transparency (HC-011) must be satisfied first — you cannot participate meaningfully in what you cannot see. Participation must be satisfied before Fidelity can be assessed — if the populations whose capability is at stake have no structural input, there is no mechanism to catch when the design optimizes for something other than human capability.

The FTP Cascade — dependency direction

Transparency
↓ prerequisite to meaningful
Participation ← you are here
↓ prerequisite to designing for
Fidelity (terminal test)

The Consent Deficit

Virginia Eubanks documented it in Automating Inequality (2018): automated systems making decisions about welfare eligibility, child protective services, and homelessness resource allocation — deployed to populations with no structural input into the system's design, no meaningful mechanism for contesting its decisions, and no governance access equivalent to the agencies and vendors that built it.

Ruha Benjamin named it in Race After Technology (2019): the New Jim Code — the encoding of racial and economic hierarchy into automated systems, produced by development teams and capital structures that systematically exclude the populations most affected by their outputs.

The pattern is consistent across every domain examined in Series 1. In education: AI tutoring platforms designed by engineers and purchased by administrators, deployed to children who cannot consent and teachers who were not consulted. In healthcare: diagnostic AI trained on datasets that underrepresent the populations most affected by diagnostic error. In criminal justice: risk assessment tools deployed by courts to defendants who have no access to the algorithm's logic, no input into its design, and no practical mechanism for challenging its output.

The violation signature
Deployment to populations with no formal representation in the governance structure. "Responsible AI" frameworks built exclusively by developers and capital. The consent deficit is not an oversight — it is a structural consequence of who bears the cost of AI deployment versus who makes the decisions about it.

Threshold Participation

Tier 1 — Minimum for Deployment

Three requirements: (1) Affected populations are identified and formally documented. (2) Their interests are formally represented in the governance structure — not through self-appointed proxies, but through representatives with documented accountability to the affected group. (3) A documented mechanism exists for affected populations to trigger review or modification of the system post-deployment.

Threshold Participation does not require that affected populations have veto power or direct design authority. It requires that they are seen, represented, and able to initiate change. This is the minimum. Most current deployments do not meet it.
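The three Threshold requirements amount to a conjunctive audit check: a deployment passes only if all three hold. A minimal sketch, with hypothetical names (the `Deployment` record and `meets_threshold_participation` function are illustrative, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Hypothetical record of a deployment's governance facts."""
    affected_populations_documented: bool  # (1) populations identified and documented
    accountable_representation: bool       # (2) representatives with documented
                                           #     accountability to the affected group
    review_trigger_mechanism: bool         # (3) documented post-deployment path to
                                           #     trigger review or modification

def meets_threshold_participation(d: Deployment) -> bool:
    """All three requirements must hold; any single gap fails Threshold."""
    return (d.affected_populations_documented
            and d.accountable_representation
            and d.review_trigger_mechanism)
```

Note the asymmetry this encodes: Threshold is not a weighted score where strength on one requirement offsets absence of another; a deployment with excellent documentation but no review mechanism still fails.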

The Threshold standard is deliberately minimal. It asks not whether affected populations govern the design, but whether they are formally present in the governance structure at all. The NIST AI Risk Management Framework (2023) gestures toward stakeholder engagement but does not specify structural requirements — it is a recommendation, not a governance architecture. Schuler and Namioka's Participatory Design (1993) established the principles for genuine inclusion in technology design, but participatory design remains a research methodology, not a deployment requirement.

The gap between the principle and the practice is the consent deficit. Everyone agrees affected populations should be consulted. Virtually no deployment structure requires it.

Full Participation

Tier 2 — Design Aspiration

Affected populations have direct governance access: structured input with genuine capacity to modify or reject designs, not post-hoc consultation. Full Participation means the people whose capabilities are at stake have a seat at the table where the design decisions are made — not a feedback form after the decisions are final.

Full Participation is the direction, not the minimum standard. Holding all current deployments to it would make the framework dismissible as utopian. But specifying it as the aspiration provides the trajectory — the direction in which governance design should move.

Why Two Tiers Matter

The two-tier structure is a strategic choice. Most current deployments fail Threshold Participation — the minimum. The affected populations are not even formally identified, let alone represented. This is the documentable, damning finding: not that governance falls short of an ideal, but that it does not meet a minimum that any reasonable assessment would require.

If the standard were only Full Participation, deployers could argue that the bar is unrealistically high — that meaningful governance of AI systems by all affected populations is impractical at scale. The argument has some force. But it does not apply to Threshold Participation. Identifying affected populations, providing formal representation, and establishing a review mechanism are not impractical. They are simply not done, because the incentive structure does not require them.

The consent deficit is not that governance is imperfect. It is that governance, for the most affected populations, does not exist.

The Structural Gap

The structural gap has a specific shape: the people who bear the consequences of AI deployment and the people who make the decisions about AI deployment are not the same people — and there is no governance mechanism that connects them.

In corporate AI governance, decisions are made by engineering teams, product managers, and executives whose incentives align with efficiency, growth, and competitive advantage. The affected populations — workers whose jobs change, patients whose diagnoses are mediated, defendants whose sentences are influenced, children whose learning is shaped — have no equivalent decision-making access. This is not a failure of intention. It is a structural feature of the way AI deployment decisions are currently organized.

The HEXAD governance structure (HC-026) proposes a six-node architecture that addresses this gap: Builders, Capital, Governed, Expertise, State, and Future. The Participation requirement is the structural foundation for the Governed node — the mechanism through which affected populations gain formal governance access. Without Participation, the HEXAD structure has an empty node. With Participation, it has the most important one filled.

The efficiency objection

"Including affected populations in governance slows deployment. The technology moves fast. Governance by committee produces mediocrity." This objection confuses speed with direction. Fast deployment in the wrong direction — toward the extractive trajectory documented in Series 4 — is not efficiency. It is accelerated harm. The question is not whether Participation slows deployment but whether deployment without Participation produces the right outcomes. The Series 1 evidence says it does not.

Named Condition · HC-012
The Consent Deficit
The structural gap between who bears the consequences of AI deployment and who makes the decisions about it — a gap that is not incidental but is produced by the incentive structure of AI development, where the deployer's interests (efficiency, growth, competitive advantage) are served by excluding the affected population's governance input. The Consent Deficit is measurable: for any given deployment, identify the affected populations and document their formal governance access. In the majority of current high-stakes deployments, that access is zero.
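The measurement described above can be sketched directly. Assuming a simple representation in which each deployment maps to the number of formal governance seats held by accountable representatives of its affected populations (names and data below are illustrative, not empirical findings from this paper):

```python
def consent_deficit_rate(governance_access: dict[str, int]) -> float:
    """Fraction of deployments whose affected populations hold zero
    formal governance seats. Keys name deployments; values count seats
    held by accountable representatives of the affected populations."""
    if not governance_access:
        return 0.0
    zero_access = sum(1 for seats in governance_access.values() if seats == 0)
    return zero_access / len(governance_access)

# Illustrative data only -- hypothetical deployments, not audit results.
sample = {"tutoring-platform": 0, "risk-assessment": 0, "triage-ai": 1}
rate = consent_deficit_rate(sample)  # 2 of 3 deployments have zero access
```

The point of reducing the condition to a count is the paper's claim of measurability: "formal governance access" is either documented or it is not, so the deficit is auditable rather than rhetorical.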

What Follows

Participation is necessary but not sufficient. A system can satisfy Transparency (the affected population can see what it does) and Participation (the affected population has governance input) and still fail Fidelity — the humans in the collaboration may still become less capable over time in the domain's irreducible functions. Fidelity is the terminal test.

HC-013 (Fidelity: The Capability Test) completes the cascade. It asks: are the humans in this collaboration becoming more or less capable over time in the functions identified as irreducible in the Series 1 Pair tables? The measurement standard is derived from the left column of each domain's table. The 30-day test operationalizes it: could the humans perform the irreducible functions adequately if the AI were unavailable for 30 days?
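The 30-day test described above has a simple pass/fail structure: every irreducible function must remain adequately performable without the AI. A sketch, assuming the Pair-table functions can be enumerated (the function name and input shape are hypothetical):

```python
def thirty_day_test(adequacy_without_ai: dict[str, bool]) -> bool:
    """Hypothetical operationalization of the 30-day test. Keys are a
    domain's irreducible functions (the left column of its Pair table);
    values record whether the humans could perform each one adequately
    with the AI unavailable for 30 days. Passing requires every
    function to hold -- a single atrophied function fails the test."""
    if not adequacy_without_ai:
        return False  # no functions assessed: nothing has been demonstrated
    return all(adequacy_without_ai.values())
```

As with Threshold Participation, the logic is conjunctive by design: capability in most functions does not compensate for loss of one, because each left-column function is irreducible.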

The cascade is now complete: Transparency enables Participation enables Fidelity. HC-014 (The FTP Audit Instrument) operationalizes all three into a structured assessment. HC-015 (The Compliance Theater Record) documents what happens when organizations satisfy the form of these requirements without the function.

← Previous in cascade
HC-011: Transparency — The Legibility Standard
Next in cascade →
HC-013: Fidelity — The Capability Test

References

Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.