This is the full synthesis. If the work is cited externally, this is the paper to cite. Every other paper is a chapter; this is the whole argument.
Every domain has a natural pairing. Where AI covers the technical, scalable, consistent, and physically demanding — humans cover the relational, ethical, aesthetic, and contextually sovereign. The goal is not to merge them. The goal is to identify the lock and design the key.
The current dominant deployment design does not do this. It is extractive: it replaces human practice in irreducible functions rather than freeing humans from machine-appropriate tasks. The consequence is documented and measurable: a depreciation curve that begins with practice atrophy, proceeds through tacit knowledge loss, and terminates in single-point fragility — systems that fail catastrophically when the automation they depend on encounters conditions outside its training distribution, and no human with sufficient competence exists to intervene.
This saga names the mechanism, provides the measurement standard, maps the collapse trajectory, and specifies the governance architecture and prevention conditions that produce the alternative. What follows is the synthesis.
Ten papers (HC-001 through HC-010) applied a three-axis framework to eight domains. In each, the analysis identifies the natural human-machine complementarity (Axis 1), tests current deployment against FTP criteria (Axis 2), and documents the consequences of extractive design (Axis 3). The unified capability taxonomy:
The left column draws from three structural dependencies identified in HC-001: embodiment, lived consequence, and relational presence. The right column draws from four structural capabilities identified in HC-002: scale, consistency, endurance, and speed. The Capability Floor (left) and the Scale Threshold (right) define the natural complementarity — the lock and the key.
Fidelity, Transparency, Participation (FTP) form a dependency cascade, evaluated in the order Transparency, then Participation, then Fidelity, not three parallel tests:
Transparency (HC-011) at three levels: functional (what the AI does), process (how it produces outputs), audit (independent verification). The security/IP defense is valid only against Level 3 — most failing deployments are opaque at Levels 1 and 2 where the defense does not apply.
Participation (HC-012) at two tiers: Threshold (affected populations identified, represented, and able to trigger review) and Full (direct governance access with genuine capacity to modify designs). Most current deployments fail Threshold.
Fidelity (HC-013): are the humans in this collaboration becoming more or less capable over time in the domain's irreducible functions? Measured against the left column of each domain's Pair table. The 30-day test: could the humans perform the irreducible functions adequately if the AI were unavailable for 30 days?
The FTP Audit Instrument (HC-014) operationalizes these into 18 structured questions with cascade enforcement: Fidelity cannot receive a "Satisfies" verdict if either preceding criterion fails.
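The cascade-enforcement rule above can be sketched in code. This is a minimal illustrative sketch, not the HC-014 instrument itself: the class name, function name, and verdict strings are assumptions chosen for this example; only the rule (Fidelity cannot receive "Satisfies" if Transparency or Participation fails) comes from the text.

```python
from dataclasses import dataclass

SATISFIES, FAILS = "Satisfies", "Fails"

@dataclass
class FtpVerdict:
    transparency: str
    participation: str
    fidelity: str  # raw Fidelity assessment, before cascade enforcement

def enforce_cascade(v: FtpVerdict) -> FtpVerdict:
    """Apply the FTP dependency cascade: Fidelity cannot receive a
    'Satisfies' verdict if either preceding criterion fails."""
    if FAILS in (v.transparency, v.participation):
        # Downgrade Fidelity regardless of its raw assessment.
        return FtpVerdict(v.transparency, v.participation, FAILS)
    return v
```

The point of encoding the rule this way is that it cannot be satisfied by compliance theater: a deployment opaque at the Transparency level fails Fidelity automatically, whatever its raw capability metrics show.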
Validation status: The FTP framework, HEXAD, CSI, DAP, and the 30-day test are theoretically grounded instruments awaiting empirical validation. They have not been tested in controlled settings or deployed at scale. The specifications are presented as testable standards, not as validated assessment tools.
Five stages from extractive deployment to civilizational fragility, with observable leading indicators at each transition:
The irreversibility threshold is at the Stage 2→3 transition. Before Stage 3, recovery is possible through training, practice mandates, and capability preservation investment. After Stage 3, the practitioner base needed for recovery has itself depreciated below the transmission threshold.
Prior automation waves targeted domain-specific capabilities sequentially: physical strength in manufacturing, arithmetic in finance, pattern recognition in logistics. The atrophy in each domain was domain-specific and did not compound across domains.
The current AI wave targets language, reasoning, and judgment — the same underlying cognitive substrate across all domains simultaneously. If these faculties atrophy, they do not atrophy in construction or in medicine. They atrophy in the human. This produces single-point fragility at the level of human cognition itself — a risk no prior automation wave created.
The HEXAD comprises six nodes in AI deployment governance:
Supermajority: deployment requires agreement of 4 of the 6 nodes. Veto: any node can trigger mandatory review — not rejection, but review that must be completed before deployment proceeds. Human Anchor: the Fidelity criterion cannot be overridden by any consensus outcome — a deployment that fails Fidelity cannot be approved regardless of votes.
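The three decision rules compose into a single precedence order, which a short sketch makes explicit. This is an illustration under stated assumptions, not a specification: the function name, return strings, and generic node keys are hypothetical; the rules themselves (Human Anchor first, then any-node review trigger, then 4-of-6 supermajority) are from the text.

```python
def hexad_decision(votes: dict[str, bool],
                   review_triggers: set[str],
                   fidelity_satisfied: bool) -> str:
    """Evaluate a HEXAD deployment decision.

    votes            -- each of the six nodes' approve/reject vote
    review_triggers  -- nodes that have invoked the veto (mandatory review)
    fidelity_satisfied -- outcome of the Fidelity criterion (Human Anchor)
    """
    assert len(votes) == 6, "the HEXAD has exactly six nodes"
    if not fidelity_satisfied:
        # Human Anchor: no consensus outcome can override a Fidelity failure.
        return "rejected"
    if review_triggers:
        # Veto: review must be completed before deployment can proceed.
        return "review required"
    if sum(votes.values()) >= 4:
        # Supermajority: 4 of 6 nodes must agree.
        return "approved"
    return "rejected"
```

Note the ordering: the Fidelity check precedes the vote count entirely, which is what makes the Human Anchor an anchor rather than a seventh vote.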
Three power asymmetry corrections prevent the structure from reproducing existing imbalances: (1) dedicated resourcing for The Governed and The Future equivalent to what Builders and Capital bring; (2) information asymmetry correction — all parties receive the same technical documentation in accessible form; (3) time asymmetry correction — adequate review time before forced votes.
A non-negotiable lower bound below which no efficiency argument, consensus decision, or governance outcome can push a legitimate collaboration design. Operationalized per domain:
Education: Children retain developmental experiences that require human relational presence — social-emotional learning during critical periods cannot be AI-mediated. Healthcare: Patients retain the right to human clinical judgment at critical junctures — diagnosis communication, treatment decisions integrating values, end-of-life care. Law: Defendants retain the right to human decision-makers at sentencing and conviction — the Chouldechova impossibility makes algorithmic-only sentencing structurally unjust. Governance: Citizens retain the right to human deliberation in collective decisions that bind them — the Habermas legitimacy condition.
Finance: Strategic judgment with moral accountability cannot be fully delegated. Construction: Craft judgment and safety assessment require embodied presence. Science: Hypothesis formation and research ethics require human judgment. Care: Therapeutic presence cannot be substituted — the product is the relationship, not the service.
Four structural conditions prevent Stage 3 and Stage 4 — and none is currently produced by default:
Mandatory practice requirements. The FAA AC 120-111 model: when automation displaces human practice in critical functions, mandate periodic practice to prevent atrophy. Aviation recognized this. No other domain has.
FTP compliance as deployment prerequisite. The audit instrument (HC-014) applied before deployment, not after harm. The cascade enforcement prevents the compliance theater that characterizes current governance.
HEXAD governance with affected population representation. The Governed node filled with genuine representation, not self-appointed proxies. The veto mechanism protecting structural minorities from majority-stakeholder harm.
Cultural and institutional valuation of human capability. The hardest condition. Market systems optimize for efficiency. Human capability preservation is an externality — a cost borne by the practitioners and communities, not by the deployers. Until this changes, the other three conditions fight the current. Policy can create the conditions. Markets, left alone, will not.
The Anti-Extractive Architecture is not utopian. It is the minimum viable alternative to the documented collapse trajectory. Every component — the Pair tables, the FTP cascade, the collapse gradient, the HEXAD governance, the Sovereignty Floor — is derived from documented evidence, built on established frameworks, and designed to produce consistent, reproducible assessments. The question is not whether it is achievable. The question is whether we will design toward it before the Stage 2→3 transition makes recovery structurally impractical.
A research program that cannot name its own disconfirmation criteria is not a research program — it is an assertion. This section names the evidence that would weaken or falsify Saga XI's central argument.
If such disconfirming conditions were demonstrated at scale and replicated across contexts, the thesis would require fundamental revision.
Internal: This paper is part of The Collaboration (I11 series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.