Whether humans in a collaboration become more or less capable over time in domain-irreducible functions is measurable — not a value preference
Fidelity is the goal the cascade serves. Transparency (HC-011) must be satisfied first — you cannot assess what you cannot see. Participation (HC-012) must be satisfied second — if the populations whose capability is at stake have no structural input, there is no mechanism to define what counts as capability preservation, and no way to detect when the collaboration is optimizing for something else. Fidelity is the terminal test: are the humans in this collaboration becoming more or less capable over time in the functions that cannot be delegated to AI without loss?
The cascade is not a hierarchy of importance. Fidelity is what matters. Transparency and Participation are necessary preconditions — structural requirements that must be in place before Fidelity can be meaningfully assessed. A system that claims Fidelity without Transparency is unverifiable. A system that claims Fidelity without Participation has no mechanism for the affected population to contest the claim.
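To make the ordering concrete, here is a minimal sketch of the gating logic, assuming a simple satisfied/not-satisfied judgment for each precondition. The function and field names are illustrative, not part of the framework's own vocabulary:

```python
from dataclasses import dataclass

@dataclass
class CascadeAssessment:
    """Illustrative record of one collaboration's cascade status."""
    transparency_satisfied: bool    # HC-011: can the collaboration be seen?
    participation_satisfied: bool   # HC-012: do affected populations have structural input?
    fidelity_score: float | None    # terminal test; meaningful only if the preconditions hold

def report_fidelity_claim(a: CascadeAssessment) -> str:
    # Transparency first: a Fidelity claim without it is unverifiable.
    if not a.transparency_satisfied:
        return "unverifiable: Transparency (HC-011) not satisfied"
    # Participation second: without it, the affected population cannot contest the claim.
    if not a.participation_satisfied:
        return "uncontestable: Participation (HC-012) not satisfied"
    # Only now is the Fidelity score worth reporting at all.
    if a.fidelity_score is None:
        return "preconditions met; Fidelity not yet assessed"
    return f"Fidelity score: {a.fidelity_score:.2f}"
```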
Fidelity is not measured against total human capability. This distinction is critical, and it is where most "human-centered AI" frameworks fail before they begin.
A finance professional working with AI may become less capable at manual spreadsheet construction and more capable at strategic judgment, pattern recognition across markets, and scenario modeling. The net change in total capability is not the question. If the gains are in AI-mediated tasks (tasks performed through or with the AI), they do not count toward Fidelity. Fidelity measures capability in the domain's irreducible functions: those that cannot be delegated to AI without loss of something essential to the domain's purpose.
Ward et al. (2017) demonstrated that the mere presence of a smartphone reduces available cognitive capacity, even when the phone is face-down and silent. The cognitive drain is not a function of use — it is a function of availability. Carr (2011) documented the broader pattern in The Shallows: sustained use of tools that outsource cognitive functions produces measurable changes in the cognitive capacities that were outsourced. The tool does not merely assist; it restructures the cognitive ecology of the user.
Parasuraman and Manzey (2010) established the automation complacency framework in Human Factors: when humans work with reliable automated systems, their monitoring performance degrades over time. The degradation is not carelessness — it is a predictable, measurable cognitive adaptation. The human learns to trust the system, and the cognitive resources previously allocated to the automated function are reallocated elsewhere. This is efficient in the presence of automation. It is catastrophic when the automation is unavailable.
The measurement standard for Fidelity is not abstract. It is derived from the left column of each domain's Pair table in Series 1 — the column that identifies the irreducible human capabilities for that domain. This makes Series 1 structurally load-bearing for the entire FTP framework: without the domain-specific identification of irreducible human capabilities, Fidelity has no measurement standard.
Each of the eight domains examined in Series 1 produces a specific Fidelity criterion derived from its Pair table. In education: the capacity for pedagogical judgment — reading a student, adapting in real-time, making the decision that no algorithm captures. In healthcare: independent diagnostic reasoning — the capacity to reach a clinical judgment without AI confirmation. In law: legal reasoning from first principles — the capacity to construct an argument that was not retrieved from a database.
The criteria are different because the domains are different. A single "Fidelity score" applied uniformly across all domains would be meaningless. The whole point is that each domain has specific irreducible human capabilities, and Fidelity measures preservation of those specific capabilities.
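One way to hold that domain-specificity in place operationally is a plain mapping from each domain to the irreducible-capability criteria drawn from the left column of its Pair table. The sketch below is illustrative only: the entries paraphrase the three examples above rather than quoting the canonical Series 1 wording, and the remaining domains are elided.

```python
# Illustrative sketch: Fidelity criteria keyed by domain, each derived from the
# left column of that domain's Pair table in Series 1. The entries paraphrase
# the examples in the text; they are not the canonical Series 1 wording.
FIDELITY_CRITERIA: dict[str, list[str]] = {
    "education": ["pedagogical judgment: reading a student and adapting in real time"],
    "healthcare": ["independent diagnostic reasoning, without AI confirmation"],
    "law": ["legal reasoning from first principles, not retrieval from a database"],
    # ... the remaining Series 1 domains, each with its own Pair-table criteria
}

def irreducible_functions(domain: str) -> list[str]:
    """Look up the functions the Fidelity assessment must target for a domain."""
    if domain not in FIDELITY_CRITERIA:
        raise KeyError(f"no Pair table registered for domain: {domain}")
    return FIDELITY_CRITERIA[domain]
```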
Braverman (1974) documented the historical pattern in Labor and Monopoly Capital: the systematic separation of conception from execution in industrial labor. The worker who once designed and built a product is divided into a planner and an operator, with the operator retaining only the execution and the conception moving to management. The AI analog is precise: the work of a professional who once both formed the judgment and carried it out is divided between a generator and an approver, with the AI generating and the human merely approving. The irreducible capability is the judgment, not the approval.
The Fidelity test has a concrete operationalization: could the humans in this collaboration perform the irreducible domain functions adequately if the AI were unavailable for 30 days?
The 30-day test is not a thought experiment. It is the operational definition of Fidelity. If the answer is "no" — if the humans could not perform the irreducible functions without the AI — then the collaboration has failed Fidelity, regardless of how productive or efficient it appears.
The 30-day window is not arbitrary. It is long enough to distinguish genuine capability loss from temporary rustiness — a professional who has not performed a function recently may need a few days to regain fluency, and this is not a Fidelity failure. But it is short enough to represent a realistic disruption scenario: system outage, regulatory suspension, vendor failure. The question is not whether humans could eventually relearn the capabilities, but whether they currently possess them.
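As a sketch of what one administration of the test could record, assume each irreducible function is rated adequate or inadequate by an assessor under conditions of AI unavailability, and the score is the fraction rated adequate. The rating procedure and the scoring rubric here are assumptions for illustration; the actual indicators and rubrics are the business of HC-014.

```python
from dataclasses import dataclass

@dataclass
class ThirtyDayTestResult:
    """One professional's performance on one irreducible function, AI unavailable."""
    function: str                # e.g. "independent diagnostic reasoning"
    performed_adequately: bool   # assessor's judgment after the 30-day window

def fidelity_score(results: list[ThirtyDayTestResult]) -> float:
    """Fraction of irreducible functions performed adequately without the AI.

    Illustrative scoring only; HC-014 defines the real indicators and rubrics.
    """
    if not results:
        raise ValueError("no irreducible functions were assessed")
    passed = sum(r.performed_adequately for r in results)
    return passed / len(results)
```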
A declining Fidelity score over time is the violation signature. If the 30-day test produces a lower score in Year 2 than Year 1, the collaboration is eroding the irreducible human capabilities it should be preserving. This is the trajectory that matters: not the absolute score at any point, but the direction.
Fidelity requires a baseline. Before the AI collaboration begins — or, for existing collaborations, at the earliest practical point — the irreducible capabilities must be assessed. The 30-day test is then administered periodically: annually at minimum, more frequently in high-stakes domains. The comparison is longitudinal: the same professionals, the same irreducible functions, measured over time.
This is not impractical. Professional licensing already requires periodic competency demonstration in medicine, law, aviation, and engineering. Fidelity extends the principle: the competency assessment must specifically target the irreducible functions identified in the domain's Pair table, and it must be administered under conditions of AI unavailability.
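A sketch of the longitudinal comparison follows, assuming each administration of the test reduces to a single score per year for the same professionals and the same irreducible functions. The flagging logic is the violation signature described above: a year-over-year decline, not the absolute score.

```python
def fidelity_trajectory(scores_by_year: dict[int, float]) -> str:
    """Flag the violation signature: a Fidelity score that declines over time.

    `scores_by_year` maps assessment year to that year's score for the same
    population and the same irreducible functions. Illustrative only; the
    cadence and thresholds belong to HC-014.
    """
    years = sorted(scores_by_year)
    if len(years) < 2:
        return "baseline only: no trajectory yet"
    declines = [
        (earlier, later) for earlier, later in zip(years, years[1:])
        if scores_by_year[later] < scores_by_year[earlier]
    ]
    if declines:
        return f"violation signature: Fidelity declined across year pairs {declines}"
    return "Fidelity preserved or improving across assessments"

# Example: Year 1 baseline of 0.85, Year 2 at 0.70 -> the declining trajectory is flagged.
print(fidelity_trajectory({1: 0.85, 2: 0.70}))
```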
The violation is not a failure of intention. The deployers are not trying to erode human capability. The violation is structural: the incentive system optimizes for throughput, efficiency, and cost reduction. These metrics improve as more cognitive work is delegated to AI. Human capability in delegated functions atrophies as a predictable consequence of the delegation. The system is working as designed. The question is whether the design serves human capability or organizational efficiency, and the Fidelity test makes the answer measurable.
"AI augments human capability, it doesn't replace it. Humans become more capable, not less, by working with AI." This is sometimes true — for AI-mediated capabilities. A lawyer using AI legal research may become more capable at pattern identification across large case corpora. But Fidelity does not measure AI-mediated capability. It measures capability in irreducible functions — the functions that depend on human judgment, experience, and reasoning that cannot be replicated by the AI without loss. The augmentation objection assumes that all capability gains are equivalent. They are not. Capability gained through the AI is contingent on the AI's availability. Capability in irreducible functions is sovereign.
The cascade is now complete. Transparency enables Participation enables Fidelity. The three criteria are interdependent, ordered, and measurable. What remains is operationalization: how do you take these three criteria and turn them into a structured assessment that can be applied to any human-AI collaboration?
HC-014 (The FTP Audit Instrument) constructs that assessment. It takes the three cascade criteria — Transparency, Participation, Fidelity — and operationalizes them into a structured audit with specific indicators, scoring rubrics, and domain-specific measurement protocols. HC-015 (The Compliance Theater Record) documents what happens when organizations satisfy the form of these requirements without the function — the pattern of ethics boards without authority, principles without enforcement, and audits without independence.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.