HC-024 · The Collapse Vector · Saga XI: The Collaboration

What Prevention Actually Requires

A specific, operationalizable set of structural conditions prevents Stage 3 and Stage 4 — and these conditions do not obtain by default.

The Resilience Floor
Open Access · CC BY-SA 4.0
4 structural conditions for prevention: policy, design, governance, and cultural valuation of human capability
8 domains requiring domain-specific minimum viable human capability thresholds
0 current AI governance frameworks that specify prevention conditions for the collapse gradient

The Problem

The collapse gradient documented across HC-020 through HC-024b describes a trajectory. This paper specifies what prevents it. Not in general terms — not "we need to be careful" or "governance is important" — but in specific, operationalizable structural conditions that either obtain or do not, that can be verified or falsified, and that produce the prevention outcome when present.

The prevention problem has a specific shape: the collapse gradient is produced by default when AI deployment is optimized for efficiency without capability preservation requirements. Markets do not spontaneously produce human capability preservation because capability preservation is a cost with no short-term return. The Depreciation Curve (HC-020) is invisible under normal operating conditions because the automation that causes it also compensates for it. The Tacit Knowledge Problem (HC-021) manifests only when the pre-automation generation retires — a timeline measured in decades. Single-Point Fragility (HC-022) becomes visible only during system failure. The Common Faculty Problem (HC-023) becomes visible only when multiple domains reach Stage 2 simultaneously.

Every incentive in the system points toward extractive deployment. Prevention requires structural conditions that counteract those incentives. This paper specifies four.

The Sovereignty Floor: Minimum Viable Human Capability

The Sovereignty Floor is the operationalization of a simple principle: for each domain, there exists a minimum level of human capability below which the domain cannot perform its essential functions during automated system failure, and below which the democratic accountability chain breaks because nominal human decision-makers lack the competence to evaluate the systems they oversee.

The Sovereignty Floor must be defined per domain because the irreducible functions differ. The minimum viable human capability for education is different from that for healthcare, finance, or governance. The FTP Framework's Pair tables (Series 1) specify the irreducible functions. The collapse gradient (Series 4) specifies the stages of their degradation. The Sovereignty Floor specifies the threshold below which degradation becomes unacceptable.

Domain Thresholds
Minimum Viable Human Capability by Domain

Education: Sufficient trained educators to maintain relational scaffolding (SEL capacity) for all students, with practice hours adequate to develop and maintain developmental attunement. Threshold: no student's primary educational relationship is with an AI system.

Finance: Sufficient human practitioners to intervene in automated trading during system failure, with tacit market knowledge maintained through regular practice. Threshold: human override capability demonstrated quarterly.

Construction: Sufficient master craftspeople to train the next generation in each trade, with apprenticeship pipelines maintained at historically sustainable levels. Threshold: apprenticeship-to-retirement ratio above 1.0 in each trade.

Healthcare: Sufficient physicians with independent diagnostic capability to operate during system failure, with relational capacity protected from administrative displacement. Threshold: the 30-day test — could the healthcare system function adequately if AI diagnostic tools were unavailable for 30 days?

Law: Sufficient judicial capacity for independent case assessment without algorithmic input, with override rates maintained above automation-bias thresholds. Threshold: judicial override exercised and documented in at least 15% of cases where algorithmic recommendation is available.

Governance: Sufficient civil servant capacity for independent policy analysis and decision-making, with democratic accountability chain intact. Threshold: no consequential government decision is made by an automated system without a human official who demonstrably understands and can justify the decision.

Science: Sufficient human capacity for independent hypothesis evaluation, experimental design, and peer review at the rate needed to govern AI-generated scientific output. Threshold: replication and review rates keep pace with publication rates.

Care: Sufficient human caregivers for all relational care functions, with no substitution of AI systems for human relational presence in eldercare, childcare, disability support, or mental health. Threshold: zero substitutive deployment in relational care functions.
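The thresholds above are deliberately stated as conditions that can be verified or falsified. As an illustrative sketch only — not part of the paper's framework — a few of them could be encoded as machine-checkable predicates. The metric names, the choice of three domains, and the comparison logic are hypothetical simplifications of the criteria stated in the text.

```python
# Illustrative encoding of three of the eight Sovereignty Floor thresholds
# as machine-checkable predicates. All names and metrics are hypothetical
# simplifications of the criteria stated in the text.

SOVEREIGNTY_FLOOR = {
    # construction: apprenticeship-to-retirement ratio above 1.0 in each trade
    "construction": lambda ratio: ratio > 1.0,
    # law: judicial override in at least 15% of algorithm-assisted cases
    "law": lambda override_rate: override_rate >= 0.15,
    # finance: human override capability demonstrated within the last quarter
    "finance": lambda quarters_since_demo: quarters_since_demo <= 1,
}

def below_floor(measurements: dict) -> list:
    """Return the domains whose measured value fails its floor predicate."""
    return [domain for domain, value in measurements.items()
            if not SOVEREIGNTY_FLOOR[domain](value)]
```

For example, a trade with an apprenticeship-to-retirement ratio of 0.8 would be flagged as below the floor even if the other two domains pass. A real assessment regime would of course require far richer metrics per domain; the point of the sketch is only that each threshold either obtains or does not.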

Condition 1: Mandatory Practice Requirements

The FAA's response to manual flight skill degradation (AC 120-111) is the strongest existing precedent for prevention. When the FAA recognized that cockpit automation was degrading pilots' manual flight skills, it mandated periodic manual flight practice. The mandate works because it is structural: regardless of the efficiency argument for full-time automation, pilots must practice manual flight. The practice preserves the capability. The capability provides the safety margin when automation fails.

Prevention Condition 1 extends this precedent to all eight domains: mandatory practice requirements for the irreducible human functions identified in the FTP Framework's Pair tables. The requirements must be:

Domain-specific. The practice requirements for healthcare differ from those for education, finance, or law. Each domain's requirements must be developed by domain experts with reference to the irreducible functions specified in the Pair tables.

Time-protected. Practice time must be structurally protected from efficiency pressure. If the practice requirement can be waived when the system is running smoothly — when it appears unnecessary — it will be waived. The FAA mandate works because it cannot be waived regardless of how well the autopilot is performing.

Assessed. Practice without assessment is compliance theater. The practice must be evaluated against performance standards that demonstrate maintained capability, not merely logged hours.
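The three requirements above — domain-specific, time-protected, assessed — compose conjunctively: failing any one voids compliance. A minimal sketch, assuming a hypothetical practice-log record (none of these field or function names come from the FAA mandate or the paper), makes the conjunction explicit:

```python
from dataclasses import dataclass

@dataclass
class PracticeRecord:
    """Hypothetical log entry for one practitioner's mandated practice period."""
    practitioner_id: str
    hours_logged: float
    assessment_passed: bool  # evaluated against performance standards
    waived: bool             # any waiver violates the time-protection requirement

def meets_requirement(record: PracticeRecord, required_hours: float) -> bool:
    """Illustrative check: logged hours alone are 'compliance theater'.
    The requirement holds only with no waiver, sufficient hours, AND a
    passed capability assessment."""
    return (not record.waived
            and record.hours_logged >= required_hours
            and record.assessment_passed)
```

Under this sketch, a practitioner with ample logged hours but no passed assessment still fails — mirroring the point that practice without assessment is compliance theater.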

The aviation precedent
Aviation is the only domain that has recognized Stage 1 atrophy and implemented a structural prevention mechanism. The fact that aviation — with its rigorous safety culture, regulatory authority, and catastrophic failure visibility — required a regulatory mandate to preserve human capability suggests that no domain will preserve human capability through voluntary action. The market does not produce capability preservation. Regulation must require it.

Condition 2: FTP Compliance as Deployment Prerequisite

The Fidelity-to-Purpose Framework specifies design criteria for AI deployment that preserve human capability in the irreducible domain functions. Currently, FTP compliance is aspirational — a design principle rather than a deployment requirement. Prevention requires that FTP compliance become a prerequisite for deployment, not a post-hoc audit.

This means that before an AI system is deployed in a high-stakes domain, it must demonstrate that its design preserves human capability in the domain's irreducible functions. The burden of proof is on the deployer, not on the affected population. The assessment must occur before deployment, when design changes are possible, not after deployment, when the extractive pattern is structurally embedded and the costs of reversal are prohibitive.

The lock-and-key model described in the FTP Framework — where AI handles the functions it is suited for and humans retain the functions that require human capabilities, with the interface designed to preserve and enhance human practice rather than replace it — must become the default deployment architecture. Currently, the extractive model is the default because it is cheaper and simpler. Changing the default requires regulatory or institutional mandates.

The pre-deployment FTP assessment must evaluate:

Capability impact. Which human capabilities will be affected by the deployment? Which of these are irreducible domain functions? What practice structures are in place to maintain them?

The 30-day test. If this AI system were unavailable for 30 days, could the humans in this collaboration perform the irreducible domain functions adequately? If not, the deployment creates single-point fragility and fails FTP compliance.

Dignity impact. Does the deployment preserve meaningful engagement for the humans involved? Does it maintain the conditions for competence, autonomy, and purposeful contribution identified in HC-024b?
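Because the burden of proof is on the deployer, the pre-deployment gate should default to denial: a criterion that has not been affirmatively demonstrated counts as a failure. The sketch below illustrates that default-deny logic; the record structure and names are hypothetical, not part of the FTP Framework itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FTPAssessment:
    """Hypothetical pre-deployment assessment record. None means
    'not demonstrated'; since the burden of proof is on the deployer,
    None fails just as False does."""
    capability_practice_in_place: Optional[bool]  # practice structures for affected irreducible functions
    passes_30_day_test: Optional[bool]            # adequate function with the AI unavailable for 30 days
    preserves_dignity: Optional[bool]             # competence, autonomy, purposeful contribution

def deployment_permitted(a: FTPAssessment) -> bool:
    """Illustrative gate: every criterion must be affirmatively True."""
    return all(v is True for v in (
        a.capability_practice_in_place,
        a.passes_30_day_test,
        a.preserves_dignity,
    ))
```

An assessment with an undemonstrated 30-day test is blocked, which is the structural inversion of current practice, where deployment proceeds unless harm is proven afterward.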

Condition 3: HEXAD Governance with Affected Population Representation

Governance of AI deployment decisions must include representation of the populations affected by those decisions. This is not a novel principle — it is the foundational insight of democratic governance applied to a new domain. But it is structurally absent from current AI deployment decisions, which are made by deployers (who benefit from efficiency gains) without meaningful input from practitioners (who bear the capability and dignity costs) or the populations served (who bear the risk of system failure without human backup).

The HEXAD governance model specifies multi-stakeholder governance for AI deployment decisions in high-stakes domains. Prevention Condition 3 requires that this governance include:

Practitioner voice. The people whose capabilities will be affected by the deployment must have a structural role in the deployment decision — not as consultants whose input may be ignored, but as stakeholders whose concerns must be addressed in the deployment design.

Affected population representation. The patients, students, citizens, and communities who will be served by the AI-augmented system must have representation in governance. Their interest in human capability preservation is direct: they are the ones who will experience the consequences of system failure without human backup.

Independent assessment. The governance body must have access to independent technical and domain expertise sufficient to evaluate deployment proposals. A governance body that lacks the competence to evaluate the systems it oversees cannot exercise meaningful oversight — this is itself a Stage 2 risk in governance, as noted in HC-024a.

Ostrom's (1990) research on governing the commons provides the theoretical foundation. Human capability in high-stakes domains is a commons — a shared resource that benefits everyone but that no individual actor has an incentive to preserve. Like fisheries, forests, and water systems, human capability commons can be depleted by individually rational extraction that is collectively catastrophic. Ostrom demonstrated that commons governance requires clear boundaries, collective choice arrangements, monitoring, graduated sanctions, and conflict resolution mechanisms. The same structural requirements apply to governing the human capability commons.

Condition 4: Cultural and Institutional Valuation of Human Capability

This is the hardest condition. Policy can mandate practice. Regulation can require FTP compliance. Governance can include affected populations. But none of these structural interventions are durable unless the culture and institutions in which they operate value human capability as something worth preserving independent of its efficiency contribution.

Market systems do not produce this valuation by default. In market logic, human capability is valuable when it is more efficient or effective than the alternative. When AI becomes more efficient or effective at a task, the market value of the human capability for that task drops to zero. The market does not distinguish between a capability that is economically obsolete and a capability that is civilizationally essential. The market does not price the option value of human capability — the value of having humans who can perform a function, even when they are not currently needed to perform it.

The deepest prevention condition is cultural: a society that values human capability only for its economic output will not preserve it when machines can produce the same output more cheaply. Prevention requires valuing human capability as a feature of human dignity, not merely as a factor of production.

Taleb's (2012) concept of antifragility provides part of the structural argument. Antifragile systems gain from disorder — they become stronger under stress. Human capability that is maintained through practice is antifragile: challenged practitioners develop deeper competence. Automated systems that replace practice produce fragility: the system works well until it fails, and then there is no human capability to compensate. The antifragility argument is that preserving human capability is not a cost — it is an investment in system resilience that pays off precisely when it is most needed.

Wildavsky's (1988) Searching for Safety makes the complementary argument that resilience requires redundancy. Safety does not come from preventing all failures — it comes from maintaining the capacity to recover from failures when they occur. Human capability is the redundancy layer in automated systems. Eliminating that redundancy to reduce costs is the structural equivalent of removing the backup generator to save on electricity bills. The savings are real until the power goes out.

Human Capability as Commons

Ostrom's framework for governing shared resources applies directly to human capability under AI deployment. Human capability in high-stakes domains has the structural features of a commons:

Subtractability. When one deployer eliminates human capability in their operations, the aggregate capability pool diminishes for everyone. Goldman's trading floor transition affected not only Goldman — it reduced the total population of experienced traders available to the financial system as a whole.

Difficulty of exclusion. No individual actor can prevent others from depleting the commons. A hospital that maintains physician diagnostic capability bears the cost of that maintenance while competing hospitals that eliminate it capture the efficiency gains.

Long time horizons. The costs of commons depletion are borne in the future, while the benefits of extraction are captured in the present. This temporal mismatch drives overextraction in all commons — fisheries, forests, and human capability alike.

The Ostrom conditions for sustainable commons governance — clear boundaries, collective choice, monitoring, graduated sanctions, conflict resolution, nested governance, and recognition by external authorities — provide the structural template for preventing human capability depletion. The Sovereignty Floor defines the boundary. HEXAD governance provides collective choice. The Stage Indicators (HC-024a) provide monitoring. FTP compliance provides graduated enforcement. And Prevention Condition 4 — cultural valuation — provides the normative foundation without which the structural mechanisms are brittle.

Antifragility and the Resilience Argument

The prevention case is not sentimental. It is structural. The argument is not that human capability should be preserved because humans deserve to feel useful. The argument is that human capability is a system component whose removal creates fragility — and that the fragility is invisible until the system fails.

CISA's (2023) Critical Infrastructure Resilience framework specifies that critical systems must maintain the ability to withstand and rapidly recover from disruptions. Applied to AI-augmented domains: a system that depends entirely on AI for its essential functions is not resilient. It is a single point of failure, as documented in HC-022. The prevention conditions specified in this paper are the structural requirements for maintaining resilience in AI-augmented systems.

The cost of prevention is measurable: maintaining practice requirements, conducting FTP assessments, operating governance bodies, and preserving capability that markets would otherwise eliminate. The cost of failure is also measurable, but it is measured in different units: flash crashes without human intervention capability (finance), diagnostic failure without human backup (healthcare), judicial decisions without human judgment (law), infrastructure failure without craft capability (construction), and the deaths of despair documented in HC-024b when meaningful work disappears without replacement.

The default trajectory
Without the four prevention conditions, the default trajectory is the collapse gradient. Not because anyone intends it, but because the structural incentives produce it. Each individual deployment decision is locally rational — more efficient, more accurate, less costly. The aggregate outcome is civilizational fragility. Prevention requires changing the structural conditions under which deployment decisions are made, not persuading individual deployers to be more thoughtful. The problem is structural. The solution must be structural.
Named Condition · HC-024
The Resilience Floor
The minimum set of structural conditions — mandatory practice requirements, FTP compliance as deployment prerequisite, HEXAD governance with affected population representation, and cultural valuation of human capability independent of economic output — that prevent the collapse gradient from progressing to Stage 3 (Single-Point Fragility) and Stage 4 (Civilizational Lock-In). The Resilience Floor is not an aspiration. It is an operationalizable specification: each condition either obtains or does not, each can be assessed against defined criteria, and the absence of any one condition permits the collapse gradient to advance. No current AI governance framework specifies these conditions. The Resilience Floor is what prevention actually requires.

What Follows

This paper closes The Collapse Vector series (Series 4 of the Human Capability sequence). The series has mapped a trajectory from documented capability atrophy (HC-020) through tacit knowledge transmission failure (HC-021), single-point fragility (HC-022), cross-domain generality (HC-023), empirical testability (HC-024a), the human meaning cost (HC-024b), and the structural prevention conditions specified here.

The collapse gradient is not inevitable. It is the default outcome of structural conditions that can be changed. The prevention conditions are specific, operationalizable, and within the capacity of existing institutions to implement. The question is not whether prevention is possible. It is whether the political and institutional will to implement the prevention conditions will emerge before the irreversibility threshold (Stage 2 to Stage 3) is crossed in the domains that matter most.

The Human Anchor Principle (HC-028) takes the Resilience Floor and grounds it in a broader theoretical framework: the principle that human capability is not merely an input to production but a feature of civilizational resilience that must be structurally preserved regardless of the availability of automated alternatives. The Risk Architecture series (RA-005) examines how the prevention conditions interact with existing regulatory and institutional structures.


References

Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.