For each collapse stage, the observable leading indicators, the data sources that track them, and the threshold values that signal stage transition.
A collapse model that cannot be tested is not a model. It is rhetoric. This paper makes the collapse gradient specified in HC-020 through HC-023 empirically testable and operationally useful by specifying, for each of the eight domains analyzed across this series, its current estimated collapse stage, the leading indicators that can be measured now, the data sources that track them, and the threshold values that signal stage transition.
The collapse gradient describes five stages: Stage 0 (Extractive Deployment Begins), Stage 1 (Practice Atrophy), Stage 2 (Transmission Failure), Stage 3 (Single-Point Fragility), and Stage 4 (Civilizational Lock-In). Each stage has observable precursors. The precursors are not predictions — they are measurable conditions that either obtain or do not. The contribution of this paper is to specify what those measurements are, domain by domain, so that the model can be falsified, updated, or confirmed by evidence rather than argument.
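The framework just described (a stage, its leading indicators, their data sources, and their transition thresholds) can be held in a small record structure suited to longitudinal tracking. The following is a minimal sketch in Python; the field names, the example values, and the any-indicator aggregation rule are illustrative assumptions of this sketch, not definitions from the series:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Stage(IntEnum):
    """The five collapse-gradient stages named above."""
    EXTRACTIVE_DEPLOYMENT = 0
    PRACTICE_ATROPHY = 1
    TRANSMISSION_FAILURE = 2
    SINGLE_POINT_FRAGILITY = 3
    CIVILIZATIONAL_LOCK_IN = 4

@dataclass
class Indicator:
    """One measurable leading indicator for a domain (illustrative fields)."""
    name: str             # e.g. "experienced practitioners per cohort"
    data_source: str      # who or what tracks the measurement
    threshold: float      # value that signals a stage transition
    current_value: float
    higher_is_worse: bool = True  # direction of the threshold comparison

    def breached(self) -> bool:
        """True when the indicator has crossed its transition threshold."""
        if self.higher_is_worse:
            return self.current_value >= self.threshold
        return self.current_value <= self.threshold

@dataclass
class DomainAssessment:
    """Baseline record for longitudinal tracking of one domain."""
    domain: str
    stage: Stage
    indicators: list[Indicator] = field(default_factory=list)

    def transition_signalled(self) -> bool:
        # Aggregation rule (an assumption of this sketch, not the paper's):
        # a transition is signalled when any single indicator breaches.
        return any(i.breached() for i in self.indicators)
```

A domain assessment then becomes one row in a longitudinal dataset: re-measure `current_value` each period and watch `transition_signalled`, which is the sense in which the model is falsifiable by evidence rather than argument.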
The domain assessments below draw from the evidence bases established in HC-020 (the depreciation curve and practice atrophy data), HC-021 (the tacit knowledge transmission problem), HC-022 (single-point fragility precedents), and HC-023 (the common faculty problem and cross-domain generality). Each assessment is approximate, deliberately conservative, and intended as a baseline for longitudinal tracking rather than a definitive classification.
The collapse gradient is a staging system, not a timeline. Different domains may occupy different stages simultaneously. A domain may occupy different stages in different sub-functions — routine finance may be at Stage 2 in equity trading while remaining at Stage 0 in relationship banking. The staging describes the structural condition, not the calendar date.
The CISA (Cybersecurity and Infrastructure Security Agency) Critical Infrastructure Resilience framework provides the structural template for Stage 3 thresholds. CISA defines critical infrastructure as systems whose incapacitation would have a debilitating effect on security, economic stability, public health, or safety. The collapse gradient applies the same logic to human capability: the Stage 3 threshold is reached when human capability in a domain has degraded to the point where its absence would produce debilitating effects on the domain's essential functions during automated system failure.
In education, AI-assisted content delivery is expanding rapidly through adaptive learning platforms, automated grading, and AI tutoring systems. The deployment is concentrated in content transmission: delivering information, assessing factual recall, providing practice problems. This is the domain sub-function most amenable to automation and the sub-function with the weakest claim to irreducibility.
The irreducible functions of education — social-emotional learning (SEL) capacity, developmental attunement, the relational scaffolding that enables a child to tolerate frustration and persist through difficulty — are not yet directly displaced. But the structural compression is beginning: as AI handles content delivery, the institutional pressure is to reduce the teacher-to-content ratio rather than to redeploy teacher capacity toward relational functions. The extractive pattern is present in the economic logic even where it has not yet produced measurable atrophy.
Routine finance is, on the documented evidence, the domain furthest along the collapse gradient. The Goldman Sachs equity trading floor transition, from approximately 600 traders in 2000 to 2 traders plus 200 engineers and automated systems in 2017, represents a completed Stage 0 and an advanced Stage 1. DTCC straight-through processing eliminated manual reconciliation entirely.
The Stage 2 assessment is based on the tacit knowledge transmission criterion defined in HC-021: when the population of experienced practitioners drops below the threshold needed to transmit tacit knowledge to the next generation through mentorship, modeling, and situated learning. In routine equity trading, this threshold has plausibly been crossed. The people who understood market microstructure through embodied practice — who could read order flow, sense liquidity shifts, and exercise judgment under uncertainty — are largely retired or retrained. The knowledge they carried is not fully encoded in the automated systems that replaced them.
Construction remains largely pre-automation in its core craft functions. Robotic bricklaying, 3D-printed structures, and AI-assisted design exist but have not displaced the skilled trades at population scale. The domain's physical complexity, site variability, and the embodied nature of craft knowledge — the carpenter's feel for grain, the mason's sense of mortar consistency, the electrician's judgment about load distribution in non-standard configurations — create structural barriers to automation that do not exist in information-processing domains.
The Stage 0 assessment reflects the beginning of extractive deployment in construction planning, estimation, and project management — information-processing sub-functions where AI is being deployed for efficiency. The craft functions themselves remain at pre-Stage 0, but the economic pressure on apprenticeship programs is already measurable.
Healthcare presents a compound picture. AI diagnostic tools — radiology image analysis, pathology slide screening, ECG interpretation — are deployed and in many cases outperform human readers on narrow metrics. This is Stage 0 in diagnostic sub-functions. The practice atrophy concern is real: if radiologists stop reading routine scans because AI handles them, the skill base for identifying the atypical cases that AI misses will erode on the depreciation curve documented in HC-020.
But the more immediate threat to healthcare capability is not AI displacement of clinical skills. It is administrative burden displacing relational capacity. Physicians spend an estimated 49% of their time on electronic health records and desk work (Sinsky et al., 2016). This is not AI-caused — it is a pre-existing compression of the irreducible relational function by administrative systems. AI could either deepen this compression (by adding more documentation requirements) or relieve it (by handling administrative tasks to free relational capacity). The direction is not yet determined.
AI-assisted legal research, contract review, and risk assessment tools are deployed across the legal profession. Algorithmic risk assessment in criminal sentencing (COMPAS, PSA) represents the most consequential deployment — automated systems making or influencing decisions about human liberty. The extractive pattern is present: judges receive algorithmic risk scores and must actively override them to exercise independent judgment.
The Stage 0–1 assessment reflects the documented automation bias problem: when an automated system provides a recommendation, human decision-makers systematically defer to it even when they have grounds for disagreement. Skitka et al. (2000) documented this in aviation; the same mechanism operates in judicial settings. The judicial override is nominally active — judges can and do deviate from algorithmic recommendations — but the practice of independent risk assessment is under structural pressure from the availability of automated scores.
AI deployment in governance is at Stage 0: beginning but not yet producing measurable practice atrophy in the irreducible governance functions. Automated benefits determination, AI-assisted policy analysis, chatbot-mediated citizen services, and predictive policing represent the current deployment frontier. The extractive potential is significant — governance involves complex judgment about competing values, contextual interpretation of rules, and democratic accountability that cannot be fully automated — but the current deployments are concentrated in routine processing rather than judgment-intensive functions.
The governance domain has a unique vulnerability: democratic legitimacy requires that consequential decisions be traceable to accountable human judgment. If governance functions are automated to the point where the human officials nominally responsible for decisions lack the capability to evaluate the automated output, the democratic accountability chain breaks. This is a Stage 2 risk specific to governance that does not have an equivalent in other domains.
Science presents a distinctive pattern. AI is accelerating hypothesis generation, literature synthesis, data analysis, and experimental design at a pace that exceeds the scientific community's capacity to evaluate, replicate, and govern the outputs. AlphaFold's protein structure predictions, AI-driven drug discovery pipelines, and automated experimental platforms represent genuine scientific capability — but the acceleration creates a governance gap when human researchers cannot independently verify AI-generated results at the rate those results are produced.
The Stage 1 assessment is based on the practice atrophy criterion applied to scientific judgment: the capacity to evaluate evidence quality, identify confounds, exercise skepticism about seemingly clean results, and maintain the social practices of peer review and replication that constitute science's error-correction mechanism. When AI generates hypotheses and results faster than human scientists can critically evaluate them, the practice of critical evaluation atrophies even if the scientists remain employed. The skill is not displaced by unemployment but by throughput — the volume of AI output exceeds human evaluative capacity.
Care — eldercare, childcare, disability support, mental health services — is the domain where the extractive deployment thesis is most clearly a thesis about human dignity rather than system efficiency. AI companion robots, therapeutic chatbots, automated monitoring systems, and virtual care platforms represent the beginning of substitutive deployment: using AI systems to perform functions previously performed by human caregivers.
The Stage 0 assessment reflects the early state of this deployment. The care domain's irreducible functions — relational attunement, emotional co-regulation, the capacity to be genuinely present with another person's suffering — are the functions most resistant to automation and most damaged by substitution. The evidence base for care capability atrophy under automation is thin because the deployment is early. But the structural logic is clear: if institutions adopt AI care systems to reduce labor costs, and if the economic pressure on human caregiving roles intensifies, the pipeline of skilled human caregivers will narrow. HC-023 (The Common Faculty Problem) identifies care as the domain where Stage 4 risk is highest precisely because the irreducible function (human relational presence) cannot be rebuilt by technical means once lost.
The critical transition in the collapse gradient is not Stage 0 to Stage 1. Practice atrophy at Stage 1 is reversible: the pre-automation practitioner generation is still available, training programs can be redesigned, practice requirements can be mandated. The FAA response to manual flight skill degradation (AC 120-111) is a Stage 1 intervention that works because the knowledge base for recovery still exists in the practitioner population.
The critical transition is Stage 2 to Stage 3. Stage 2 (Transmission Failure) means the tacit knowledge needed to train new practitioners is no longer available in the population at sufficient density. Stage 3 (Single-Point Fragility) means the automated system has become a single point of failure with no human backup capable of performing the domain's essential functions during system failure. The transition from Stage 2 to Stage 3 is the irreversibility threshold — the point beyond which normal recovery mechanisms (training, practice, mentorship) cannot restore the capability because the human infrastructure needed to run those mechanisms has itself degraded.
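The staging logic in the two paragraphs above reduces to an ordered cascade of structural conditions. A sketch follows, assuming the predicate names below, which are glosses on the stage definitions rather than terms the series defines:

```python
def classify_stage(
    extractive_deployment: bool,  # Stage 0: substitutive AI deployment has begun
    practice_atrophy: bool,       # Stage 1: practitioner skills measurably degrading
    transmission_failed: bool,    # Stage 2: too few practitioners to train successors
    no_human_backup: bool,        # Stage 3: no human capability to cover system failure
    rebuild_impossible: bool,     # Stage 4: the capability cannot be rebuilt at all
) -> int:
    """Return the highest collapse-gradient stage whose condition obtains.
    Later stages presuppose earlier ones, so checks run from worst to best."""
    if rebuild_impossible:
        return 4
    if no_human_backup:
        return 3
    if transmission_failed:
        return 2
    if practice_atrophy:
        return 1
    if extractive_deployment:
        return 0
    return -1  # pre-Stage 0: no substitutive deployment yet
```

The irreversibility threshold the paper identifies is the boundary between a return value of 2 and a return value of 3: below it, training and mentorship can still restore the capability; above it, they cannot.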
The question is not whether collapse can happen. The evidence base across HC-020 through HC-023 establishes that the mechanism is real and the precedents exist. The question is whether the leading indicators are being tracked, and whether the irreversibility threshold will be recognized before it is crossed.
The CISA Critical Infrastructure Resilience framework provides the structural logic for Stage 3 thresholds: a system is critically fragile when the failure of a single component (or small number of components) produces cascading effects that degrade the system's essential functions below acceptable performance levels. Applied to human capability: a domain reaches Stage 3 when the failure of its automated systems would produce performance degradation below acceptable levels because insufficient human capability exists to compensate.
The Stage 3 threshold for each domain is defined by the domain's minimum viable human capability — the capability level below which the domain cannot perform its essential functions during automated system failure. HC-024 (What Prevention Actually Requires) specifies these thresholds and the structural conditions needed to maintain them.
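In code, the Stage 3 test is a single comparison against the domain's minimum viable human capability. Both the capability measure and the minimum viable value are domain-specific inputs that HC-024 is said to specify; this sketch supplies neither:

```python
def stage_3_breached(current_capability: float, minimum_viable: float) -> bool:
    """Stage 3 threshold test for one domain: too little human capability
    remains to perform essential functions during automated system failure.
    Units and the minimum-viable value are domain-specific assumptions."""
    return current_capability < minimum_viable
```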
This paper provides the measurement framework. HC-024b (The Meaningful Work Problem) addresses a dimension the collapse gradient does not capture: what happens to human meaning-making when AI takes the tasks that gave work its dignity. The collapse gradient measures capability. HC-024b measures the human cost of capability displacement that the gradient treats as a structural variable.
HC-024 (What Prevention Actually Requires) closes the series by specifying the structural conditions — in policy, design, governance, and cultural valuation — that prevent Stage 3 and Stage 4. The Early Warning Record makes the problem measurable. The prevention conditions make it actionable.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.