HC-024a · The Collapse Vector · Saga XI: The Collaboration

The Early Warning Record

For each collapse stage: the observable leading indicators, the data sources that track them, and the threshold values that signal stage transition.

The Stage Indicators · Open Access · CC BY-SA 4.0
5 · collapse stages with specified leading indicators — each stage transition is observable before it completes
8 · domains assessed for current stage position — education, finance, construction, healthcare, law, governance, science, care
1 · critical transition — Stage 2→3 — the irreversibility threshold where intervention must occur

The Empirical Test

A collapse model that cannot be tested is not a model. It is rhetoric. This paper makes the collapse gradient specified in HC-020 through HC-023 empirically testable and operationally useful by defining, for each of the eight domains analyzed across this series, the current estimated collapse stage, the currently measurable leading indicators, the data sources that track them, and the threshold values that signal stage transition.

The collapse gradient describes five stages: Stage 0 (Extractive Deployment Begins), Stage 1 (Practice Atrophy), Stage 2 (Transmission Failure), Stage 3 (Single-Point Fragility), and Stage 4 (Civilizational Lock-In). Each stage has observable precursors. The precursors are not predictions — they are measurable conditions that either obtain or do not. The contribution of this paper is to specify what those measurements are, domain by domain, so that the model can be falsified, updated, or confirmed by evidence rather than argument.

The domain assessments below draw from the evidence bases established in HC-020 (the depreciation curve and practice atrophy data), HC-021 (the tacit knowledge transmission problem), HC-022 (single-point fragility precedents), and HC-023 (the common faculty problem and cross-domain generality). Each assessment is approximate, deliberately conservative, and intended as a baseline for longitudinal tracking rather than a definitive classification.

The Collapse Gradient Framework

The collapse gradient is a staging system, not a timeline. Different domains may occupy different stages simultaneously. A domain may occupy different stages in different sub-functions — routine finance may be at Stage 2 in equity trading while remaining at Stage 0 in relationship banking. The staging describes the structural condition, not the calendar date.
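The staging logic above can be sketched as a small data model. This is a minimal sketch for illustration only: the stage names and the finance example (routine equity trading at Stage 2, relationship banking at Stage 0) come from this paper; the identifiers and the `domain_range` helper are hypothetical conveniences, not anything the series specifies.

```python
from enum import IntEnum

class Stage(IntEnum):
    """The five collapse stages. Ordering is structural, not calendar time."""
    EXTRACTIVE_DEPLOYMENT = 0
    PRACTICE_ATROPHY = 1
    TRANSMISSION_FAILURE = 2
    SINGLE_POINT_FRAGILITY = 3
    CROSS_DOMAIN_COMPOUNDING = 4

# A domain is staged per sub-function, not as a monolith: per the finance
# example, routine equity trading and relationship banking differ by two stages.
finance = {
    "equity_trading_routine": Stage.TRANSMISSION_FAILURE,  # Stage 2
    "relationship_banking": Stage.EXTRACTIVE_DEPLOYMENT,   # Stage 0
}

def domain_range(subfunctions: dict) -> tuple:
    """The stage spread across a domain's sub-functions (lowest, highest)."""
    stages = sorted(subfunctions.values())
    return stages[0], stages[-1]
```

Representing the domain as a dict of sub-functions, rather than a single stage value, is what lets a longitudinal tracker record the within-domain spread the paragraph above describes.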

Collapse Gradient — Current Domain Positions (2026 estimate)

Stage 0 · Extractive Deployment
Stage 1 · Practice Atrophy
Stage 2 · Transmission Failure
Stage 3 · Single-Point Fragility
Stage 4 · Cross-Domain Compounding

Education · Stage 0–1
Finance · Stage 2 (routine)
Construction · Stage 0
Healthcare · Stage 0–1
Law · Stage 0–1
Governance · Stage 0
Science · Stage 1
Care · Stage 0
Irreversibility threshold: Stage 2→3 transition. Before Stage 3, recovery through training and practice mandates is possible. After Stage 3, the practitioner base needed for recovery has itself depreciated below the transmission threshold.

The CISA (Cybersecurity and Infrastructure Security Agency) Critical Infrastructure Resilience framework provides the structural template for Stage 3 thresholds. CISA defines critical infrastructure as systems whose incapacitation would have a debilitating effect on security, economic stability, public health, or safety. The collapse gradient applies the same logic to human capability: the Stage 3 threshold is reached when human capability in a domain has degraded to the point where its absence would produce debilitating effects on the domain's essential functions during automated system failure.

Education

Current Estimated Stage · 0–1
Content Delivery Automation Beginning, SEL Capacity Not Yet Displaced

AI-assisted content delivery is expanding rapidly — adaptive learning platforms, automated grading, AI tutoring systems. The deployment is concentrated in content transmission: delivering information, assessing factual recall, providing practice problems. This is the domain sub-function most amenable to automation and the sub-function with the weakest claim to irreducibility.

The irreducible functions of education — social-emotional learning (SEL) capacity, developmental attunement, the relational scaffolding that enables a child to tolerate frustration and persist through difficulty — are not yet directly displaced. But the structural compression is beginning: as AI handles content delivery, the institutional pressure is to reduce the teacher-to-content ratio rather than to redeploy teacher capacity toward relational functions. The extractive pattern is present in the economic logic even where it has not yet produced measurable atrophy.

Leading indicators: Teacher training program enrollment (Department of Education data). Ratio of AI-delivered to human-delivered instruction hours in K-12 (district-level data). Teacher retention rates in SEL-intensive roles vs. content-delivery roles. Student-reported quality of teacher relationship (PISA well-being module).
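Indicator lists like the one above could be operationalized for longitudinal tracking along these lines. A hedged sketch: the record shape is an assumption, and the threshold value shown is an illustrative placeholder — this paper names the indicators and data sources but the numeric thresholds here are not specified values.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A leading indicator: what is measured, where the data comes from,
    and the threshold value that signals a stage transition."""
    name: str
    data_source: str
    threshold: float
    falling_is_bad: bool  # True if decline toward the threshold signals transition

    def signals_transition(self, value: float) -> bool:
        """Does the current measurement cross the stage-transition threshold?"""
        if self.falling_is_bad:
            return value <= self.threshold
        return value >= self.threshold

# Illustrative placeholder values only, not figures this paper specifies.
enrollment = Indicator(
    name="teacher_training_enrollment_index",
    data_source="Department of Education",
    threshold=0.6,          # hypothetical: 60% of an arbitrary baseline year
    falling_is_bad=True,
)
```

The `falling_is_bad` flag matters because the lists below mix directions: enrollment and retention signal transition as they fall, while flash-crash frequency or AI-to-human instruction ratios signal transition as they rise.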

Finance (Routine)

Current Estimated Stage · 2
Tacit Knowledge Transmission Declining — Goldman Trading Floor Transition Complete

Routine finance is the domain furthest along the collapse gradient with documented evidence. The Goldman Sachs equity trading floor transition — from approximately 600 traders in 2000 to 2 traders plus 200 engineers and automated systems in 2017 — represents a completed Stage 0 and advanced Stage 1. DTCC straight-through processing eliminated manual reconciliation entirely.

The Stage 2 assessment is based on the tacit knowledge transmission criterion defined in HC-021: when the population of experienced practitioners drops below the threshold needed to transmit tacit knowledge to the next generation through mentorship, modeling, and situated learning. In routine equity trading, this threshold has plausibly been crossed. The people who understood market microstructure through embodied practice — who could read order flow, sense liquidity shifts, and exercise judgment under uncertainty — are largely retired or retrained. The knowledge they carried is not fully encoded in the automated systems that replaced them.

Leading indicators: Number of active human traders in major equity desks (firm disclosures). Apprenticeship-style training programs in trading (industry surveys). Flash crash frequency and severity (SEC data). Recovery time after automated system failures (market microstructure data). Human override success rate during system anomalies.

Construction

Current Estimated Stage · 0
Automation Beginning, Craft Knowledge Still Dominant

Construction remains largely pre-automation in its core craft functions. Robotic bricklaying, 3D-printed structures, and AI-assisted design exist but have not displaced the skilled trades at population scale. The domain's physical complexity, site variability, and the embodied nature of craft knowledge — the carpenter's feel for grain, the mason's sense of mortar consistency, the electrician's judgment about load distribution in non-standard configurations — create structural barriers to automation that do not exist in information-processing domains.

The Stage 0 assessment reflects the beginning of extractive deployment in construction planning, estimation, and project management — information-processing sub-functions where AI is being deployed for efficiency. The craft functions themselves remain at pre-Stage 0, but the economic pressure on apprenticeship programs is already measurable.

Leading indicators: Apprenticeship registrations in skilled trades (Department of Labor). Average age of master craftspeople in each trade (union demographic data). Ratio of pre-fabricated to site-built components (industry data). Time-to-competency in apprenticeship programs (training outcome data).

Healthcare

Current Estimated Stage · 0–1
Diagnostic AI Deployed, Physician Relational Capacity Compressed by Administrative Burden

Healthcare presents a compound picture. AI diagnostic tools — radiology image analysis, pathology slide screening, ECG interpretation — are deployed and in many cases outperform human readers on narrow metrics. This is Stage 0 in diagnostic sub-functions. The practice atrophy concern is real: if radiologists stop reading routine scans because AI handles them, the skill base for identifying the atypical cases that AI misses will erode on the depreciation curve documented in HC-020.

But the more immediate threat to healthcare capability is not AI displacement of clinical skills. It is administrative burden displacing relational capacity. Physicians spend an estimated 49% of their time on electronic health records and desk work (Sinsky et al., 2016). This is not AI-caused — it is a pre-existing compression of the irreducible relational function by administrative systems. AI could either deepen this compression (by adding more documentation requirements) or relieve it (by handling administrative tasks to free relational capacity). The direction is not yet determined.

Leading indicators: Physician time allocation (direct patient care vs. administrative tasks). Diagnostic accuracy in areas where AI is deployed vs. where it is not (controlled comparisons). Residency training hours on skills AI handles (curriculum data). Patient-reported quality of physician relationship (CAHPS surveys). Burnout and attrition rates correlated with administrative burden vs. clinical challenge.

Law

Current Estimated Stage · 0–1
Risk Assessment Tools Deployed, Judicial Override Still Nominally Active

AI-assisted legal research, contract review, and risk assessment tools are deployed across the legal profession. Algorithmic risk assessment in criminal sentencing (COMPAS, PSA) represents the most consequential deployment — automated systems making or influencing decisions about human liberty. The extractive pattern is present: judges receive algorithmic risk scores and must actively override them to exercise independent judgment.

The Stage 0–1 assessment reflects the documented automation bias problem: when an automated system provides a recommendation, human decision-makers systematically defer to it even when they have grounds for disagreement. Skitka et al. (2000) documented this in aviation; the same mechanism operates in judicial settings. The judicial override is nominally active — judges can and do deviate from algorithmic recommendations — but the practice of independent risk assessment is under structural pressure from the availability of automated scores.

Leading indicators: Judicial override rates of algorithmic recommendations (court data). Time spent on independent case assessment vs. review of algorithmic output (judicial workflow studies). Law school curriculum hours on judgment-intensive skills vs. technology-assisted skills. Pro se litigant outcomes in AI-assisted vs. traditional proceedings.

Governance

Current Estimated Stage · 0
AI in Government Services Beginning

AI deployment in governance is at Stage 0: beginning but not yet producing measurable practice atrophy in the irreducible governance functions. Automated benefits determination, AI-assisted policy analysis, chatbot-mediated citizen services, and predictive policing represent the current deployment frontier. The extractive potential is significant — governance involves complex judgment about competing values, contextual interpretation of rules, and democratic accountability that cannot be fully automated — but the current deployments are concentrated in routine processing rather than judgment-intensive functions.

The governance domain has a unique vulnerability: democratic legitimacy requires that consequential decisions be traceable to accountable human judgment. If governance functions are automated to the point where the human officials nominally responsible for decisions lack the capability to evaluate the automated output, the democratic accountability chain breaks. This is a Stage 2 risk specific to governance that does not have an equivalent in other domains.

Leading indicators: Percentage of government decisions made by or substantially influenced by automated systems (agency audit data). Civil servant training hours on judgment-intensive functions (OPM data). Citizen satisfaction with automated vs. human-mediated government services. Error rates in automated benefits determination (agency quality reviews). Legislative and regulatory staff capacity for independent technology assessment.

Science

Current Estimated Stage · 1
Hypothesis Testing Acceleration Outrunning Governance

Science presents a distinctive pattern. AI is accelerating hypothesis generation, literature synthesis, data analysis, and experimental design at a pace that exceeds the scientific community's capacity to evaluate, replicate, and govern the outputs. AlphaFold's protein structure predictions, AI-driven drug discovery pipelines, and automated experimental platforms represent genuine scientific capability — but the acceleration creates a governance gap when human researchers cannot independently verify AI-generated results at the rate those results are produced.

The Stage 1 assessment is based on the practice atrophy criterion applied to scientific judgment: the capacity to evaluate evidence quality, identify confounds, exercise skepticism about seemingly clean results, and maintain the social practices of peer review and replication that constitute science's error-correction mechanism. When AI generates hypotheses and results faster than human scientists can critically evaluate them, the practice of critical evaluation atrophies even if the scientists remain employed. The skill is not displaced by unemployment but by throughput — the volume of AI output exceeds human evaluative capacity.

Leading indicators: Ratio of AI-generated to human-evaluated scientific claims (bibliometric analysis). Replication rates for AI-assisted vs. traditional research (reproducibility studies). Peer review turnaround times and quality metrics (journal data). Graduate training hours on experimental design vs. AI tool operation (curriculum data). Retraction rates in AI-intensive vs. traditional research domains.

Care

Current Estimated Stage · 0
Substitutive Deployment Beginning

Care — eldercare, childcare, disability support, mental health services — is the domain where the extractive deployment thesis is most clearly a thesis about human dignity rather than system efficiency. AI companion robots, therapeutic chatbots, automated monitoring systems, and virtual care platforms represent the beginning of substitutive deployment: using AI systems to perform functions previously performed by human caregivers.

The Stage 0 assessment reflects the early state of this deployment. The care domain's irreducible functions — relational attunement, emotional co-regulation, the capacity to be genuinely present with another person's suffering — are the functions most resistant to automation and most damaged by substitution. The evidence base for care capability atrophy under automation is thin because the deployment is early. But the structural logic is clear: if institutions adopt AI care systems to reduce labor costs, and if the economic pressure on human caregiving roles intensifies, the pipeline of skilled human caregivers will narrow. HC-023 (The Common Faculty Problem) identifies care as the domain where Stage 4 risk is highest precisely because the irreducible function (human relational presence) cannot be rebuilt by technical means once lost.

Leading indicators: Caregiver workforce size and retention (BLS data). Nursing and social work program enrollment (Department of Education). Ratio of AI-mediated to human-mediated care interactions in institutional settings. Patient/client-reported quality of care relationship. Caregiver compensation relative to cost of living (a leading indicator of pipeline health).

The Irreversibility Threshold: Stage 2 to Stage 3

The critical transition in the collapse gradient is not Stage 0 to Stage 1. Practice atrophy at Stage 1 is reversible: the pre-automation practitioner generation is still available, training programs can be redesigned, practice requirements can be mandated. The FAA response to manual flight skill degradation (AC 120-111) is a Stage 1 intervention that works because the knowledge base for recovery still exists in the practitioner population.

The critical transition is Stage 2 to Stage 3. Stage 2 (Transmission Failure) means the tacit knowledge needed to train new practitioners is no longer available in the population at sufficient density. Stage 3 (Single-Point Fragility) means the automated system has become a single point of failure with no human backup capable of performing the domain's essential functions during system failure. The transition from Stage 2 to Stage 3 is the irreversibility threshold — the point beyond which normal recovery mechanisms (training, practice, mentorship) cannot restore the capability because the human infrastructure needed to run those mechanisms has itself degraded.
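The Stage 2 and Stage 3 criteria above each reduce to a single comparison, and the irreversibility threshold is their conjunction. A sketch under stated assumptions: the quantities are on whatever domain-specific scale the indicators supply, and the threshold parameters are placeholders, not values this paper fixes.

```python
def stage_2_reached(practitioner_density: float, transmission_threshold: float) -> bool:
    """Stage 2 (Transmission Failure): too few experienced practitioners remain
    to transmit tacit knowledge through mentorship and situated learning."""
    return practitioner_density < transmission_threshold

def stage_3_reached(human_capability: float, minimum_viable: float) -> bool:
    """Stage 3 (Single-Point Fragility): during automated-system failure,
    remaining human capability cannot keep essential functions above
    acceptable performance levels."""
    return human_capability < minimum_viable

def past_irreversibility(practitioner_density: float, transmission_threshold: float,
                         human_capability: float, minimum_viable: float) -> bool:
    """The Stage 2→3 transition: normal recovery mechanisms (training, practice,
    mentorship) fail because the practitioners needed to run them are gone."""
    return (stage_2_reached(practitioner_density, transmission_threshold)
            and stage_3_reached(human_capability, minimum_viable))
```

The conjunction captures why the transition is the irreversibility threshold: either condition alone leaves a recovery path (retraining while mentors remain, or human backup while transmission limps along); both together close them.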

The question is not whether collapse can happen. The evidence base across HC-020 through HC-023 establishes that the mechanism is real and the precedents exist. The question is whether the leading indicators are being tracked, and whether the irreversibility threshold will be recognized before it is crossed.

The CISA Critical Infrastructure Resilience framework provides the structural logic for Stage 3 thresholds: a system is critically fragile when the failure of a single component (or small number of components) produces cascading effects that degrade the system's essential functions below acceptable performance levels. Applied to human capability: a domain reaches Stage 3 when the failure of its automated systems would produce performance degradation below acceptable levels because insufficient human capability exists to compensate.

The Stage 3 threshold for each domain is defined by the domain's minimum viable human capability — the capability level below which the domain cannot perform its essential functions during automated system failure. HC-024 (What Prevention Actually Requires) specifies these thresholds and the structural conditions needed to maintain them.

Named Condition · HC-024a
The Stage Indicators
The set of observable, measurable leading indicators that signal stage transitions in the collapse gradient — specified per domain, with data sources identified and threshold values defined. The Stage Indicators make the collapse model empirically testable: each domain's current stage position can be assessed against measurable criteria, tracked longitudinally, and updated as evidence accumulates. The critical indicator set is the Stage 2→3 transition: the irreversibility threshold where the practitioner base drops below the level needed to recover the capability through normal training mechanisms.

What Follows

This paper provides the measurement framework. HC-024b (The Meaningful Work Problem) addresses a dimension the collapse gradient does not capture: what happens to human meaning-making when AI takes the tasks that gave work its dignity. The collapse gradient measures capability. HC-024b measures the human cost of capability displacement that the gradient treats as a structural variable.

HC-024 (What Prevention Actually Requires) closes the series by specifying the structural conditions — in policy, design, governance, and cultural valuation — that prevent Stage 3 and Stage 4. The Early Warning Record makes the problem measurable. The prevention conditions make it actionable.


References

Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.