Skills not practiced depreciate. The curve is documented. The leading indicators are already measurable.
The capability atrophy mechanism is not a prediction. It is a documented process with measurable indicators and historical precedents. The mechanism is simple: human capabilities not practiced depreciate. The depreciation follows a non-linear curve — slow initially, accelerating as the base of practice narrows, and approaching irreversibility when the population of skilled practitioners drops below the threshold needed for knowledge transmission.
This paper defines the first two stages of the collapse gradient — Stage 0 (Extractive Deployment Begins) and Stage 1 (Practice Atrophy) — documents the evidence for each, and specifies the leading indicators that signal the transition between stages. The subsequent papers in this series (HC-021 through HC-024) document Stages 2 through 4 and the prevention conditions.
Ward et al. (2017) demonstrated that the mere presence of a smartphone — even when turned off and face down — reduces available cognitive capacity. The mechanism is not distraction. It is cognitive offloading: when an external system is available to handle a cognitive task, the brain reduces its investment in maintaining that capability. This is the micro-level instantiation of the same mechanism that operates at population scale when AI handles domain tasks that humans previously practiced.
AI takes over tasks that humans previously performed. The design goal is efficiency or cost reduction. The extractive outcome is structural, not necessarily intentional — the deployment is not designed to preserve human capability because preserving human capability is not an economic objective.
The critical distinction: not all AI deployment is extractive. AI that handles administrative tasks to free human relational or judgment capacity (the deployment inversion described in HC-003) is not extractive — it increases net human capability. Extractive deployment specifically replaces human practice in the domain's irreducible functions, as defined by the Pair tables in Series 1.
Stage 0 describes the current state of most high-stakes AI deployments. The Goldman Sachs equity trading floor transition — from approximately 600 traders in 2000 to 2 traders plus 200 engineers and automated systems in 2017 — is a documented Stage 0 completion in routine finance. DTCC straight-through processing eliminated manual reconciliation roles entirely. These are not projections. They are documented transitions.
The structural feature of Stage 0 is that outcomes may improve in the short term. Automated trading reduces certain error types. AI diagnostic tools flag pathologies human readers miss. The system appears to work better. The capability loss is invisible because the pre-automation practitioner generation — the people who developed their skills before the automation was deployed — is still active. They can intervene when the system fails. They can train the next generation. The system's redundancy is hidden in the workforce it is displacing.
Humans stop practicing skills AI handles. Skills not practiced depreciate. The practitioner generation that developed pre-automation skills is still active — decline is not yet visible in outcomes because the safety net of experienced practitioners remains in place.
This is the deceptive stage. Metrics may improve (AI handles routine tasks more efficiently) while underlying human capability erodes silently. The improvement masks the loss because the loss becomes visible only when the system fails and no experienced practitioner is available to intervene.
The Programme for International Student Assessment (PISA) provides the largest longitudinal dataset on student cognitive performance across OECD countries. The arithmetic fluency trend from 2003 to 2022 shows a consistent directional signal: declining performance in populations with high calculator and technology dependence. It is a large-N, longitudinal finding, and the direction of the trend is uncontested in the assessment literature.
The PISA data does not prove that technology caused the arithmetic decline — multiple confounding variables are present. But it documents the correlation between technology-dependent learning environments and reduced performance in a skill that technology handles. This is exactly the Stage 1 pattern: the automated function (calculation) degrades in the human population as practice decreases, while the automation masks the decline because calculators are always available.
FAA Advisory Circular 120-111 documents the degradation of manual flight skills in the glass-cockpit era. As cockpit automation increased from the 1980s through the 2010s, pilots' proficiency in manual flight — hand-flying the aircraft without autopilot — measurably declined. The FAA response was to mandate periodic manual flight practice, which is itself evidence of Stage 1: the regulatory system recognized that automated operation was degrading the human capability needed for automated-system failure.
The aviation precedent is particularly instructive because aviation is the domain with the most sophisticated human-machine interface design and the most rigorous training requirements. If Stage 1 atrophy occurs in aviation — where the problem is explicitly recognized and where mandatory practice requirements exist — the atrophy is structurally inevitable in domains with less rigorous capability preservation.
UK National Literacy Trust data documents declining spelling proficiency in populations with high autocorrect usage. This is a microcosm of the Stage 1 mechanism: the automated system (autocorrect) handles a cognitive task (spelling), reducing practice, producing measurable skill depreciation. The compensatory system is always present during normal use, masking the decline.
Research on GPS usage and spatial orientation skills exists but is methodologically mixed. Some studies find reduced spatial memory in heavy GPS users; others find the effect is small or confounded. This paper uses GPS research as illustrative only, not as primary evidence. The primary evidence base is PISA (large-N, longitudinal, directional signal robust) and FAA data (domain-specific, regulatory recognition of the mechanism).
The depreciation of human capability under automation is not linear. It follows a curve with three phases:
Phase 1: Slow initial decline. The pre-automation practitioner population is large. Skills depreciate individually but the aggregate capability base remains above the functional threshold. Experienced practitioners compensate for less-experienced ones. The system appears resilient.
Phase 2: Accelerating decline. As the pre-automation generation retires and fewer new practitioners develop full capability, the aggregate skill base narrows. Each retirement removes more relative capability from the system. The curve steepens.
Phase 3: Threshold crossing. The practitioner base drops below the level needed to transmit tacit knowledge to the next generation. At this point, the skill cannot be rebuilt through normal training mechanisms because there are too few experienced practitioners to serve as mentors, models, and quality standards. This is the Stage 1 → Stage 2 transition, examined in HC-021 (The Tacit Knowledge Problem).
The depreciation curve's danger is not its endpoint. It is the point on the curve where normal recovery mechanisms — training, practice, mentorship — cease to function because the base of expertise needed to run them has itself depreciated.
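The three phases above can be sketched as a minimal population model. This is an illustrative toy, not an empirical estimate: the retirement rate, replacement rate, and transmission threshold are all assumed parameters chosen only to reproduce the curve's shape. The one structural assumption doing the work is that replacement scales with the remaining mentor base, which is what makes the decline accelerate (Phase 2) and then become unrecoverable below the threshold (Phase 3).

```python
# Toy model of the three-phase depreciation curve. All parameters are
# illustrative assumptions, not measured values.

def simulate_practitioner_base(initial=1000.0, years=60,
                               retire_rate=0.08, replace_rate=0.06,
                               transmission_threshold=150.0):
    """Track the skilled-practitioner population year by year.

    New fully-skilled practitioners are produced in proportion to the
    existing base AND to its share of the original base (fewer mentors
    means fewer complete apprenticeships). Below the transmission
    threshold, replacement fails entirely.
    """
    base = float(initial)
    history = []
    for _ in range(years):
        retirements = retire_rate * base
        if base >= transmission_threshold:
            # Mentorship capacity shrinks quadratically with the base.
            new_practitioners = replace_rate * base * (base / initial)
        else:
            new_practitioners = 0.0  # Phase 3: transmission has failed
        base = max(base - retirements + new_practitioners, 0.0)
        history.append(base)
    return history

curve = simulate_practitioner_base()
```

Run with these assumed parameters, the year-over-year percentage loss starts small (Phase 1), steepens as the mentor base narrows (Phase 2), and jumps to the full retirement rate once the threshold is crossed (Phase 3) — the same shape the prose describes, with none of the specific numbers meant literally.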
The collapse gradient is operationally useful only if transitions between stages can be detected in advance. The following indicators signal the Stage 0 → Stage 1 transition:
Apprenticeship and training registrations. Department of Labor data on apprenticeship registrations in affected trades. A sustained decline in new registrations in trades where AI handles entry-level tasks is a Stage 1 leading indicator.
Deliberate practice hours. Training program curricula that reduce practice time in automated functions (e.g., medical schools reducing time on skills AI handles). The curriculum is the transmission mechanism: when its practice hours shrink, the transmission channel narrows.
Entry-level hiring. Reduction in entry-level positions in the specific functions AI handles. Entry-level positions are where practitioners develop their initial skill base. Eliminating these positions eliminates the on-ramp for the next generation of skilled practitioners.
The 30-day test. From the Fidelity definition in the FTP Framework: could the humans in this collaboration perform the irreducible domain functions adequately if the AI were unavailable for 30 days? A declining score on this test over time is a direct Stage 1 measurement. HC-024a (The Early Warning Record) specifies how this test is applied across all eight domains.
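Because the 30-day test is scored repeatedly over time, the Stage 1 signal is a trend, not a single reading. The sketch below shows one way a declining trend could be flagged; the scoring scale (fraction of irreducible functions performable without the AI) and the consecutive-year rule are illustrative assumptions, not part of the FTP Framework's published definition.

```python
# Hypothetical trend detector for annual 30-day test scores.
# Scale and decline rule are assumptions for illustration only.

def stage1_warning(scores, min_years=3, tolerance=0.0):
    """Flag a Stage 0 -> Stage 1 leading-indicator signal.

    `scores` is a chronological list of annual 30-day test scores,
    e.g. the fraction of irreducible domain functions the human team
    could perform adequately with the AI unavailable. A decline
    sustained for at least `min_years` consecutive years (each drop
    exceeding `tolerance`) triggers the warning.
    """
    consecutive_declines = 0
    for prev, cur in zip(scores, scores[1:]):
        if cur < prev - tolerance:
            consecutive_declines += 1
        else:
            consecutive_declines = 0  # trend broken; reset the run
        if consecutive_declines >= min_years:
            return True
    return False
```

A stable or noisy score series would not trigger the flag; only a sustained multi-year decline would, which matches the paper's framing of the indicator as a directional trend rather than a snapshot.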
Stage 1 is where intervention is most effective and least costly. The pre-automation practitioner generation is still available. Training programs can be redesigned. Practice requirements can be mandated (as aviation did with AC 120-111). The curve can be arrested before it steepens.
HC-021 (The Tacit Knowledge Problem) examines Stage 2: what happens when the pre-automation generation retires and tacit knowledge — the embodied, contextual, experiential knowledge that cannot be fully documented — fails to transfer. HC-022 (The Single-Point Fragility Record) documents Stage 3: the catastrophic failures that occur when automated systems fail and no human with sufficient competence exists to intervene. HC-023 (The Common Faculty Problem) examines why the current AI wave produces a Stage 4 risk that prior automation waves did not.
Together, these four papers map the collapse gradient from extractive deployment to civilizational fragility. HC-024a (The Early Warning Record) makes the gradient empirically testable and operationally useful. HC-024 (What Prevention Actually Requires) specifies the structural conditions that prevent Stage 3 and Stage 4 — the resilience floor.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.