HC-001 · The Capability Pairs · Saga XI: The Collaboration

The Irreducible Human

What cannot be transferred to a machine — not because machines aren't good enough yet, but because the capability is constitutively human

The Capability Floor · Saga XI: The Collaboration · 18 min read · Open Access · CC BY-SA 4.0
8 domains examined for irreducible human capability — education, finance, construction, healthcare, law, governance, science, care

3 structural dependencies — embodiment, lived consequence, relational presence — that define the capability floor

0 of these dependencies addressable through improved processing power, data volume, or model architecture

The Claim

This paper makes a specific claim: an identified set of human capabilities is constitutively dependent on embodiment, lived consequence, or relational presence. These capabilities are not merely beyond current AI capability. They are structurally beyond it — the dependency is on what the human is, not on what the human computes.

The distinction matters because most discussion of human-AI capability divides tasks by what AI can and cannot currently do. This framing implies a frontier — capabilities AI hasn't reached yet. The irreducibility claim is different. It identifies capabilities where the mechanism of production is not computation, pattern recognition, or information processing — and therefore cannot be replicated by a system whose fundamental operation is computation, pattern recognition, and information processing, regardless of scale.

This is not a mystical claim. It is a structural one, and it is falsifiable. If a machine can produce genuine embodied judgment, lived consequential accountability, or therapeutic relational presence — not the simulation of these, but the thing itself — then the claim falls. The evidence reviewed here suggests it will not fall, because the capabilities depend on conditions machines do not have and are not approaching.

The Distinction That Matters

Hubert Dreyfus spent four decades articulating a distinction that artificial intelligence research has spent the same four decades either ignoring or mischaracterizing. In What Computers Can't Do (1972) and its successor What Computers Still Can't Do (1992), Dreyfus argued that human expertise depends on embodied, situated engagement with the world that cannot be captured in formal rules. His 2001 On the Internet extended this to presence: certain human capacities require being bodily present in shared situations.

The Dreyfus argument was widely dismissed in the AI community as vitalist or retrograde. But the specific claims have held up across successive waves of AI capability — including deep learning, which Dreyfus did not predict. The reason is that his argument was never about computational limits. It was about structural prerequisites. A chess program that defeats every human on earth still does not have an embodied relationship to risk. A large language model that produces text indistinguishable from human writing still does not bear consequences for what it says.

The structural test
For each claimed irreducible capability, the test is not "can a machine do this?" but "does this capability depend on a condition that machines structurally lack?" Embodiment, lived consequence, and relational presence are not problems to be solved. They are conditions to be had.

Michael Polanyi's The Tacit Dimension (1966) provides the epistemological foundation. Polanyi demonstrated that human knowledge includes a substantial component that cannot be made fully explicit — "we can know more than we can tell." This tacit knowledge is not the part of knowledge we haven't gotten around to writing down. It is the part that exists in the relationship between the knower and the known, in embodied practice, in contextual sensitivity that is not separable from the practitioner who holds it.

Harry Collins and Robert Evans formalized this in their 2007 typology of expertise. They distinguished between "interactional expertise" (the ability to talk about a domain fluently) and "contributory expertise" (the ability to contribute to practice in the domain). AI systems, including the most advanced language models, demonstrate interactional expertise at high levels. They do not demonstrate contributory expertise in domains requiring embodied judgment, because contributory expertise in those domains requires being in the situation — not representing it.

Richard Sennett's The Craftsman (2008) extends this to skill: the craft knowledge that develops through ten thousand hours of embodied practice with materials produces a form of judgment that is not separable from the hands, eyes, and bodily presence of the practitioner. This is not nostalgia. It is a specific claim about where judgment lives in domains where materials, people, and environments are the medium.

The First Dependency: Embodiment

A construction foreman walking a site reads the ground, the weather, the feel of a structural member under load, the sound of a joint under stress. This is not sensory data processing. It is decades of embodied experience producing an integrated judgment that operates faster and more reliably than explicit analysis — not because it is computationally faster, but because it draws on a knowledge base that exists in the practitioner's body and cannot be fully externalized.

A surgeon operating in a living body makes continuous micro-adjustments based on tissue resistance, bleeding patterns, and the tactile quality of anatomical planes. Robotic surgical systems enhance precision along predefined axes but do not replace the surgeon's embodied judgment about which plane to develop, when tissue quality indicates a complication, or when the plan needs to change in real time.

Embodiment is not input. It is a mode of being in relation to the world that produces a kind of knowledge unavailable to systems that model the world from outside it.

"We can know more than we can tell." — Michael Polanyi, The Tacit Dimension (1966)

The Second Dependency: Lived Consequence

A judge who sentences a defendant bears that decision. Not metaphorically — the judge carries the weight of having determined another person's liberty, in a way that shapes subsequent judgment, that constitutes judicial wisdom, and that is the basis of accountability. A risk-scoring algorithm bears nothing. The absence of lived consequence is not a flaw in the algorithm. It is a structural feature that makes the algorithm categorically different from the human decision-maker — and that difference matters for certain categories of decision.

Decisions that require accountability — where someone must answer for the outcome, must carry the weight of having chosen, must face the person affected and account for why — depend on lived consequence as a structural feature of the decision-maker. This is not about liability assignment (a legal mechanism that can be designed around any technology). It is about the constitutive role of consequence in judgment: the way that knowing you will bear the outcome shapes how you decide.

This is why the "human in the loop" in many AI-assisted decisions is cosmetic. If the human cannot meaningfully intervene, does not understand the basis for the recommendation, and faces no practical consequence for rubber-stamping it, then the lived consequence condition is not met. The human is present but not consequential. The paper on the meaningful override (HC-017) examines this failure mode in detail.

The Third Dependency: Relational Presence

A therapist sitting with a person in crisis provides something that cannot be decomposed into the information exchanged. The therapeutic intervention is partly the content of what is said. But the documented clinical effect of therapeutic presence — of being genuinely witnessed and held by another person — depends on the fact that another person is doing the witnessing. Not a system that represents a person. A person.

Bowlby's attachment theory (1969), confirmed by decades of subsequent research including Rutter's Romanian orphan studies (1998), establishes that human development depends on relational bonds with specific other humans. Cacioppo and Patrick's loneliness research (2008) demonstrates that relational isolation produces dose-response health effects comparable to smoking — and that the mechanism operates through the quality of human connection, not through information exchange that could theoretically be provided by any system.

A teacher who notices the quietly struggling child in the back of the classroom is not processing behavioral signals. They are in a relational field with thirty children, drawing on embodied and relational knowledge about each child, and producing an intervention that depends on the child knowing that a specific adult cares about them specifically. This is the documented mechanism of protective adult relationships in developmental psychology (Resnick et al., 1997).

Relational presence is not empathy-as-feature. It is the ontological condition of being a person in relation to another person. AI systems can simulate components of this. They cannot instantiate it, because they are not persons and the other party knows they are not persons.

The Capability Floor

These three dependencies — embodiment, lived consequence, relational presence — define a floor below which human capability cannot be automated without losing the capability itself. The floor is not a prediction about AI limitations. It is a structural claim about what certain capabilities are.

The eight domain papers that follow (HC-002 through HC-010) apply this framework to specific fields. In each domain, the three-axis analysis identifies: (1) the natural human-machine pair, (2) whether current deployment satisfies Fidelity, Transparency, and Participation, and (3) the documented consequences of extractive design. The Pair table in each domain paper maps directly to these three dependencies — the left column (human irreducible) draws from embodiment, consequence, and relational presence. The right column (machine irreplaceable) draws from scale, consistency, endurance, and processing capacity.

Named Condition · HC-001
The Capability Floor
The proposed set of human capabilities constitutively dependent on embodiment, lived consequence, or relational presence — capabilities that are not merely currently beyond AI, but structurally beyond it, because their production mechanism is not computation. The tripartite taxonomy (embodiment/consequence/presence) synthesizes concepts from Dreyfus and Polanyi, but the three-axis structure is novel to this analysis. The Capability Floor defines the lower bound of what must remain human in any collaboration design that claims to preserve human capacity.

What Follows

If the Capability Floor holds — and the evidence reviewed here says it does — then the design question for human-AI collaboration is not "how much can we automate?" but "where is the floor, and are we designing above or below it?"

HC-002 (The Machine Complement) establishes the corresponding ceiling: what machines do that exceeds human capacity structurally, not merely in speed. Together, HC-001 and HC-002 define the complementarity — the lock and the key — that the domain papers then apply to education, finance, construction, healthcare, law, governance, science, and care.

A methodological caveat: the Pair framework is an analytical construction, not a discovered natural boundary. The division between "human-irreducible" and "machine-replaceable" is the Institute's best current assessment — it is contestable, and the boundary will shift as capabilities evolve. AlphaFold's protein structure predictions, for example, generate genuine scientific hypotheses — an activity the Science domain Pair table places in the human column. The framework must include an update mechanism: when evidence demonstrates that a capability previously classified as human-irreducible is reliably performed by machines with equivalent or superior outcomes, the Pair table should be revised. The falsifiability section of the keystone (I11-001) names the conditions under which the entire framework would require abandonment.

The FTP Framework (Series 2) uses the Capability Floor as its measurement standard. Fidelity — the terminal test of the FTP cascade — asks whether the humans in a collaboration are becoming more or less capable in the floor capabilities. If the floor erodes, the collaboration has failed — regardless of what efficiencies it produces.

The Collapse Vector (Series 4) documents what happens when the floor is breached — the five-stage depreciation from extractive deployment to civilizational fragility. And the keystone (The Collaboration Standard) synthesizes the floor, the framework, the collapse gradient, and the governance structure into a single operational standard.

Every paper in this saga links back here. The Capability Floor is the foundation.


References

Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.