HC-006 · The Capability Pairs · Saga XI: The Collaboration

Healthcare: The Presence-Precision Pair

Nearly half of physician time is spent on documentation. Most AI healthcare investment concentrates on diagnostics. The current deployment is inverted.

Key figures:
· 49% of physician time spent on EHR and documentation (Sinsky et al. 2016, Annals of Internal Medicine; widely cited, exact percentage subject to verification)
· Deployment inversion: AI targets diagnostics while administrative burden consumes physician relational capacity
· d=0.35 effect size of the patient-clinician relationship on health outcomes (range 0.22–0.48 across measured outcomes; Kelley et al. 2014, PLoS ONE)

Axis 1: The Pair

Human Irreducible | Machine Irreplaceable
Bedside presence: therapeutic value of being genuinely witnessed | Diagnostic pattern recognition across imaging at scale
Ethical judgment in treatment decisions | Drug interaction checking across the full pharmacological literature
Emotional support and trust: documented clinical effect | Continuous monitoring without fatigue
Holistic patient knowledge across time | Surgical precision beyond human fine-motor limits
Navigation of patient values and quality-of-life tradeoffs | Literature synthesis for evidence-based protocols
Community and cultural context in care | Administrative burden: documentation, coding, scheduling

The dimension pairs presented in this table are proposed analytical groupings based on theoretical affinity, not empirically derived clusters. No factor analysis, inter-rater reliability assessment, or expert panel validation (e.g., Delphi process) has been conducted. These pairings should be treated as a proposed taxonomy for organizing future empirical investigation.

The internal test for each item: would assigning the function to the other party, human or machine, produce a categorically inferior outcome, not merely a less efficient one?
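To make the proposed taxonomy concrete enough to organize that future empirical work, here is a minimal sketch of the pairs and the internal test in code. Every name in it (CapabilityPair, passes_internal_test, the shortened row labels) is illustrative, not part of the HC series.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityPair:
    """One row of the Pair table: a human-irreducible capability
    grouped with a machine-irreplaceable one (a proposed taxonomy,
    not an empirically derived cluster)."""
    human: str    # capability whose mechanism is relational or ethical
    machine: str  # capability resting on scale, precision, or tirelessness

HEALTHCARE_PAIRS = [
    CapabilityPair("Bedside presence", "Diagnostic pattern recognition at scale"),
    CapabilityPair("Ethical judgment in treatment decisions", "Drug interaction checking"),
    CapabilityPair("Emotional support and trust", "Continuous monitoring without fatigue"),
    CapabilityPair("Holistic patient knowledge across time", "Surgical precision"),
    CapabilityPair("Navigation of patient values", "Literature synthesis for protocols"),
    CapabilityPair("Community and cultural context", "Administrative burden absorption"),
]

def passes_internal_test(capability: str) -> bool:
    """The membership test: would assigning this capability to the
    other party produce a categorically inferior outcome, not merely
    a less efficient one? Deliberately unimplemented: the paper frames
    this as a judgment awaiting expert-panel validation (e.g., Delphi)."""
    raise NotImplementedError("requires empirical validation")
```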

The left column documents capabilities where the mechanism of action is relational presence itself. Kelley et al. (2014), in a meta-analysis published in PLoS ONE, found that the patient-clinician relationship has a small but consistent effect on health outcomes (d=0.35, range: 0.22–0.48 across measured outcomes). This is not a placebo effect in the dismissive sense. The relationship is part of the treatment. A patient who feels genuinely witnessed by their physician produces measurably different health outcomes from a patient who receives identical technical care without that relational quality.
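For intuition about the magnitude, the d values can be converted to a common-language effect size, the probability that a randomly chosen patient from the stronger-relationship condition outscores one from the weaker, via the standard transformation CL = Φ(d/√2). The conversion is offered here as an illustration; Kelley et al. do not report it.

```python
from statistics import NormalDist

def probability_of_superiority(d: float) -> float:
    """Common-language effect size: the probability that a randomly
    chosen patient from the stronger-relationship condition has a
    better outcome than one from the weaker (CL = Phi(d / sqrt(2)))."""
    return NormalDist().cdf(d / 2 ** 0.5)

# Kelley et al. (2014): d = 0.35, range 0.22-0.48 across outcomes.
for d in (0.22, 0.35, 0.48):
    print(f"d = {d:.2f} -> P(superiority) ~ {probability_of_superiority(d):.2f}")
# d = 0.35 corresponds to roughly a 60% chance, i.e., small but
# consistent, matching the meta-analysis's own characterization.
```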

The right column documents capabilities where machine structural advantages — tireless attention, scale across pharmacological databases, precision beyond human fine-motor limits — produce categorically superior outcomes in the specific function. Neither column is optional. The question is how current deployment allocates physician time between them.

The Human Column: Therapeutic Presence

The therapeutic relationship is not a soft variable. It is a documented clinical mechanism with measurable effects on treatment adherence, symptom reporting accuracy, diagnostic quality, and health outcomes. The Kelley et al. (2014) meta-analysis quantified this across multiple medical specialties: the physician who is present, attentive, and known to the patient produces different clinical outcomes from the physician who is technically competent but relationally absent.

Topol (2019), in Deep Medicine, argued that AI's deepest contribution to healthcare would be restoring the human relationship at the center of medicine, not by replacing physicians but by freeing them from the documentation burden that has eroded their capacity for presence. The argument is straightforward: a physician who spends 49% of working time on EHR and documentation (Sinsky et al., 2016, Annals of Internal Medicine) has roughly half of each workday unavailable for the relational work that constitutes the therapeutic mechanism of the left column.

The empathy erosion finding
Newton et al. (2008) and Hojat et al. (2009) documented a measurable decline in physician empathy during medical training. The current system already partially implements the extractive failure mode: it trains relational capacity out of physicians through overwork and administrative burden while having them perform machine-appropriate documentation work. The deployment inversion is not hypothetical. It is the status quo.

The ethical judgment component of the left column is irreducible in a different way. Treatment decisions in ambiguous cases — where evidence is incomplete, where patient values conflict with statistical optima, where quality-of-life tradeoffs have no objective answer — require a form of moral reasoning that depends on knowing the patient as a person, not as a data profile. This is not a limitation of current AI. It is a structural feature of the decision type.

The Machine Column: Precision at Scale

AI in healthcare is genuinely good at specific functions. Rajpurkar et al. (2022) documented AI performance in medical imaging that matches or exceeds radiologist accuracy in specific, well-defined diagnostic tasks. Drug interaction databases check combinations across a pharmacological literature no human physician can hold in memory. Continuous monitoring systems detect deterioration patterns that human attention, subject to fatigue and shift changes, structurally cannot maintain.
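The scale point can be made concrete: interaction checking is combinatorial, so the number of required checks grows quadratically with the medication list. A toy sketch follows, using a two-entry hypothetical interaction set rather than any real pharmacological database or API.

```python
from itertools import combinations

# Toy interaction set: illustrative entries only, not clinical data.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"simvastatin", "clarithromycin"}),
}

def flag_interactions(medications: list[str]) -> list[frozenset]:
    """Exhaustively check every pair of prescribed medications:
    n medications require n*(n-1)/2 pairwise checks, which is why
    complete coverage favors the machine column."""
    return [frozenset(pair)
            for pair in combinations(medications, 2)
            if frozenset(pair) in KNOWN_INTERACTIONS]

print(flag_interactions(["warfarin", "aspirin", "metformin"]))
# A patient on 12 medications implies 66 pairwise checks: routine
# for a database, unreliable for unaided human recall.
```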

These are not marginal improvements. In their specific domains, machine capabilities are categorically superior to human performance. The right column of the Pair table represents genuine, irreplaceable contributions that would degrade healthcare if removed.

The critical entry in the right column is the last one: administrative burden. Documentation, coding, scheduling, compliance reporting, and insurance communication are machine-appropriate tasks that currently consume nearly half of physician time. This is not a capability where AI would merely assist. It is the single largest reallocation opportunity in healthcare — the mechanism by which the deployment inversion could be corrected.

The Deployment Inversion

Sinsky et al. (2016), published in Annals of Internal Medicine, conducted time-motion studies of ambulatory care physicians and found that for every hour of direct clinical face time, physicians spent nearly two additional hours on EHR and desk work, with 49% of total work time allocated to EHR and documentation. This figure is widely cited in healthcare policy literature. The exact percentage may vary by specialty and setting, but the directional finding — that documentation consumes approximately half of physician time — is robust across multiple studies.
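A minimal arithmetic sketch of the reallocation claim, assuming an 8-hour workday, the 49% documentation share, and a hypothetical 60% absorption rate for administrative AI (the absorption figure is an assumption, not a finding):

```python
def hours_freed_per_day(workday_hours: float = 8.0,
                        documentation_share: float = 0.49,
                        ai_absorption: float = 0.6) -> float:
    """Relational capacity recovered per physician per day if
    administrative AI absorbs a given share of documentation work.
    documentation_share follows Sinsky et al. (2016); ai_absorption
    is an assumed, illustrative parameter."""
    return workday_hours * documentation_share * ai_absorption

print(f"{hours_freed_per_day():.1f} hours/day freed")  # ~2.4 hours
# The 1:2 face-to-desk ratio implies roughly 2 hours of direct face
# time in an 8-hour day, so even 60% absorption would roughly double
# available relational time.
```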

Most AI healthcare investment concentrates on diagnostics: imaging analysis, symptom checkers, clinical decision support, and risk prediction. These are right-column functions where AI genuinely excels. But they target a domain where physicians already perform adequately in most cases, while leaving the administrative burden that destroys physician relational capacity untouched.

The deployment is doubly inverted: AI is deployed where physicians are adequate (diagnostics) rather than where physicians are drowning (documentation), and the documentation burden that AI could absorb is the very thing destroying the therapeutic relationship that AI cannot provide.

This is the sharpest failure of the FTP test (Fidelity, Transparency, Participation; Axis 2 below) in healthcare. The corrective deployment is straightforward: AI handles documentation, coding, scheduling, compliance, and administrative communication, freeing physicians to be present with patients. Diagnostic AI supplements this by flagging patterns that merit physician attention, not by replacing physician judgment in ambiguous cases. The obstacle, as in education, is economic: diagnostic AI generates revenue (billable decision support); administrative AI reduces costs (documentation automation). The market incentive points to the wrong deployment.

Goddard et al. (2012), writing in the BMJ, documented automation bias in clinical settings: physicians who rely on automated systems show measurable degradation in independent diagnostic judgment. This is a Stage 1 indicator on the collapse gradient — the very capability AI is meant to augment begins to atrophy when AI is deployed in the diagnostic function rather than the administrative one.

Axis 2: The FTP Test

FTP Assessment · Healthcare
Fidelity: FAILS
Transparency: PARTIALLY SATISFIES
Participation: FAILS

Fidelity: The dominant deployment design places AI in diagnostics, not administration. The therapeutic relationship — the left column — is not freed by current AI designs but is progressively eroded as documentation burden increases and physician relational capacity decreases. The deployment is inverted from what would maximize human contribution. The 30-day test: could physicians maintain therapeutic relationships if diagnostic AI were unavailable? Yes, and arguably better — automation bias (Goddard et al., 2012) suggests current deployment may be degrading independent clinical judgment.

Transparency: Partially satisfies. Diagnostic AI systems increasingly disclose their confidence levels and flagged findings (Level 1: functional). Most do not expose the training data, weighting, or decision pathways that produce recommendations (Level 2: process opacity). Clinical validation studies are published but audit access to proprietary algorithms (Level 3) remains the exception.

Participation: Fails. Patients — the population most affected by AI in healthcare — have no structured governance input into the design or deployment of clinical AI systems. Physicians have limited input through professional organizations but no direct governance role. Deployment decisions are made by hospital administrators, insurers, and technology vendors. The consent architecture is absent.
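For comparison across domains in the series, the assessment could be recorded in a structured form. A sketch with illustrative names, assuming the three-level transparency scale described above:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    FAILS = "fails"
    PARTIAL = "partially satisfies"
    SATISFIES = "satisfies"

@dataclass(frozen=True)
class FTPAssessment:
    """One domain's result on the three-axis FTP test.
    transparency_level records the highest level met on the
    functional (1) / process (2) / audit (3) scale described above."""
    domain: str
    fidelity: Verdict
    transparency: Verdict
    transparency_level: int
    participation: Verdict

HEALTHCARE = FTPAssessment(
    domain="healthcare",
    fidelity=Verdict.FAILS,
    transparency=Verdict.PARTIAL,
    transparency_level=1,  # confidence disclosure, but process opacity
    participation=Verdict.FAILS,
)
```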

Axis 3: The Stakes

The documented consequence of the extractive design winning in healthcare is already visible: physician burnout, declining time per patient encounter, erosion of the therapeutic relationship, and measurable empathy decline through medical training (Newton et al., 2008; Hojat et al., 2009). These are not hypothetical risks. They are the current trajectory.

The Kelley et al. (2014) effect size of d=0.35 for the patient-clinician relationship on health outcomes, applied across the population receiving medical care, represents an enormous aggregate health impact. A healthcare system that systematically degrades this relationship through documentation burden — while deploying AI in diagnostics rather than administration — is producing a population-level health cost that does not appear in any economic analysis of AI healthcare investment.

The automation bias documented by Goddard et al. (2012) compounds this: physicians who increasingly defer to diagnostic AI in unambiguous cases may lose the independent judgment required for ambiguous ones. The left column's "ethical judgment in treatment decisions" depends on a diagnostic competence that atrophies under automation bias. The extractive design does not merely fail to free physicians. It degrades the very capabilities that constitute the human column.
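The compounding mechanism, in which deference erodes the very competence needed for ambiguous cases, can be sketched as a toy skill-decay model. All parameters below are invented for illustration and carry no empirical weight:

```python
def independent_skill(years: int,
                      deference_rate: float,
                      decay: float = 0.05,
                      recovery: float = 0.02,
                      skill: float = 1.0) -> float:
    """Toy dynamics: skill decays in proportion to the share of cases
    deferred to AI and recovers in proportion to the share still
    exercised. Rates are invented for illustration; they are not
    estimates from Goddard et al. (2012)."""
    for _ in range(years):
        skill -= decay * deference_rate * skill
        skill += recovery * (1 - deference_rate) * (1 - skill)
    return skill

for rate in (0.2, 0.5, 0.8):
    print(f"deference {rate:.0%}: skill after 10y = "
          f"{independent_skill(10, rate):.2f}")
# Higher deference on routine cases leaves less independent judgment
# for the ambiguous cases where the human column is irreducible.
```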

Named Condition · HC-006
The Therapeutic Presence
The structural mechanism by which the patient-clinician relationship — documented clinical effect size d=0.35 — is systematically degraded by deploying AI in diagnostics (where it competes with physician judgment) rather than in administration (where it would free physician relational capacity). The deployment inversion is doubly destructive: it fails to relieve the documentation burden that erodes presence while introducing automation bias that degrades independent clinical judgment. The sharpest FTP failure in healthcare.

What Follows

The healthcare pair demonstrates the deployment inversion in its most consequential form: a domain where the therapeutic relationship is a documented clinical mechanism, where the administrative burden consuming that relationship is precisely the machine-appropriate work AI could absorb, and where current investment targets the wrong column. The Pair table's left column maps directly to the Capability Floor defined in HC-001. The Fidelity test measures whether patients' therapeutic relationships with their physicians are preserved or degraded under AI deployment.

HC-007 applies the same three-axis analysis to law, where the deployment inversion takes a different form: algorithmic risk assessment deployed in sentencing, where the concept of algorithmic fairness is mathematically incoherent, while legal research — the genuinely machine-appropriate function — receives comparatively less attention. The healthcare and law pairs together reveal a pattern: AI is consistently deployed where it competes with human judgment rather than where it would free human capacity.


References

Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.