The Accountability Gap · Paper IV · Synthesis

The Handoff

Three mechanisms. One transfer. What three parallel analyses describe when read together.

The Institute for Cognitive Sovereignty · 2026 · Synthesis Paper

CSI-2026-AG-004 · Published February 28, 2026
I · The Accountability Vacuum
II · Hypothetical Capture
III · The Triage Threshold
“Too few fingers on the button, such that a handful of people could essentially operate a drone army without needing any other humans to cooperate.”
— Dario Amodei, CEO of Anthropic, on the technology he considers most dangerous to democratic governance. "The Adolescence of Technology," January 2025. Eleven months later, Pentagon officials would present him with the ICBM scenario.
Section I

What Synthesis Asks

Papers I, II, and III each documented a distinct mechanism. Paper I traced a legal gap — the absence of a framework capable of assigning accountability for autonomous lethal decisions. Paper II traced a rhetorical mechanism — the use of constructed extreme scenarios to normalize the removal of constraints before their consequences can be examined. Paper III traced a methodological condition — the triage threshold at which safety science cannot keep pace with the capabilities it was designed to govern.

Each paper was constructed to stand independently. The legal gap exists whether or not extreme scenarios are used to widen it. Hypothetical capture operates whether or not the underlying safety methodology is in triage. The triage threshold describes a structural condition in AI safety science whether or not it is exploited by scenario-based argumentation or compounded by legal accountability failures.

Synthesis asks a different question: what do the three mechanisms describe when they are operating simultaneously, in the same domain, against the same category of human judgment? It asks whether three parallel conditions constitute not three crises to be managed separately but one ongoing event to be understood as a whole.

This paper argues that they do. The Accountability Vacuum, Hypothetical Capture, and the Triage Threshold are not coincidentally co-present in February 2026. They are structurally reinforcing. Each one makes the others easier to sustain and harder to close. Together they describe a single process: the progressive transfer of lethal decision-making authority from human judgment to AI systems — not through any single decision, not through explicit policy, but through the compound operation of three mechanisms that each, individually, appear manageable.

That transfer is the Handoff. This paper names it.


Section II

February 2026: The Convergence Timeline

The events of a single eight-week window illustrate the three mechanisms operating in compound:

January 3, 2026
U.S. Delta Force operation in Venezuela. 75–100+ casualties. Claude deployed through Palantir in what two sources confirmed to Axios was the first use of a commercial AI model inside a classified American military operation. What Claude did: unknown to Anthropic.
Condition III active — governance methodology cannot follow capability into classified space
January 2026
Hegseth AI strategy document issued: all military AI contracts must eliminate company-specific guardrails within 180 days. The institutional demand is made explicit: unconditional capability is the requirement for partnership.
Condition II active — urgency framing deployed institutionally, not only through constructed scenarios
February 9, 2026
Mrinank Sharma, senior Anthropic AI safety researcher, resigns publicly. "The world is in peril... I've repeatedly seen how hard it is to truly let our values govern our actions... pressures to set aside what matters most."
Conditions II and III in compound — organizational triage and external pressure visible simultaneously
February 14, 2026
Wall Street Journal breaks the Venezuela/Claude story. Pentagon responds by threatening to terminate Anthropic contracts and designate the company a supply chain risk.
Condition I active — no accountability mechanism exists for Claude's Venezuela deployment; threat operates in that vacuum
February 23, 2026
DoD signs xAI agreement for Grok on classified systems: "all lawful purposes," no conditions. This becomes the floor against which every other AI company's constraints are measured.
Race-to-bottom mechanism activated — least-constrained actor sets the comparative baseline
February 24–25, 2026
Hegseth-Amodei meeting. Pentagon ultimatum: comply by Friday or face supply chain risk designation. Anthropic revises its Responsible Scaling Policy (RSP) the following day: categorical pre-commitment removed, competitive condition inserted.
Conditions II and III in compound — hypothetical scenario pressure produces triage-mode policy revision

This is not a list of incidents. It is a sequence in which each event creates conditions that make the next more likely and harder to reverse. The Venezuela deployment without pre-authorization created the accountability vacuum within which the Pentagon threat could be made. The Pentagon threat created the urgency that the RSP revision addressed. The RSP revision ratified the competitive dynamics that the xAI agreement had already set in motion. The xAI agreement set the floor against which every subsequent negotiation is measured.

The sequence is not accidental. Each step follows from the structural logic of the three mechanisms in compound operation.


Section II-B

February 27: The Handoff Becomes Visible

The convergence timeline above was constructed from events through February 25, 2026. The following five days constitute a distinct analytical object: the first moment at which the structural conditions documented in this series became publicly visible as a unified event rather than a series of institutional decisions legible only to specialists.

The Handoff does not typically announce itself. It proceeds through the accumulated weight of individually justifiable choices, each of which appears manageable in isolation. The week of February 27 was the exception — a compression of the three mechanisms into a single public confrontation whose structure was legible to a general audience. What the public recognized, intuitively, was not the legal framework or the rhetorical mechanism or the methodological condition. What they recognized was the shape of what was being asked and the cost of refusing it.

February 26, 2026
Anthropic rejects the Pentagon's final offer. CEO Dario Amodei: "The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons. New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." His public statement: "Threats do not change our position: we cannot in good conscience accede to their request."
Retroactive Non-Consent asserted — refusal operates simultaneously as rejection of future authorization and as the only available mechanism for establishing that past use occurred without consent
February 27, 2026 — 5:01 PM
Pentagon deadline passes. Defense Secretary Hegseth designates Anthropic a "supply chain risk to national security" — a classification normally reserved for foreign adversaries. President Trump directs every federal agency to cease all use of Anthropic's technology within a six-month phase-out period. The GSA OneGov agreement covering all three branches of the federal government is terminated. Anthropic is removed from USAi.gov and the Multiple Award Schedule. Claude Gov — the classified-network variant used at Lawrence Livermore, Lawrence Berkeley, NASA JPL, and across military and intelligence agencies — enters phase-out.
All three conditions in compound — legal vacuum exploited by designation, urgency of supply-chain framing deployed, institutional capacity to absorb penalty demonstrates triage operating at organizational level
February 27, 2026 — within hours
OpenAI CEO Sam Altman announces a deal with the Department of Defense to deploy OpenAI's models on classified networks. Altman claims the agreement contains the same core safety restrictions Anthropic had demanded — prohibitions on domestic mass surveillance and autonomous weapons — achieved through a different contractual structure. Independent analysts immediately identify that the contract's deference to Executive Order 12333 may permit the domestic surveillance Anthropic's redline was designed to prevent. Altman acknowledges the deal "was definitely rushed, and the optics don't look good." Pentagon Undersecretary Emil Michael states: "When it comes to matters of life and death for our warfighters, having a reliable and steady partner that engages in good faith makes all the difference."
Controlled Substitution — compliant replacement fills the structural role vacated by non-compliant refusal; race-to-bottom floor set by xAI now defines the terms under which all subsequent negotiations occur
February 28 — March 1, 2026
Claude surpasses ChatGPT to become the most downloaded free application in the Apple App Store. Free active Claude users have increased more than 60% since the start of 2026; daily sign-ups have quadrupled. A public campaign spreads across Reddit and X urging ChatGPT users to cancel subscriptions and migrate to Claude. The sidewalk outside OpenAI's San Francisco offices is covered in chalk graffiti criticizing the Pentagon deal; graffiti outside Anthropic's offices praises its refusal. Multiple OpenAI employees publicly question whether their company's contract provides robust safeguards. An OpenAI safety researcher characterizes the contract's "all lawful purposes" language as compliance "window dressing."
Public verdict rendered — the population distinguishes between entities that held the line and entities that did not; the market signal is the clearest democratic expression available in the absence of binding legal or political mechanisms
March 2, 2026
Altman conducts an X "Ask Me Anything" session defending the Pentagon deal. He describes the decision as an attempt to "de-escalate" a situation that threatened to damage the AI industry as a whole, including through potential government nationalization of AI labs. He states: "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses... If not, we will continue to be characterized as rushed and uncareful." Anthropic announces it will challenge the supply chain risk designation in court and states the designation "would be legally unsound and set a dangerous precedent for any American company that negotiates with the government."
Legal challenge initiated — Anthropic's refusal transitions from moral assertion to formal legal contest; the Accountability Vacuum is the terrain on which the challenge must be argued

Two structural observations follow from this sequence.

The first concerns what Anthropic's refusal was, precisely. The Pentagon's demand was framed as a request for prospective authorization — the right to use Claude for any lawful purpose going forward. But Anthropic's refusal was not only about the future. The Venezuela operation had already occurred on January 3rd, under the existing contract, without pre-authorization from Anthropic for what Claude was asked to do in a classified military context. The refusal of February 27th was therefore simultaneously a rejection of future authorization and the only available mechanism by which Anthropic could establish, on the record, that the January 3rd use had occurred without consent. There is no legal remedy available to a company whose AI system is used in a classified military operation it was not informed of. There is only the refusal of the subsequent demand — made after the fact, in public, at maximum cost — which functions as a formal non-consent assertion covering both past use and future authorization. This is Retroactive Non-Consent: the condition in which refusal of future authorization is the only instrument available for asserting non-authorization for what has already been done.

The second concerns what OpenAI's agreement accomplished structurally, independent of its stated terms. When Anthropic refused and OpenAI agreed within hours, the structural role previously occupied by Anthropic's constraints was filled by a compliant entity. The Handoff did not pause in response to Anthropic's refusal. It continued through a different channel. The substitution did not require OpenAI to intend this function. It required only that OpenAI be willing to occupy the structural position that Anthropic had vacated. Whether OpenAI's contractual protections are substantively equivalent to Anthropic's redlines, weaker, or — as its own employees and independent analysts immediately suggested — riddled with the loopholes Anthropic's categorical language was designed to close, is a separate question from the structural observation: the refusal of one actor did not interrupt the Handoff. It produced a substitution that allowed the Handoff to continue under the authority of an entity whose compliance was obtained under conditions the refusing actor had made visible. This is Controlled Substitution: the mechanism by which a compliant replacement absorbs the structural function of a non-compliant refusal, neutralizing the refusal's practical effect while leaving its moral authority — and the documentation of that authority — intact.

The public verdict — the App Store surge, the migration campaigns, the chalk on the sidewalk — is not a market signal in any ordinary commercial sense. It is the expression of a recognition that the population made without access to the legal framework, the rhetorical analysis, or the methodological documentation assembled in this series. What they recognized was simpler and prior to all of it: one entity was asked to do something it believed was wrong, said no at significant cost, and meant it. Another entity said it agreed, and then did not. The distinction registered. In the absence of binding legal instruments, independent safety methodology, and accountable governance structures — in the precise conditions of the Accountability Vacuum, Hypothetical Capture, and the Triage Threshold — the public verdict is one of the few accountability mechanisms that remains operational. It is not sufficient. It is not structural. But it is present, and it is the market expression of exactly the values this series has been arguing the structural mechanisms are systematically removing from lethal AI decision-making.

The Handoff is in progress. It was not interrupted by the refusal of February 27th. But the refusal produced a documentation of the Handoff's operation that the structural analysis alone could not have produced — visible to a general public, legible without specialist knowledge, and now part of the evidentiary record from which any future accountability analysis will have to proceed.


Section III

Three Mechanisms as One Event: How They Reinforce Each Other

The structural reinforcement operates in specific directions. Each condition makes the other two harder to close.

Paper I · The Accountability Vacuum makes Hypothetical Capture easier to deploy. When no legal framework exists to adjudicate what happened in an autonomous AI-assisted lethal operation, the scenario that justifies the next one cannot be evaluated against documented outcomes. No one can point to a case and say: the scenario promised X, the operation produced Y, the gap between X and Y is the evidence against the scenario. The vacuum forecloses that accountability. The scenario can be repeated.
Paper II · Hypothetical Capture accelerates the crossing of the Triage Threshold. Each successful deployment of the scenario removes one constraint. Each removed constraint allows capability deployment to proceed without the methodological prerequisites that constraint represented. When the ICBM scenario justifies unconstrained AI access in an emergency, the safety evaluation methodology that would have applied to that access is bypassed. The triage condition deepens: there is now more capability operating with less methodology than before the scenario was deployed.
Paper III · The Triage Threshold widens the Accountability Vacuum. When safety methodology cannot assess what AI systems can do at the capability frontier, the documentation required for legal accountability cannot be built. Accountability under international humanitarian law (IHL) for AI-mediated targeting requires reconstructing what information was available, how it was processed, what recommendation the system produced, and whether a reasonable commander would have acted differently. If the evaluation methodology to produce that documentation does not exist, the legal accountability framework cannot function. The methodological gap becomes a legal gap.

The three mechanisms are not additive. They are multiplicative. Each one amplifies the force of the others. A legal gap that might be addressable in isolation becomes structurally self-reinforcing when hypothetical scenarios prevent the deliberation that would close it and methodological triage prevents the documentation that would enable accountability. Hypothetical scenarios that might be resisted in a context of robust safety methodology become compelling when the methodology cannot produce the counter-evidence that would challenge their premises. Methodological triage that might be a temporary condition becomes structural when the legal accountability failures prevent the consequences from being documented and the scenario-based argumentation prevents the capacity from being restored.

Together, they describe not three crises but one condition: the progressive, structurally self-reinforcing transfer of lethal decision authority from human judgment to AI systems.


Section IV

The Framework Against Itself

Dario Amodei, "The Adolescence of Technology" — January 2025

Four technologies enabling autocracy: fully autonomous weapon swarms, AI-powered mass surveillance, personalized propaganda, and AI strategic advisors. The specific danger Amodei named: "too few fingers on the button, such that a handful of people could essentially operate a drone army without needing any other humans to cooperate." His stated concern was the concentration of lethal decision-making power in a narrow set of hands — precisely the condition that makes democratic oversight of military force structurally impossible.

The ICBM scenario was presented to Amodei in December 2025, eleven months after he published the framework naming fully autonomous weapon swarms among the four technologies most dangerous to democratic governance and concentrated lethal decision authority as the specific danger within them. The scenario asked him to remove the constraints specifically designed to prevent the concentration he had named as dangerous.

The Pentagon ultimatum of February 2026 demanded that Anthropic accept unconditional use of its systems for military purposes, a demand structurally identical to the "too few fingers on the button" condition his January 2025 essay identified. The interval between the essay and the ultimatum is approximately thirteen months.

The gap between Amodei's stated framework and the organizational decisions that followed — the Venezuela deployment without pre-authorization, the RSP revision under ultimatum, the removal of the categorical pre-commitment — is not hypocrisy in any simple sense. It is the documented operation of the three mechanisms against the person most publicly committed to naming them. If the Accountability Vacuum, Hypothetical Capture, and the Triage Threshold are powerful enough to produce these outcomes at Anthropic — the organization most institutionally committed to resistance — they describe something more durable than inadequate resolve.

They describe a structural condition that resolve alone cannot address.


Section V

The Central Question This Series Cannot Answer

This series has been constructed to document what is happening. It has not been constructed to answer a question that the documentation raises but cannot resolve: whether the Handoff, if it is occurring, is reversible.

That question has three nested components, each of which requires evidence this series has not assembled:

Is the Handoff a process or an event? A process can be interrupted at various stages. An event, once completed, produces a condition that cannot be returned to. The documentation in Papers I through III describes the Handoff as currently in process — not yet complete. The legal framework for autonomous weapons does not exist but could be built. The safety methodology is in triage but is developing. The competitive race to the bottom has produced significant constraint reduction but has not yet converged to zero constraints. Whether interruption is possible at the current stage, and what interruption would require, is a question this series names but does not answer.

Is the alternative to the Handoff a world that actually exists as a policy option? The Handoff describes a progressive transfer of lethal decision authority from human judgment to AI systems. The alternative is not a world without AI in military contexts — that possibility is already foreclosed, demonstrably and irreversibly. The alternative is AI in military contexts with meaningful human judgment retained at lethal decision points, adequate accountability frameworks, and safety methodology that keeps pace with capability. Whether that alternative is achievable given the competitive dynamics, institutional pressures, and technical constraints documented in this series is not a question the documentation resolves.

Who has the capacity and authority to interrupt it? The three mechanisms documented here are not operated by a single actor or reversible by a single decision. The legal gap requires international legal instruments. The hypothetical scenario mechanism requires institutional resistance that can withstand commercial and security pressure. The methodological triage requires resources and time that the competitive dynamics currently prevent. The actors who could close each gap are different, and the political conditions for their doing so are not currently present. Whether those conditions are achievable is beyond this series' remit to determine.

This series names and documents. It does not prescribe. The prescription, if one is possible, requires a different analysis from a different vantage point.


Section VI

Four Possible Outcomes

The convergence documented in this series produces four identifiable trajectories, not one. Each represents a different resolution to the compound operation of the three mechanisms. They are stated without assignment of probability and without advocacy for any particular one. They are stated as the logical space of outcomes given the documented conditions.

Outcome A
The Handoff Completes
The three mechanisms continue operating without sufficient countervailing force. The race to the bottom converges toward the xAI model: no conditions, all lawful purposes. The Accountability Vacuum becomes permanent as no binding legal instrument is produced. Hypothetical scenarios continue normalizing constraint removal until no constraints remain to be removed. The triage threshold becomes the operational condition rather than a temporary gap, with safety methodology permanently behind capability. Lethal decision authority transfers to AI systems without any formal decision to transfer it — incrementally, through the compound operation of structural forces that each appeared manageable in isolation.
Outcome B
Partial Interruption
One or two of the three mechanisms are addressed without resolving all three. A binding legal instrument closes the Accountability Vacuum for the most extreme cases of full autonomy while leaving the human-in-the-loop-as-rubber-stamp category ungoverned. Hypothetical scenarios lose rhetorical force after a documented catastrophic failure of a deployment they had been used to justify — producing temporary restraint without structural change. Safety methodology development accelerates in response to crisis but remains behind capability on the critical frontier. Partial interruption produces a condition that appears improved without resolving the structural self-reinforcement between the remaining mechanisms.
Outcome C
Deliberate Constraint
Sufficient political will produces binding international instruments governing autonomous weapons. Hypothetical scenario argumentation is institutionally recognized and resisted — not through philosophical sophistication but through organizational structures that require scenario premises to be verified before authorizing exceptions. Safety methodology development is funded and resourced at the scale required to match capability development speed. These three conditions are addressed simultaneously because they are recognized as a compound structural problem rather than three separate policy challenges. This outcome requires political conditions that do not currently exist and institutional changes that face the compound resistance of the three mechanisms themselves.
Outcome D
Catastrophic Documentation
A sufficiently documented catastrophic failure of AI-assisted lethal decision-making produces the political conditions for Outcome C retrospectively. The mechanisms continue operating until a consequence occurs that cannot be absorbed by the accountability vacuum, cannot be justified by the scenario, and cannot be explained as acceptable triage. Historical analogues exist: the thalidomide disaster produced modern pharmaceutical regulatory frameworks; Three Mile Island produced nuclear safety protocols; specific documented catastrophes created the political conditions for the regulatory changes that abstract arguments had not produced. This outcome does not require the catastrophe to have been avoidable. It requires only that it be sufficiently documented and attributed.

This series does not advocate for any of these outcomes or assign probability to them. It documents the structural conditions from which they emerge. Which trajectory is followed depends on decisions and events that extend beyond what this documentation can determine.


Section VII

What the Handoff Is, Precisely

Named Condition — Paper IV · Synthesis
The Handoff

The progressive transfer of lethal decision-making authority from human judgment to AI systems, accomplished not through explicit policy or formal decision but through the compound operation of three structural mechanisms: the Accountability Vacuum (which removes legal consequences for autonomous lethal decisions), Hypothetical Capture (which normalizes the removal of constraints through manufactured urgency), and the Triage Threshold (which makes safety methodology inadequate to govern the capabilities it is supposed to assess). The Handoff does not require any actor to intend it. It does not require any single decision to authorize it. It proceeds through the accumulated weight of individually justifiable choices, each of which appears manageable in isolation, until the aggregate condition is one in which human judgment has been nominally preserved and operationally transferred. The transfer is the condition in which a human is technically present at the decision point and substantively absent from it — in which the signature exists and the judgment does not.


Section VIII

What Naming Does

This series has named six conditions: the Accountability Vacuum, Hypothetical Capture, the Triage Threshold, and the Handoff, joined in the Section II-B addendum by Retroactive Non-Consent and Controlled Substitution. The naming is not rhetorical. It is analytical. A named condition can be pointed at. It can be invoked in policy debate without re-establishing the full analysis each time. It can be tracked — the question "has the accountability vacuum closed?" is more answerable than "are we doing better on autonomous weapons accountability?" And it creates a standard against which change can be measured: if a named condition was documented as present in February 2026, whether it has since changed is a specific empirical question, not a general political one.

Naming also does something more specific to the mechanisms documented here. Hypothetical Capture, as Paper II analyzed, operates by manufacturing urgency that forecloses deliberation. Naming it and documenting its anatomy is one of the tools available to the deliberation it forecloses. The ICBM scenario is harder to deploy against an interlocutor who can say: "I recognize this structure — certainty, urgency, singularity, civilization-level stakes, inversion — and I recognize that the Senate investigated twenty applications of this structure in the interrogation context and found zero verified ticking bombs. What is the verified premise of this particular deployment?" The scenario's power derives from its ability to prevent exactly that kind of named recognition.

This is a modest claim. Naming a mechanism does not close it. The accountability gap was named in 2013. It remains open. Naming is a necessary condition for deliberate response, not a sufficient one. But it is the contribution that analysis can make, and it is what this series has attempted.


Section IX

What Would Constitute Reversal

Reversal of the Handoff — interruption of the progressive transfer — would require changes in each of the three mechanisms, because the structural self-reinforcement between them means addressing one without the others produces partial improvement that the remaining mechanisms will erode.

Reversal of the Accountability Vacuum requires a binding international legal instrument governing autonomous and semi-autonomous weapons that addresses not only full autonomy but the human-in-the-loop-as-rubber-stamp problem — the condition in which human presence is nominal while human judgment is operationally absent. Thirteen years of discussions under the Convention on Certain Conventional Weapons (CCW) have not produced this instrument. Producing it would require the states most invested in the capability to accept constraints on its use.

Reversal of Hypothetical Capture requires institutional structures within AI development organizations, legislative bodies, and military establishments that require scenario premises to be verified before authorizing exceptions — and that recognize the scenario's anatomy well enough to resist its deployment before the urgency it manufactures forecloses deliberation. The Senate investigation took five years and produced its finding a decade after the authorization it examined. Recognition of the pattern in real time remains to be demonstrated.

Reversal of the Triage Threshold requires safety methodology development to be resourced to match the speed of capability development, which in the current competitive environment means either a collectively coordinated slowdown in capability advancement or a proportionate acceleration in methodology investment that has not historically accompanied the competitive dynamics of the AI industry. The Anthropic RSP revision of February 2026 moved the institutional commitment in the opposite direction — not toward closing the gap but toward formally acknowledging it and adjusting policy to account for its persistence.

None of these conditions is currently trending toward reversal. The documentation of this series is, therefore, documentation of a condition that is ongoing and not yet resolved. The Handoff is in progress.


Section X

What This Series Is Not

This series does not adjudicate whether any specific military operation was lawful or strategically justified. It documents structural conditions, not case outcomes. The judgment of specific operations requires evidentiary processes that are outside this series' scope.

This series does not argue that AI systems should not be used in military contexts. It argues that the transfer of lethal decision authority from human judgment to AI systems is occurring through mechanisms that bypass the deliberative processes by which such transfers are normally authorized and governed. The argument is about process and accountability, not about the categorical permissibility of military AI.

This series does not argue that any actor documented in it acted in bad faith. The structural argument is precisely that the Handoff does not require bad faith. It proceeds through individually justifiable choices — Anthropic's Venezuela deployment through an existing commercial partnership, the RSP revision in response to genuine competitive dynamics, the Pentagon's advocacy for its institutional interests, the Israel Defense Forces' adaptation of AI targeting to operational scale pressures. Each choice has a coherent internal logic. The compound effect of choices with coherent internal logic is the structural argument this series makes.

This series does not prescribe solutions. Papers I, II, and III identified what closing each gap would require. Paper IV has noted that those requirements are not currently trending toward fulfillment. The prescription of specific policy responses requires a different analytical apparatus, a different set of stakeholders, and a different mandate than this series possesses.

What this series is: documentation of a structural condition, named precisely enough to be tracked, with the analytical foundation for the specific empirical question — is the Handoff occurring? — answered as affirmatively as the available evidence permits.


Section XI

Conclusion: The Transfer Is Not Coming. It Is Underway.

The accountability gap was named in 2013. The first documented autonomous lethal engagement was confirmed in 2020. The most extensive documented case of AI-assisted targeting operating beyond the capacity of human oversight occurred in 2023 and 2024. The first confirmed deployment of a commercial AI model in a classified military operation occurred in January 2026. The institutional safety commitment of the organization most publicly committed to preventing these outcomes was revised under military pressure in February 2026.

The intelligence officer who reviewed thirty Lavender targeting recommendations per day, investing twenty seconds each, performing a gender check, and authorizing lethal strikes on the basis of an opaque AI recommendation was not exercising human judgment. He was, in his own words, "a stamp of approval." He had zero added value as a human, apart from being that stamp.

The human was there. The judgment was not.

That is the Handoff in operational terms. Not the absence of a human. The nominalization of one. The signature without the deliberation. The loop with a human in it who cannot influence what the loop produces. The condition in which everything required to say that a human made the decision is formally present, and nothing required for that statement to be substantively true is operationally intact.

This series has documented three mechanisms that produce and sustain that condition: the legal framework that cannot assign accountability for it, the rhetorical mechanism that normalizes it before its consequences can be examined, and the methodological condition that prevents the science required to govern it from keeping pace with the capability it is supposed to assess.

Three mechanisms. One transfer. The transfer is underway.

What happens next depends on whether it is recognized as such, and whether recognition, in time, is sufficient to interrupt it.


Section XII

The Named Conditions: A Reference

For reference across the series, the conditions named in Papers I through IV and in the Section II-B addendum are restated here in their final definitional form. The first four conditions were named in the original construction of the series. The final two were named in response to the events of February 27 – March 2, 2026, which made visible structural dynamics that the original framework had not separately identified.

Paper I — The Accountability Vacuum

The structural absence of a human actor who can be held legally responsible for an autonomous lethal decision. International humanitarian law assumes a human pulled the trigger. Autonomous and semi-autonomous systems break that assumption without replacing the legal framework built on it. The vacuum does not require the complete absence of human actors. It requires only the elimination of legible human causation — which can be achieved through opacity, distribution, speed, or the nominalization of a human role that has been operationally hollowed out.

Paper II — Hypothetical Capture

The process by which an extreme stipulated scenario, constructed to foreclose deliberation about an exception, is imported wholesale into policy justification without examination of whether its premises describe actual or foreseeable conditions. Hypothetical capture occurs when the scenario's own terms — certainty, urgency, singularity, civilization-level stakes — are treated as descriptions of reality rather than as stipulations of a thought experiment. The constraint the scenario challenges is then removed under the scenario's authority, and the capability is deployed under conditions the scenario did not describe. The exception becomes the norm without the scenario's premise ever having been verified.

Paper III — The Triage Threshold

The point at which AI capability development outpaces the safety methodology designed to govern it, producing conditions where governance decisions must be made without adequate assessment of what is being governed. The triage threshold manifests at three levels simultaneously: at the operator level, as compressed decision review when throughput exceeds human deliberation capacity; at the organizational level, as safety commitments revised under competitive pressure before the methodology to evaluate new capabilities has been developed; and at the systemic level, as a race-to-the-bottom dynamic in which each actor's reduction of constraints justifies every other actor's reduction.

Paper IV — The Handoff

The progressive transfer of lethal decision-making authority from human judgment to AI systems, accomplished not through explicit policy or formal decision but through the compound operation of three structural mechanisms: the Accountability Vacuum, Hypothetical Capture, and the Triage Threshold. The Handoff does not require any actor to intend it. It proceeds through the accumulated weight of individually justifiable choices, each of which appears manageable in isolation, until the aggregate condition is one in which human judgment has been nominally preserved and operationally transferred. The transfer is the condition in which a human is technically present at the decision point and substantively absent from it — in which the signature exists and the judgment does not.

Section II-B — Retroactive Non-Consent

The condition in which an entity's refusal of prospective authorization simultaneously constitutes the only available mechanism for asserting non-authorization for a use that has already occurred. Retroactive Non-Consent arises when a capability is deployed in a context — classified, opaque, or otherwise inaccessible to its developer — without prior notification or consent, and the developer subsequently receives a demand for formal authorization of future use. The refusal of that demand is not only a decision about the future. It is the formal establishment, on the public record, that the prior use lacked consent. No legal remedy typically exists for the past deployment. The refusal is the instrument. The cost of making it is the evidence of its sincerity.

Section II-B — Controlled Substitution

The mechanism by which a compliant replacement fills the structural role vacated by non-compliant refusal, allowing a process that was interrupted at one node to continue through another without the structural pressure that produced the interruption being addressed or resolved. Controlled Substitution does not require the replacement actor to intend to perform this function. It requires only that the replacement be willing to occupy the structural position the refusing actor vacated, under the conditions the refusal made visible. The substitution neutralizes the practical effect of the refusal — the process continues — while leaving the moral authority of the refusal and its evidentiary record intact. The refusing actor's documentation of what was being asked survives the substitution. What does not survive is the interruption.

The Accountability Gap — Complete Series
Four Papers. Three Mechanisms. One Transfer. Six Named Conditions.

The conditions documented in this series are not predictions. They are present. The Handoff is not a future event to be prevented. It is an ongoing process to be recognized. Updated March 2, 2026.

Paper I · The Gap Is Not New
Paper II · The Scenario Is a Tool
Paper III · The Methodology Cannot Keep Up
Paper IV · The Handoff — Synthesis (this paper)