When AI mediates a decision that causes harm, existing accountability structures fail to assign responsibility — structurally, not incidentally
Doshi-Velez, Kortz, and colleagues at the Berkman Klein Center documented it in 2017: when an algorithmic system makes or mediates a decision that causes harm, existing legal and governance frameworks cannot reliably assign accountability. The problem is not that no one is responsible. The problem is that everyone has a plausible claim to not be responsible — and the frameworks that should resolve this ambiguity do not.
The developer says: "We built the model. We did not deploy it in this context. The deployer made that decision." The deployer says: "We followed the developer's specifications. The system behaved as designed. If it caused harm, the design is at fault." The operator says: "I was told to use this system. I followed the protocol. The system made the recommendation." The user says: "I had no choice. The system was mandatory. I was not consulted about its adoption."
Responsibility diffuses across developer, deployer, operator, and user until no single party bears meaningful accountability. This is not a failure of the system. It is the structural outcome of the current legal and governance framework applied to AI-mediated decisions.
On March 18, 2018, an Uber autonomous test vehicle struck and killed Elaine Herzberg in Tempe, Arizona. It was the first documented pedestrian fatality involving an autonomous vehicle. The accountability question it raised remains unresolved.
The vehicle's automated driving system detected Herzberg approximately 6 seconds before impact. It classified her variously as an unknown object, a vehicle, and a bicycle, cycling through classifications without settling. The NTSB found that each reclassification discarded the object's tracking history, so the system never accumulated enough motion history to predict her path across the road. The system did not initiate emergency braking. The safety driver, Rafaela Vasquez, was responsible for monitoring the system. Footage from the vehicle's inward-facing camera showed she was looking down at her phone in the seconds before impact.
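The mechanism is worth making concrete. The sketch below models a perception pipeline in which, as the NTSB report describes, a reclassified object loses its tracking history. Every name in it is a hypothetical illustration, not Uber's code; the point is only that a track that keeps resetting never accumulates the motion history that path prediction requires.

```python
# Minimal sketch of the failure mode the NTSB described: a perception
# pipeline that discards an object's tracking history whenever its
# classification changes. All names are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Track:
    label: str
    positions: list = field(default_factory=list)  # motion history used for path prediction

    def observe(self, label: str, position: float) -> None:
        if label != self.label:
            # Reclassification resets the track: prior motion history is lost.
            self.label = label
            self.positions.clear()
        self.positions.append(position)

    def predicts_crossing(self) -> bool:
        # Path prediction needs at least two observations in the *current* track.
        return len(self.positions) >= 2 and self.positions[-1] != self.positions[0]


# A Herzberg-like sequence: the object is reclassified on nearly every frame,
# so the track never holds the history a crossing prediction would require.
track = Track(label="unknown")
for label, pos in [("unknown", 0.0), ("vehicle", 1.5), ("bicycle", 3.0), ("unknown", 4.5)]:
    track.observe(label, pos)
    print(label, track.predicts_crossing())  # False on every frame
```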
Vasquez was charged with negligent homicide and eventually pleaded guilty to a lesser charge of endangerment. But the deeper accountability question was never addressed: Vasquez was hired to sustain attention on a system that, by design, provided no structural mechanism for sustaining attention. The Complacency Paradox (HC-018) predicts exactly this outcome. The system design produced the conditions for the failure, but the system designer was not charged.
The safety driver was charged with failing to monitor a system designed in a way that made sustained monitoring structurally impossible. The system that required impossible vigilance was not held accountable. The person who failed at the impossible task was.
Uber paid a settlement to Herzberg's family. No criminal charges were filed against Uber. The National Transportation Safety Board found that Uber's "inadequate safety culture" contributed to the crash, but "inadequate safety culture" is not a criminal offense. The accountability vanishing point was reached: everyone was partially responsible, no one was fully accountable, and the structural design that produced the failure continued to operate in modified form.
The ABA Foundation's 2023 analysis of product liability for AI-mediated decisions identified four structural gaps in the existing accountability framework. Each gap has a documented escape route.
The developer builds the model and releases it. Product liability traditionally applies to manufacturers. But AI models are not products in the traditional sense — they are general-purpose tools whose behavior depends on training data, fine-tuning, and deployment context. The developer's escape: "The model performed as specified. The harm arose from the deployment context, which we did not control."
The deployer integrates the model into a product or service and releases it to users. The deployer's escape: "We used the model as the developer specified. The harm arose from the model's behavior, which we did not design. We are a downstream user, not a manufacturer."
The user operates the system or is subject to its decisions. The user's escape: "I used the system as instructed. I was told it was safe/accurate/approved. I had no meaningful choice about whether to use it — it was mandated by my employer/court/institution."
The regulator approved or failed to regulate the system. The regulatory escape: "No specific regulation required pre-deployment approval for this application. The system was not within our jurisdiction. We provided guidance, not mandates."
The four gaps are not independent. They form a system in which each party's escape route depends on redirecting responsibility to another party, and each receiving party has its own escape route. The result is a closed loop of deflection with no terminal point of accountability.
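The loop structure can be made concrete. The sketch below models each party's escape route as a directed edge pointing at the party it deflects to, then checks for a terminal point of accountability, that is, a party with no outgoing edge. The edges paraphrase the escape routes above; the model itself is an illustration, not part of the ABA Foundation's analysis.

```python
# Each party's escape route, modeled as "who the deflection points at".
# Edges paraphrase the escape routes quoted above; the model is illustrative.
deflects_to = {
    "developer": {"deployer"},               # "the deployment context, which we did not control"
    "deployer":  {"developer"},              # "the model's behavior, which we did not design"
    "user":      {"deployer", "regulator"},  # "it was mandated / I was told it was approved"
    "regulator": {"developer", "deployer"},  # "no specific regulation required approval"
}

# A terminal point of accountability would be a party with no escape route:
# a node with no outgoing edge. In this model there is none.
terminal = [party for party, targets in deflects_to.items() if not targets]
print(terminal)  # [] -- every party deflects somewhere, so the loop never closes on anyone
```

Seen this way, any framework that closes the loop has to delete at least one edge: some party must be barred, by rule, from deflecting.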
The European Union's General Data Protection Regulation includes Article 22: the right not to be subject to a decision based solely on automated processing. It is the most specific legal instrument addressing algorithmic accountability. Its limitations are documented and structural.
The "solely" qualifier means that any human involvement — however cosmetic — removes the decision from Article 22's scope. A rubber-stamp approval by a human operator, the kind documented in HC-017 where 93% of recommendations are accepted without modification, is sufficient to convert an automated decision into a human decision with automated assistance. Article 22 does not distinguish between meaningful and cosmetic human involvement.
The ABA Foundation analysis and subsequent legal scholarship have documented this gap: GDPR Article 22 created a right that is structurally circumventable. Any organization that adds a human review step — regardless of whether that review is meaningful — has removed its automated decisions from the scope of the regulation. The regulation incentivizes cosmetic oversight rather than substantive accountability.
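The gap also suggests what a structural fix would have to measure. Article 22's "solely" test is binary: was a human involved at all? The evidence it actually needs is distributional: how often does the human's involvement change the outcome? Below is a minimal sketch of the distributional test, using the HC-017 acceptance figure as an assumed input; the function name and the 10% threshold are hypothetical illustrations, not legal standards.

```python
def involvement_is_meaningful(n_reviews: int, n_modified: int,
                              min_modification_rate: float = 0.10) -> bool:
    """Distributional test: human involvement counts as meaningful only if
    reviewers change outcomes at some minimum rate. The 10% threshold is
    an arbitrary illustration, not a legal standard."""
    return n_reviews > 0 and (n_modified / n_reviews) >= min_modification_rate


# Article 22's test is binary: any human review at all removes the decision
# from scope. The distributional test asks what the review actually did.
# With the HC-017 figure (93% of recommendations accepted unmodified):
print(involvement_is_meaningful(n_reviews=1000, n_modified=70))   # False: 7% modified
print(involvement_is_meaningful(n_reviews=1000, n_modified=250))  # True: 25% modified
```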
Facebook/Meta's content moderation system provides a case study at scale. AI systems flag content. Human moderators review flagged content. The moderators are exposed to graphic violence, child exploitation, and extremist content as a condition of employment. Multiple investigations have documented PTSD, anxiety disorders, and other psychological harm among content moderators.
The accountability question: who is responsible for the harm to moderators? Meta says the moderators are contractors, employed by third-party firms. The contracting firms say they follow Meta's specifications. The moderators say they had no meaningful choice — the work was available and they needed employment. The regulatory framework provides no mechanism for assigning accountability for psychological harm caused by a system design that requires humans to process content that AI has flagged as potentially harmful.
The harm to content subjects is similarly unaddressed. When AI-mediated content moderation fails, whether because harmful content is not removed or because legitimate content is wrongly removed, the affected users have no clear accountability mechanism. Meta's terms of service disclaim liability. The AI system has no legal personhood. And the human moderators who made the final decision were processing hundreds of items per hour; at even 200 items per hour, that is an average of 18 seconds per decision, conditions that HC-017's three requirements for meaningful override would classify as structurally incapable of producing meaningful decisions.
"The solution is better regulation. Create clear liability frameworks for AI-mediated decisions." This is necessary but not sufficient. Regulation can assign liability. But if the assigned party can demonstrate that the harm arose from a system interaction — between developer, deployer, operator, and user — rather than from any single party's action, the liability assignment becomes a legal contest rather than a governance mechanism. The deeper problem is structural: AI-mediated decisions distribute agency across multiple parties in ways that existing accountability frameworks were not designed to address. New regulation must address the distribution of agency, not merely assign liability to the most convenient party.
HC-019 completes Series 3: The Loop Architecture. The series establishes four connected findings. The loop is a structural requirement, not a feature (HC-016). Most override designs produce cosmetic rather than meaningful oversight (HC-017). Human cognitive architecture degrades oversight quality under conditions of high automation accuracy (HC-018). And when the degraded oversight produces harm, existing accountability structures cannot assign responsibility (HC-019).
These four findings form a cascade: architectural design determines override quality, which determines oversight capacity, which determines accountability when things go wrong. The cascade is not theoretical. It is documented in aviation disasters, autonomous vehicle fatalities, clinical AI errors, criminal justice algorithmic failures, and content moderation harm at scale.
The series returns to the question posed in HC-016: is the loop a feature or a structural requirement? The evidence across four papers and multiple domains is consistent. The loop is structural. When it is treated as a feature — added after design for compliance rather than built into the architecture for function — the predictable outcome is cosmetic oversight, degraded human capacity, and accountability that vanishes when it is most needed.
Internal: This paper is part of The Collaboration (HC series), Saga XI. It draws on and contributes to the argument documented across 31 papers in 2 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.