AI is transforming clinical decisions. Nothing documents what happens next.
More than 1,300 AI-enabled medical devices have been cleared by the FDA. Most are used in radiology, where AI now flags findings, prioritizes worklists, and suggests diagnoses. But the medical record captures only the final report — not whether the clinician assessed the case independently before seeing AI output.
That silence creates a problem. If AI is wrong and the clinician follows it, there’s no record that they relied on the machine. If AI is right and the clinician overrides it, there’s no record of why. Both paths carry liability, and neither is currently documented.
Automation bias is not hypothetical. It has been measured across experience levels, specialties, and countries. AI explanations do not reliably prevent it. And the regulatory environment is tightening: the EU AI Act, UK MHRA, and U.S. FDA are independently converging on the same requirement — documented, verifiable human oversight for clinical AI.
“AI safety depends on interactions among the model, interface, workflow, and human judgment.”
BCS, The Chartered Institute for IT, submission to the MHRA National Commission on AI in Healthcare, February 2026

The infrastructure to document those interactions does not exist. The medical record was not designed for it. Vendor logs are self-attested. And “the clinician is responsible for the final decision” is a policy, not a governance structure.
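What would adequate documentation even look like? A minimal sketch, assuming nothing about any particular vendor, product, or standard: record each step of the clinician-AI interaction as an append-only log entry that chains the hash of its predecessor, so the sequence can be verified rather than self-attested. Every name and field below is hypothetical.

```python
# Illustrative sketch only, not any existing system's API.
# A tamper-evident log of the clinician-AI interaction sequence:
# each entry embeds the SHA-256 hash of the previous entry.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class InteractionEvent:
    case_id: str   # study or encounter identifier (hypothetical field)
    actor: str     # "ai" or "clinician"
    action: str    # e.g. "independent_read", "ai_output_shown", "override"
    detail: str    # free-text rationale or AI finding summary
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class InteractionLog:
    """Append-only log; each entry carries the hash of its predecessor."""

    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: InteractionEvent) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"event": vars(event), "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True


# Example sequence: independent read, then AI output, then a documented override.
log = InteractionLog()
log.append(InteractionEvent("case-001", "clinician", "independent_read",
                            "no acute findings on first pass"))
log.append(InteractionEvent("case-001", "ai", "ai_output_shown",
                            "flagged possible pneumothorax, right apex"))
log.append(InteractionEvent("case-001", "clinician", "override",
                            "re-reviewed; artifact from skin fold"))
assert log.verify()
```

The point of the hash chain is that no single party, vendor or hospital, can quietly amend the sequence after the fact. That verifiability is exactly the property self-attested vendor logs lack.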
No medical malpractice case involving a clinical AI system has reached a verdict in any jurisdiction. But the legal trajectory is clear. A Tesla Autopilot jury awarded $243 million in damages after finding the driver had over-relied on automation, with vehicle telemetry as the central evidence. Boeing paid $2.5 billion after flight data recorders showed its automation had assumed pilots would correct errors within three seconds. In both cases, documentation of the human-automation interaction sequence was the decisive evidence. Healthcare has no equivalent record.