When clinicians work alongside AI, the medical record goes silent on the most critical part: how the decision was actually made. Evidify captures the full decision trajectory — independent judgment, AI disclosure, comprehension verification, and documented override reasoning — in a tamper-evident, publication-grade evidence package.
When a radiologist reads a study with AI assistance, the medical record does not capture whether they saw the AI output, agreed with it, overrode it, or why. The decision sequence is invisible.
Clinicians face potential liability both for following an incorrect AI recommendation and for overriding a correct one. Structured documentation is the only available defense in either scenario, yet no standard for it exists.
The EU AI Act mandates human oversight documentation for high-risk AI systems by August 2026. ACR has called for payment structures recognizing AI review workload. No compliance-ready documentation standard exists.
The clinician's diagnostic impression is captured and cryptographically locked before any AI output is revealed. This proves clinical judgment preceded AI influence — not by policy, but by architecture.
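The commit-before-reveal idea can be sketched in a few lines. This is a minimal illustration, not Evidify's actual API; the function names and record fields are assumptions made for the example. The impression is hashed together with a timestamp before the AI output is shown, so any later edit to the committed record is detectable.

```python
import hashlib
import json
import time

def commit_impression(case_id: str, impression: str) -> dict:
    """Lock the clinician's pre-AI impression: hash it with a timestamp
    so any later change to the record is detectable."""
    record = {
        "case_id": case_id,
        "impression": impression,
        "committed_at": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["commitment"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_commitment(record: dict) -> bool:
    """Recompute the hash over the committed fields and compare."""
    fields = {k: record[k] for k in ("case_id", "impression", "committed_at")}
    payload = json.dumps(fields, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["commitment"]
```

In a production system the commitment would also be countersigned by an external timestamping authority so that the clock itself cannot be disputed.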
The AI recommendation appears through a gate-enforced protocol. False discovery rate and false omission rate are disclosed, and the reader must demonstrate calibrated comprehension of the AI's operational error characteristics before proceeding.
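The two disclosed error rates have standard definitions, sketched below along with a toy calibration check. The gate logic shown is an illustrative simplification, not Evidify's implementation; the tolerance parameter is an assumption for the example.

```python
def false_discovery_rate(tp: int, fp: int) -> float:
    # Of the AI's positive calls, what fraction are wrong?
    return fp / (tp + fp)

def false_omission_rate(tn: int, fn: int) -> float:
    # Of the AI's negative calls, what fraction are wrong?
    return fn / (tn + fn)

def comprehension_gate(reader_estimate: float, true_rate: float,
                       tol: float = 0.05) -> bool:
    # Toy gate: pass only if the reader's estimate of the error
    # rate is within tolerance of the disclosed value.
    return abs(reader_estimate - true_rate) <= tol
```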
When the clinician's final decision differs from the AI, structured reason codes and free-text rationale create a defensible record of deliberate clinical reasoning. Agreement is documented with equal rigor.
Every session produces a self-contained export with hash-chained audit trail, decision trajectories, double-bind records, data quality scoring, iMRMC-compatible analysis files, and RFC 3161 trusted timestamps from an independent timestamping authority.
Each case is automatically classified into outcome types (maintained, capitulated, partial shift, contrary shift) and automation bias patterns (deliberate capitulation, partial anchoring, confident resistance). Millisecond-resolution phase timing included.
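The four outcome labels can be illustrated with a toy classifier over ordinal decision scores. The numeric decision model here is an assumption for the sketch; Evidify's actual classification scheme is not shown in this document.

```python
def classify_outcome(pre_ai: float, ai: float, final: float,
                     tol: float = 1e-9) -> str:
    """Toy outcome classifier over ordinal decision scores.
    maintained: final stays at the pre-AI judgment
    capitulated: final moves all the way to the AI
    partial shift: final moves toward the AI but not fully
    contrary shift: final moves away from the AI
    """
    if abs(final - pre_ai) <= tol:
        return "maintained"
    if abs(final - ai) <= tol:
        return "capitulated"
    moved_toward_ai = abs(final - ai) < abs(pre_ai - ai)
    return "partial shift" if moved_toward_ai else "contrary shift"
```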
Four-pillar accountability framework per case: independent judgment documented, AI considered with comprehension verified, deliberate decision with override reasoning, and tamper evidence confirmed via hash chain.
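As a sketch, the four pillars reduce to a per-case completeness check; the field names below are illustrative assumptions, not Evidify's schema.

```python
def accountability_complete(case: dict) -> bool:
    """A case is fully accountable only when all four pillars hold."""
    pillars = (
        "independent_judgment",   # impression locked before AI reveal
        "comprehension_verified", # error-rate gate passed
        "deliberate_decision",    # override/agreement reasoning recorded
        "tamper_evidence_ok",     # hash chain verifies
    )
    return all(case.get(p, False) for p in pillars)
```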
SHA-256 hash chain with sequential event numbering, content hashing, and chain linking. RFC 3161 timestamps anchor the chain to an independent timestamping authority. A self-contained verifier lets any third party confirm integrity.
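The hash-chain mechanism is simple enough to sketch in full. This is a minimal illustration of the technique named above (sequential numbering, content hashing, linking to the previous hash), not Evidify's shipped verifier; the record layout is an assumption for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first event

def _event_hash(index: int, prev_hash: str, content: dict) -> str:
    payload = json.dumps(
        {"index": index, "prev": prev_hash, "content": content},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def append_event(chain: list, content: dict) -> None:
    """Number the event, link it to the previous hash, and hash it."""
    prev = chain[-1]["hash"] if chain else GENESIS
    index = len(chain)
    chain.append({"index": index, "prev": prev, "content": content,
                  "hash": _event_hash(index, prev, content)})

def verify_chain(chain: list) -> bool:
    """Walk the chain from genesis; any edit, deletion, or reorder
    breaks either the numbering, the link, or a content hash."""
    prev = GENESIS
    for i, ev in enumerate(chain):
        if ev["index"] != i or ev["prev"] != prev:
            return False
        if ev["hash"] != _event_hash(i, prev, ev["content"]):
            return False
        prev = ev["hash"]
    return True
```

Because each event's hash covers the previous event's hash, altering any single record invalidates every record after it.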
Automatic compliance manifest mapping each session to HIPAA audit controls (§164.312), EU AI Act Articles 12 and 14, GDPR, and 21 CFR Part 11. Architecture exceeds Part 11 requirements for electronic records.
5-condition dismantling study design implemented. Full reader study protocol with Latin Square counterbalancing, configurable washout enforcement, and 50-case queuing. 18 end-to-end Playwright tests passing. IRB preparation underway.
System demonstration and poster presentation submitted to AMIA 2026 Annual Symposium. RSNA 2026 abstract in preparation.
Provisional patent filed covering sequential disclosure methodology, policy-as-code gates, cryptographic hash chain architecture, cross-domain kernel, and MRMC export. U.S. App. No. 63/987,880.
Evidify is working with academic researchers running MRMC reader studies, malpractice insurers developing AI risk frameworks, and health system risk management teams navigating AI documentation requirements.
If you're working on AI governance, documentation standards, or clinician-AI interaction research, I'd welcome a conversation.