Healthcare AI occupies the most consequential position in the EU AI Act risk framework. Most clinical AI systems — diagnostic support, treatment planning tools, patient triage algorithms — are classified as high-risk under Annex III. That means full conformity assessment requirements, extensive technical documentation, and CE marking before any EU deployment.
This article explains what the AI Act specifically requires of healthcare AI companies and what you need to do before the August 2026 compliance deadline.
Why Healthcare AI Is Almost Always High-Risk
The EU AI Act defines high-risk AI systems in Article 6, which points to two routes: Annex I (AI in products already regulated at EU level) and Annex III (stand-alone high-risk use cases). For healthcare, the relevant routes are:
Article 6(1) and Annex I (safety components in regulated products): AI systems that are safety components of medical devices or in vitro diagnostic devices (IVDs), or that are themselves such devices, are high-risk because the MDR and IVDR appear in Annex I and require third-party conformity assessment. This route captures most clinical AI: stand-alone diagnostic or decision-support software typically qualifies as a medical device in its own right under MDR classification Rule 11.
Annex III, point 5 (access to essential services): This covers AI systems used to evaluate and classify emergency calls or to triage patients in emergency healthcare (point 5(d)), and systems used to evaluate eligibility for healthcare benefits and services (point 5(a)).
Annex III, point 4 (employment and workers management): If your AI system supports clinical staffing or workforce decisions, this category may also apply.
The practical test: if your AI system could directly influence a clinical decision affecting patient health, it is almost certainly high-risk; a minimal decision sketch follows the list below. This includes:
- Radiology AI (image analysis, lesion detection)
- Pathology AI (histology, cytology analysis)
- Clinical decision support (diagnosis recommendations)
- Triage scoring systems
- Mental health risk assessment tools
- AI-driven drug interaction checking
- AI in remote patient monitoring
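The classification questions above can be made explicit as decision logic. The sketch below is an illustrative reading of Article 6 and Annex III, not legal advice, and the question names are our own shorthand.

```python
# Minimal sketch of the classification triage above. The ordering mirrors the
# Act's structure: the Annex I product route first, then the Annex III use
# case, then the conservative practical test. Illustrative, not legal advice.
def likely_high_risk(safety_component_of_mdr_ivdr_device: bool,
                     emergency_healthcare_triage: bool,
                     influences_clinical_decision: bool) -> bool:
    if safety_component_of_mdr_ivdr_device:
        return True  # Article 6(1) via Annex I (MDR/IVDR)
    if emergency_healthcare_triage:
        return True  # Annex III, point 5(d)
    # Conservative default: clinical influence almost always means high-risk.
    return influences_clinical_decision

# A radiology lesion-detection tool embedded in a CE-marked device:
print(likely_high_risk(True, False, True))    # True
# A back-office scheduling assistant with no clinical influence:
print(likely_high_risk(False, False, False))  # False
```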
What High-Risk Classification Means in Practice
If your system is high-risk, you face the full compliance stack before EU deployment:
1. Technical Documentation (Article 11)
You must prepare and maintain comprehensive technical documentation covering:
- System purpose and intended use
- Risk classification and justification
- Training, validation, and testing datasets
- System performance metrics (accuracy, sensitivity, specificity — with confidence intervals)
- Limitations and known failure modes
- Post-market monitoring plan
For medical AI, regulators and notified bodies will look hard at training data representativeness. A model trained primarily on US hospital data, with European patient demographics underrepresented, will face scrutiny.
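To make the metrics requirement concrete, here is a minimal sketch that computes sensitivity, specificity, and accuracy with 95% confidence intervals from confusion-matrix counts. The counts are invented, and the Wilson score interval is one reasonable method choice, not something the Act prescribes.

```python
import math

def wilson_interval(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Illustrative confusion-matrix counts from a held-out clinical test set.
tp, fn, tn, fp = 412, 38, 1890, 160

for name, successes, total in [
    ("sensitivity", tp, tp + fn),
    ("specificity", tn, tn + fp),
    ("accuracy", tp + tn, tp + fn + tn + fp),
]:
    lo, hi = wilson_interval(successes, total)
    print(f"{name}: {successes / total:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Reporting the interval, not just the point estimate, is what separates documentation a notified body can actually evaluate from a marketing number.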
2. Data Governance (Article 10)
Training and validation datasets must be documented for:
- Geographic and demographic representativeness
- Data collection methodology
- Labelling procedures and quality assurance
- Bias identification and mitigation steps
Clinical datasets have patient privacy obligations layered on top. Your data governance must satisfy both the AI Act and GDPR — these are not redundant; they require separate documentation tracks.
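One way to operationalise the representativeness requirement is an automated audit that compares each demographic stratum in the training set against a reference population for the intended deployment setting. The sketch below is illustrative: the field name, reference shares, and the 20% flag threshold are all assumptions you would replace with values justified in your documentation.

```python
from collections import Counter

# Assumed reference population shares for the intended EU deployment setting.
REFERENCE_SHARES = {"18-39": 0.30, "40-64": 0.42, "65+": 0.28}

def audit_representativeness(records: list[dict], field: str = "age_band",
                             tolerance: float = 0.20) -> list[str]:
    """Flag strata whose share deviates from the reference by more than
    `tolerance` (relative deviation). Returns human-readable findings."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    findings = []
    for stratum, expected in REFERENCE_SHARES.items():
        observed = counts.get(stratum, 0) / total if total else 0.0
        if abs(observed - expected) / expected > tolerance:
            findings.append(
                f"{field}={stratum}: observed {observed:.1%} vs reference "
                f"{expected:.1%}; document justification or mitigation"
            )
    return findings

# Toy dataset skewed towards younger patients: the 65+ stratum gets flagged.
records = ([{"age_band": "18-39"}] * 500 + [{"age_band": "40-64"}] * 480
           + [{"age_band": "65+"}] * 120)
for finding in audit_representativeness(records):
    print(finding)
```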
3. Transparency and Human Oversight (Articles 13–14)
High-risk AI systems must:
- Explain their outputs in a way clinicians can interpret
- Enable human review of AI-generated recommendations
- Log all outputs that influence clinical decisions
- Allow override of AI recommendations without friction
The "human in the loop" requirement is particularly significant for clinical AI. Systems that present AI output in a way that psychologically discourages override (high-confidence scores, one-click accept) may be considered non-compliant with the human oversight obligation.
4. Conformity Assessment
Healthcare AI systems that are also CE-marked medical devices undergo conformity assessment through a notified body under MDR or IVDR. That process is being extended to cover AI Act requirements: notified body designations are being broadened so the same body can assess AI Act compliance in parallel with device regulation.
If your system is not an MDR/IVDR device but is still high-risk under the AI Act, you follow the conformity assessment procedure based on internal control (self-assessment), applying EU harmonised standards once published, and register the system in the EU database for high-risk AI systems.
MDR/IVDR Overlap: AI in Medical Devices
Many healthcare AI companies are already navigating CE marking under MDR (Medical Device Regulation) or IVDR (In Vitro Diagnostic Regulation). The AI Act creates an additional compliance layer, not a replacement.
The relationship:
- MDR/IVDR: Governs the medical device itself — clinical safety and performance
- AI Act: Governs the AI component specifically — model quality, transparency, documentation
A diagnostic AI system is subject to both. The good news: the frameworks are designed to align. Article 43(3) of the AI Act provides that conformity assessment for AI within medical devices runs through the same notified body procedure used under MDR/IVDR.
For companies already through MDR conformity assessment, the primary additional work (a gap-analysis sketch follows this list) is:
- AI Act-specific technical documentation
- Data governance documentation in the AI Act format
- EU AI database registration
- Ongoing logging and post-market surveillance per AI Act requirements
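A practical way to scope that work is a gap analysis mapping each AI Act obligation to your existing technical file. The sketch below is a skeleton only; the status notes describe a typical MDR file, and both the article selection and the wording are our assumptions, not a mandated structure.

```python
# Gap-analysis skeleton: AI Act obligation -> status against an existing MDR
# technical file. Entries are illustrative placeholders to be replaced by
# your own regulatory assessment.
AI_ACT_TO_MDR_FILE = {
    "Art. 10 data and data governance": "partial: clinical evaluation covers data sources; add dataset provenance and bias documentation",
    "Art. 11 technical documentation": "extend device description and verification sections with model-specific detail",
    "Art. 12 record-keeping / logging": "new: the MDR file has no equivalent of automatic output logging",
    "Art. 14 human oversight": "partial: usability engineering file exists; add override design evidence",
    "Art. 72 post-market monitoring": "align with the existing MDR post-market surveillance plan",
}

for obligation, status in AI_ACT_TO_MDR_FILE.items():
    print(f"{obligation:35s} -> {status}")
```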
Prohibited Uses Relevant to Healthcare
The AI Act prohibits AI systems that exploit a person's vulnerabilities due to age, disability, or a specific social or economic situation (Article 5). Patients will very often fall within these categories. This means:
- AI systems that exploit cognitive or physical vulnerability to influence patient behaviour (e.g., pushing specific treatment pathways that serve commercial interests over clinical outcomes) are prohibited
- Social scoring systems applied to patients are prohibited
- Emotion recognition in clinical settings that triggers differential treatment is tightly regulated
Minimum Compliance Timeline for Healthcare AI Companies
| Deadline | Obligation |
|---|---|
| Now | Begin technical documentation; audit training data |
| 2 August 2026 | Requirements for Annex III high-risk systems fully in force; EU AI database registration required |
| 2 August 2027 | Requirements extend to high-risk AI in Annex I products (MDR/IVDR devices) |
| Ongoing | Post-market monitoring, incident reporting |
Do not wait for notified body guidance before starting. The documentation requirements are clear in the Act itself. Companies that begin now have 15 months to build compliant systems. Companies that wait for final guidance risk missing the deadline entirely.
Key Compliance Actions for Healthcare AI Companies
- Classify all your AI systems against Annex III — most clinical AI will be high-risk
- Begin technical documentation for all high-risk systems
- Audit training datasets for representativeness, provenance, and labelling quality
- Implement logging for all AI-generated clinical outputs
- Build human oversight mechanisms into clinical workflows — not as afterthoughts
- If MDR/IVDR-regulated: map AI Act requirements against your existing technical file
- Register a point of contact for EU AI Act compliance