Annex III is the section of the EU AI Act that defines which AI systems are considered high-risk. If your product falls within it, you face the heaviest compliance requirements in the legislation — full conformity assessments, technical documentation, human oversight mechanisms, and EU database registration, all required before 2 August 2026.
This article explains every Annex III category, what counts, and the practical implications for SaaS companies.
Why Annex III Matters
The EU AI Act distinguishes between AI systems that pose a significant risk to people's health, safety, and fundamental rights, and those that do not. Annex III is the concrete list of use cases where the EU has determined the risk is serious enough to warrant mandatory compliance.
Being in Annex III does not mean your product is dangerous. It means it operates in an area where the consequences of AI failure — biased hiring decisions, incorrect credit denials, flawed medical recommendations — can significantly harm real people.
The compliance burden reflects those stakes, not a judgment on your company.
The Eight Annex III Categories
1. Biometric Identification and Categorisation
In scope:
- Remote biometric identification systems (facial recognition to identify people in public or private spaces)
- Biometric categorisation systems that infer sensitive characteristics (such as race, political opinions, religion, or sexual orientation)
- Emotion recognition systems
Out of scope:
- Authentication systems (verifying a claimed identity, not identifying an unknown person from a database)
- Biometric verification used purely for security access (logging into a device)
Relevant to: Identity verification SaaS, security and access control platforms, KYC/AML tools that use facial recognition.
2. Critical Infrastructure Management
In scope:
- AI used as a safety component in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity
Out of scope:
- General monitoring tools not directly managing operational decisions
- Business intelligence tools in these sectors (unless they feed directly into operational decisions)
Relevant to: Infrastructure management SaaS, SCADA integrations, energy and utilities platforms, traffic management tooling.
3. Education and Vocational Training
In scope:
- AI determining access to educational institutions
- AI evaluating learning outcomes in ways that affect academic progression
- AI monitoring students during assessments (proctoring software)
- AI used to assess appropriate level of education for a person
Out of scope:
- Learning management systems without AI-driven assessment or gatekeeping
- Content recommendation tools in e-learning
- Administrative tools (scheduling, communications)
Relevant to: EdTech SaaS with AI-powered assessment, online examination platforms, credential verification.
4. Employment, Worker Management, and Access to Self-Employment
In scope:
- CV screening and candidate ranking or filtering
- AI used in hiring, promotion, and termination decisions
- Performance and behaviour monitoring of employees
- Task allocation systems that significantly affect working conditions
- AI determining access to self-employment, such as the onboarding and gating of gig workers
Out of scope:
- HR analytics without individual-level decision influence
- Scheduling tools without performance-linked outputs
- General productivity monitoring without individual consequence
Relevant to: HR tech and people analytics platforms, recruitment SaaS, workforce management tools, gig economy platforms.
This is one of the most commonly misclassified categories. If your product automates or scores any part of the hiring or performance management process, you are almost certainly high-risk.
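As an illustration of how a product team might triage this category internally, here is a minimal sketch. The trigger names are this sketch's invention, not AI Act terminology, and a flag here is a prompt for legal review, not a classification.

```python
# Simplified triage for the employment category (Annex III point 4).
# Illustrative only: the trigger names below are assumptions of this
# sketch, and real classification requires review of the legal text.

EMPLOYMENT_TRIGGERS = {
    "cv_screening",            # filters or ranks job applications
    "hiring_decision",         # recommends hiring, promotion, or termination
    "performance_monitoring",  # monitors or evaluates individual performance
    "task_allocation",         # allocates tasks based on behaviour or traits
}

def is_likely_high_risk(features: set[str]) -> bool:
    """True if any product feature matches an employment-category trigger."""
    return bool(features & EMPLOYMENT_TRIGGERS)
```

A recruitment tool with `{"cv_screening", "chat"}` would be flagged; a pure scheduling tool would not.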
5. Access to Essential Private Services and Public Benefits
In scope:
- Credit scoring and creditworthiness assessment
- Life and health insurance risk evaluation
- Eligibility assessment for public benefits and emergency services
- AI in the dispatch of emergency services
Out of scope:
- General financial analytics not used for individual lending decisions
- Insurance pricing tools based on aggregate (not individual) data without decision-linked output
Relevant to: Fintech lending platforms, insurtech underwriting tools, open banking SaaS with credit components.
6. Law Enforcement
In scope:
- Polygraphs and similar tools used to assess individuals
- AI assessing the risk of a natural person offending or re-offending
- AI evaluating the reliability of evidence in the course of an investigation
- Profiling of natural persons in the detection, investigation, or prosecution of criminal offences
Out of scope:
- General data analytics not used in law enforcement context
- Fraud detection tools in commercial (non-law-enforcement) contexts
Relevant to: Fraud detection platforms used by police or public prosecutors, predictive analytics sold to law enforcement.
7. Migration, Asylum, and Border Control
In scope:
- Risk assessment of persons for irregular migration
- Examination of applications for asylum, visa, and residence permits
- Detection of forged documents
- Lie detection systems used at borders
Out of scope:
- General document processing software without border/asylum application context
- Logistics tools used at ports of entry without individual assessment
Relevant to: Immigration SaaS sold to government agencies or border authorities.
8. Administration of Justice and Democratic Processes
In scope:
- AI assisting courts in researching or interpreting facts and law
- AI in alternative dispute resolution
- AI influencing elections or voter behaviour
Out of scope:
- Legal research tools used by lawyers (not courts)
- Contract analysis tools used in commercial contexts
Relevant to: LegalTech sold to courts or public bodies, civic tech platforms.
The "Reasonably Foreseeable Use" Rule
Classification turns on your system's intended purpose, but the Act also requires providers to address "reasonably foreseeable misuse" in their risk management. You cannot simply disclaim a high-risk use that is a predictable reality of how your product is deployed.
This matters especially for:
- General-purpose platforms (data analytics, workflow automation) where some customers will use them for high-risk purposes
- API-first products where you cannot fully control downstream use
Acceptable use policies, contractual prohibitions on high-risk use, and technical controls (rate limiting, use-case restrictions) are part of your compliance picture — not just a legal formality.
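For API-first products, part of that compliance picture can be enforced in code. A minimal sketch, assuming a hypothetical `declared_use` field collected at account signup; the category names are illustrative, not AI Act terminology:

```python
# Illustrative use-case gate for an API platform. The prohibited-use
# list and the declared_use field are assumptions of this sketch.

PROHIBITED_USES = {
    "credit_scoring",
    "candidate_screening",
    "law_enforcement_profiling",
}

class UseCaseProhibitedError(Exception):
    """Raised when an account's declared use violates the acceptable use policy."""

def authorize_request(declared_use: str) -> None:
    """Reject API calls from accounts whose declared use is contractually prohibited."""
    if declared_use in PROHIBITED_USES:
        raise UseCaseProhibitedError(
            f"Use case '{declared_use}' is prohibited under the acceptable use policy."
        )
```

A gate like this does not change your legal classification by itself, but it gives the contractual prohibition teeth and produces evidence that the restriction is enforced.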
What High-Risk Classification Requires
If any of your AI systems fall within Annex III, you must:
| Requirement | Detail |
|---|---|
| Risk management system | Ongoing identification, evaluation, and mitigation of risks |
| Data governance | Training data must be relevant, representative, and bias-evaluated |
| Technical documentation | Sufficient for a conformity assessment — see Article 11 |
| Record-keeping | Automatic logging sufficient to trace decisions post-deployment |
| Transparency | Users and deployers must understand the system's capabilities and limitations |
| Human oversight | The system must permit human review and intervention |
| Accuracy and robustness | Defined performance metrics and failure mode analysis |
| Conformity assessment | Internal control (self-assessment) for most Annex III categories; notified-body assessment for biometric systems where harmonised standards are not fully applied |
| EU database registration | Register before placing on the EU market |
| CE marking | Required before market placement |
Timeline for High-Risk Compliance
These requirements apply from 2 August 2026 (2 August 2027 for high-risk AI embedded in products regulated under Annex I). High-risk systems already placed on the market before 2 August 2026 are generally caught only when they undergo a significant change in design, though systems used by public authorities must comply by 2 August 2030.
Given the documentation requirements, starting in 2026 is too late. Companies with confirmed high-risk systems should begin the conformity assessment process now.