One of the first things every company needs to do under the EU AI Act is classify their AI systems by risk level. Get it wrong in either direction — assuming you're lower risk than you are, or over-engineering compliance for minimal-risk tools — and you waste time or create legal exposure.
This guide explains how the classification works, what each tier requires, and how to document your decision.
The Four-Tier Framework
The EU AI Act uses a risk-based approach. Your obligations scale with the potential harm your AI system could cause to people.
| Tier | Risk level | Compliance burden |
|---|---|---|
| Tier 1 | Prohibited | Complete ban — cannot deploy |
| Tier 2 | High-risk | Full conformity assessment required |
| Tier 3 | Limited transparency | Disclosure obligations only |
| Tier 4 | Minimal risk | No mandatory requirements |
Most enterprise SaaS features fall into Tier 3 or Tier 4. But you need to document that conclusion — "we assumed we were fine" is not a defensible position under an audit.
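If you track classifications in code, the four tiers map naturally onto a small enumeration. A minimal sketch in Python; the type and member names are our own shorthand, not terms from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least restrictive."""
    PROHIBITED = 1            # Tier 1: banned outright, cannot deploy in the EU
    HIGH_RISK = 2             # Tier 2: full conformity assessment required
    LIMITED_TRANSPARENCY = 3  # Tier 3: disclosure obligations only
    MINIMAL_RISK = 4          # Tier 4: no mandatory requirements
```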
Tier 1 — Prohibited AI Practices
These are banned outright as of 2 February 2025. If your system does any of the following, it cannot be deployed in the EU:
- Social scoring based on behaviour or personal characteristics that leads to detrimental or unfavourable treatment (the ban covers private as well as public actors)
- Real-time remote biometric identification in publicly accessible spaces (with very narrow law enforcement exceptions)
- Subliminal or purposefully manipulative techniques that materially distort a person's behaviour and cause, or are likely to cause, significant harm
- Exploitation of vulnerabilities — systems exploiting a person's age, disability, or social or economic situation to materially distort their behaviour
- Predictive policing based solely on profiling, without objective individual assessment
- Emotion recognition in workplaces or educational institutions (with narrow exceptions)
- Biometric categorisation to infer race, political opinions, religious beliefs, sexual orientation, or trade union membership
If any feature of your product touches these areas, remove it or seek legal counsel immediately. The deadline for removal has already passed.
Tier 2 — High-Risk AI Systems
High-risk classification is triggered by the AI system's use case, not the technology itself. An AI system is high-risk if it falls into one of the Annex III categories below. (A second route applies to AI that is a safety component of products covered by the EU product-safety legislation listed in Annex I, but that route rarely applies to SaaS.)
Annex III — High-Risk Use Cases
| Category | Examples relevant to SaaS |
|---|---|
| Biometric systems | Facial recognition for identity verification, emotion detection |
| Critical infrastructure | AI managing energy grids, water systems, transport networks |
| Education | AI scoring or evaluating students, determining access to education |
| Employment | CV screening, candidate ranking, performance monitoring, promotion decisions |
| Essential services | Credit scoring, insurance risk assessment, benefits eligibility |
| Law enforcement | Risk profiling of individuals, evidence reliability assessment |
| Migration and border control | Visa risk assessment, lie detection at borders |
| Justice and democracy | AI assisting judges, AI influencing elections |
The key question is not what your product does — it's what your customer uses it for. If you sell a general-purpose data platform and a customer uses it for credit scoring, that deployment is high-risk even though your product is not purpose-built for it. Your contractual terms, acceptable use policies, and technical safeguards all matter here.
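One practical safeguard is to have customers declare their intended use at onboarding and screen the declaration against the Annex III categories. The sketch below is a hypothetical illustration: the keyword lists and the `flag_high_risk_use` helper are our own, and a keyword match should trigger legal review, not substitute for it.

```python
# Hypothetical screen of a customer's declared use case against Annex III.
# Keyword lists are illustrative only; real screening needs legal review.
ANNEX_III_KEYWORDS = {
    "biometric": ["facial recognition", "emotion detection", "biometric"],
    "employment": ["cv screening", "candidate ranking", "performance monitoring"],
    "essential services": ["credit scoring", "insurance risk", "benefits eligibility"],
    "education": ["student scoring", "admissions", "exam proctoring"],
}

def flag_high_risk_use(declared_use: str) -> list[str]:
    """Return the Annex III categories a declared use may fall under."""
    text = declared_use.lower()
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

# Example: a customer of a general-purpose data platform declares this use.
print(flag_high_risk_use("We use the platform for credit scoring of loan applicants"))
# -> ['essential services']
```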
Tier 3 — Limited Transparency Risk
AI systems in this tier do not require a conformity assessment, but they must make their artificial nature clear to the people who interact with them or consume their output.
Systems in this tier include:
- AI chatbots and virtual assistants
- AI that generates text, audio, images, or video users might mistake for human-created content
- Emotion recognition systems used outside prohibited contexts
- Deepfake or synthetic media generators
What you must do:
- Inform users they are interacting with an AI system (at the point of first interaction)
- Label AI-generated content as such when it could be mistaken for real
- For deepfakes used legitimately (art, satire), disclose the artificial nature
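For a chatbot, the first obligation translates into showing a notice before the first AI reply. Here is a minimal sketch; the `DisclosingChatSession` wrapper and the notice wording are illustrative assumptions, not language from the Act:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

class DisclosingChatSession:
    """Wraps a chat session so the AI disclosure is shown at first interaction."""

    def __init__(self, send_message):
        self._send = send_message  # hypothetical transport callback
        self._disclosed = False

    def reply(self, text: str) -> None:
        if not self._disclosed:
            self._send(AI_DISCLOSURE)  # disclose before the first AI reply
            self._disclosed = True
        self._send(text)

# Example usage, with print standing in for a real messaging transport.
session = DisclosingChatSession(print)
session.reply("Hi! How can I help with your order?")
```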
Tier 4 — Minimal Risk
The vast majority of AI applications fall here. Examples include:
- Spam filters
- AI-powered search and recommendation engines
- Predictive text
- AI used in software development (code completion tools)
- Customer segmentation tools
- General-purpose analytics with AI components
No mandatory requirements. The Act encourages companies to voluntarily adopt codes of conduct and best practices, but nothing is legally required.
How to Classify Your AI Systems: Step-by-Step
Step 1 — Inventory your AI systems
List every AI feature, model, or automated decision-making component in your product. Include third-party AI APIs you call.

Step 2 — Identify the use context
For each system, document: what decision does it influence? Who is the end user? What sector does the customer operate in?

Step 3 — Check against the prohibited list
Does the system do anything on the prohibited list? If yes, stop — it cannot be deployed.

Step 4 — Check against Annex III
Is the intended use case (or a reasonably foreseeable use case) in any of the eight Annex III categories? If yes, it is high-risk.

Step 5 — Check for transparency obligations
Does the system interact with users who might believe they are talking to a human? Does it generate content users might believe is human-created?

Step 6 — Document the classification
Write a short classification memo for each AI system. Include: the system description, the intended use case, which categories you checked, and the conclusion. This is the minimum defensible record.
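Steps 3–5 form an ordered decision procedure: the prohibited check takes precedence over Annex III, which takes precedence over transparency. The sketch below encodes that ordering; the boolean inputs are assumed to come from your own legal review, so the function enforces precedence but does not make the judgment for you.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_TRANSPARENCY = "limited transparency"
    MINIMAL_RISK = "minimal risk"

def classify(
    does_prohibited_practice: bool,
    in_annex_iii_category: bool,
    interacts_or_generates_content: bool,
) -> RiskTier:
    """Apply steps 3-5 in order; the first tier that matches wins."""
    if does_prohibited_practice:        # Step 3: prohibited list
        return RiskTier.PROHIBITED
    if in_annex_iii_category:           # Step 4: Annex III use cases
        return RiskTier.HIGH_RISK
    if interacts_or_generates_content:  # Step 5: transparency obligations
        return RiskTier.LIMITED_TRANSPARENCY
    return RiskTier.MINIMAL_RISK        # Step 6 still requires documenting this

# Example: a CV-ranking feature reviewed against the three checks.
print(classify(False, True, False))  # -> RiskTier.HIGH_RISK
```

Note that a high-risk system can also carry transparency obligations; the ordering here simply mirrors the precedence of the steps above.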
Common Misclassifications to Avoid
"We use AI but it doesn't make decisions" AI that influences decisions is treated the same as AI that makes them if the influence is substantial. A CV ranking system that outputs a score a recruiter uses to shortlist candidates is influencing the hiring decision.
"Our customers decide how to use it" You are still liable as a provider if high-risk use is foreseeable. Acceptable use policies are part of your compliance, not a complete defence.
"It's just a recommendation engine" Recommendation engines used for credit, insurance, or essential services access are high-risk regardless of how you describe them.
"We use OpenAI's API so they're responsible" GPAI providers (OpenAI, Anthropic, Google) have separate obligations. You, as the application builder, retain obligations around how you deploy the model and what use cases you enable.
Classification Documentation Template
For each AI system, document the following:
System name: [name]
Description: [what it does]
Intended use: [specific use case in production]
End users: [who interacts with it]
Customer sectors: [what industries your customers operate in]
Prohibited check: No prohibited practices identified / [describe if relevant]
Annex III check: Not in scope / [identify category if applicable]
Transparency check: [does it interact with or generate content for end users?]
Classification: Prohibited / High-risk / Limited transparency / Minimal risk
Rationale: [2–3 sentences explaining the conclusion]
Date of assessment: [date]
Reviewed by: [name/role]
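If you keep these memos in a repository rather than in documents, the template maps directly onto a record type. A sketch using Python dataclasses; the field names mirror the template above and are otherwise our own:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationMemo:
    """One classification record per AI system, mirroring the template above."""
    system_name: str
    description: str
    intended_use: str
    end_users: str
    customer_sectors: list[str]
    prohibited_check: str   # e.g. "No prohibited practices identified"
    annex_iii_check: str    # e.g. "Not in scope" or the relevant category
    transparency_check: str
    classification: str     # Prohibited / High-risk / Limited transparency / Minimal risk
    rationale: str          # 2-3 sentences explaining the conclusion
    assessed_on: date
    reviewed_by: str
```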
What Happens After Classification
- Prohibited: Remove the feature or seek legal advice
- High-risk: Begin conformity assessment process — you need this complete before 2 August 2026
- Limited transparency: Add disclosure language to your product and terms
- Minimal risk: Document the classification, no further action required