August 2, 2026 is the date the EU AI Act's high-risk AI obligations become enforceable. If your company builds or deploys AI systems in the EU, that is the deadline you need to be ready for.
This checklist covers every action a SaaS company needs to complete. Use it to assess your current state and identify gaps.
Step 1 — Inventory and Classify
Before you can comply, you need to know what you have.
- List every AI feature in your product. Include ML models, scoring systems, recommendation engines, chatbots, AI-generated content, and any third-party AI APIs you call.
- List every AI tool used internally. Hiring tools, performance monitoring, customer risk assessment — any AI used in operations that affects people.
- Classify each system against the four AI Act tiers: prohibited, high-risk, limited transparency, minimal risk.
- Document your classification rationale for each system in writing — not just the conclusion, but why.
- Check against the prohibited list (Article 5). Remove any prohibited features immediately. The deadline for this was February 2, 2025.
- Check against Annex III for each system. If you operate in or sell to HR, credit, insurance, education, or essential services sectors, assess carefully.
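The inventory above works better as structured data than as a one-off spreadsheet, because the classification rationale then travels with each system. A minimal sketch, where the record fields, tier names, and example system are illustrative assumptions rather than anything mandated by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_TRANSPARENCY = "limited transparency"
    MINIMAL_RISK = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    vendor: str            # "internal" or the third-party API provider
    tier: RiskTier
    rationale: str         # the "why", not just the conclusion
    annex_iii_match: bool  # does an Annex III use case apply?

inventory = [
    AISystemRecord(
        name="cv-screening",
        purpose="Ranks job applicants for recruiters",
        vendor="internal",
        tier=RiskTier.HIGH_RISK,
        rationale="Employment use case listed in Annex III.",
        annex_iii_match=True,
    ),
]

high_risk = [s for s in inventory if s.tier is RiskTier.HIGH_RISK]
print(f"{len(high_risk)} high-risk system(s) to document")
```

Keeping the rationale as a required field forces the written justification that Step 1 asks for.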
Step 2 — Address Prohibited Practices Immediately
If you have not already done this, it is urgent.
- Review your product for any features that could constitute social scoring, real-time remote biometric identification in publicly accessible spaces, subliminal manipulation, exploitation of vulnerable groups, or prohibited emotion recognition (in workplaces and education).
- Remove or redesign any feature that meets the prohibited definition.
- Document the removal — if you modified a feature, record what changed and why.
Step 3 — High-Risk AI: Full Compliance Programme
For each system classified as high-risk:
Risk Management
- Implement a risk management system covering the system's lifecycle (Article 9)
- Identify, analyse, and evaluate risks to health, safety, and fundamental rights
- Define risk mitigation measures and document them
- Schedule regular risk management reviews — at least annually and after significant system changes
Data Governance
- Document the provenance of all training, validation, and testing datasets (Article 10)
- Evaluate training data for representativeness — does it reflect the population your system will be used on?
- Conduct a bias evaluation across relevant protected characteristics (gender, ethnicity, age, etc.)
- Document any known data gaps and the mitigations applied
- Confirm GDPR lawful basis for any personal data in training sets
- Implement ongoing monitoring for data drift post-deployment
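One concrete starting point for the bias evaluation above is the selection-rate gap between protected groups (the demographic parity difference). This is a sketch of one possible metric, not the Act's prescribed method, and the decision data below is invented for illustration:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool). Returns selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

# Illustrative data: model decisions tagged with a protected attribute
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 45 + [("B", False)] * 55)

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
```

Run this per protected characteristic, record the gaps in your data governance file, and document the mitigation applied where a gap exceeds your chosen threshold.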
Technical Documentation
- Produce full Article 11 technical documentation for each high-risk system
- Include: system description, intended purpose, design methodology, architecture, training data, performance metrics, known limitations
- Ensure documentation is sufficient for a conformity assessor to evaluate the system
- Establish version control — documentation must track to specific model versions
Record-Keeping and Logging
- Implement automatic event logging sufficient to identify risks and enable post-deployment monitoring (Article 12)
- Ensure logs capture: when the system was used, inputs, outputs, and any flags or errors
- Establish log retention period and access controls
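The logging items above translate naturally into structured, append-only records, one per AI decision. A minimal sketch assuming JSON-lines records; the field names are illustrative choices, not Article 12's wording, and retention and access controls would sit in the storage layer:

```python
import json
import time
import uuid

AUDIT_LOG = []  # in production: an append-only store with retention and access controls

def log_ai_event(system_id, inputs, outputs, flags=None):
    """Append one auditable record per AI decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "system_id": system_id,
        "timestamp": time.time(),   # when the system was used
        "inputs": inputs,
        "outputs": outputs,
        "flags": flags or [],       # errors, overrides, low-confidence warnings
    }
    AUDIT_LOG.append(json.dumps(record))  # serialise so stored records are immutable text
    return record

rec = log_ai_event("credit-scoring-v3", {"income": 42000}, {"score": 0.71})
```

Serialising at write time makes each record self-describing, which helps when an assessor or authority asks for logs covering a specific period.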
Transparency to Deployers
- Produce Article 13 instructions for deployers — covering intended purpose, performance, limitations, oversight requirements
- Ensure deployer documentation is provided at product onboarding
- Update documentation when the system changes significantly
Human Oversight
- Design the product to enable human oversight — deployers must be able to review, interpret, and override AI outputs (Article 14)
- Ensure the UI does not discourage override or present AI outputs as final
- Confirm the system can be stopped or paused by a human operator
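Human oversight can be enforced in the data model itself, so that no AI output becomes final without a reviewer action. A sketch under that design assumption; the state names and fields are illustrative:

```python
class Decision:
    """An AI output that cannot become final without human review."""

    def __init__(self, ai_output):
        self.ai_output = ai_output
        self.status = "pending_review"   # never "final" straight from the model
        self.final_output = None
        self.reviewer = None

    def approve(self, reviewer):
        self.status = "approved"
        self.final_output = self.ai_output
        self.reviewer = reviewer

    def override(self, reviewer, corrected_output):
        self.status = "overridden"       # override is a first-class path, not a workaround
        self.final_output = corrected_output
        self.reviewer = reviewer

d = Decision({"loan": "deny"})
d.override("analyst-17", {"loan": "approve"})
```

Making override a named state (rather than an edit after the fact) also gives you the oversight evidence the logging requirements ask for.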
Accuracy, Robustness, and Cybersecurity
- Define and document accuracy and performance metrics for the system
- Test robustness against adversarial inputs, errors, and inconsistencies (Article 15)
- Assess cybersecurity of the model against manipulation attempts
Conformity Assessment
- Complete the internal-control (self-assessment) conformity procedure, which applies to most Annex III systems
- For biometric identification and critical infrastructure AI, engage a notified body for third-party assessment
- Produce a Declaration of Conformity (Article 47)
- Affix the CE marking to the system (digitally, for software-only products) and its accompanying documentation
EU AI Database Registration
- Register each high-risk AI system in the EU AI database before placing on the EU market (Article 49)
- Keep registration current — update when significant changes are made
Step 4 — Limited Transparency AI (All Interactive and Generative AI)
For chatbots, AI assistants, generative AI features, and emotion recognition:
- Add a clear disclosure at the start of every AI interaction informing users they are interacting with an AI system (Article 50)
- Implement labelling for AI-generated content (text, images, audio, video) presented to users
- For emotion recognition: inform individuals before assessment occurs
- Review your product UI — disclosures must be visible and timely, not buried in settings
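One way to keep disclosures visible and timely is to make them structural: attach the disclosure and the AI-content label to every AI response at the API layer, so the UI cannot render an output without them. A sketch; the field names are assumptions, not anything the Act mandates:

```python
AI_DISCLOSURE = "You are interacting with an AI system."

def ai_response(text, generated=True):
    """Wrap model output so the disclosure and AI-content label travel with it."""
    return {
        "text": text,
        "ai_generated": generated,   # drives content labelling in the UI
        "disclosure": AI_DISCLOSURE if generated else None,
    }

reply = ai_response("Your ticket has been escalated.")
```

With this shape, a missing disclosure is a bug in one function rather than an audit of every screen.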
Step 5 — General Purpose AI (If Applicable)
If you build or fine-tune foundation models (GPAI):
- Classify the model — is it a GPAI model with systemic risk?
- Produce technical documentation and a copyright policy (Article 53)
- Make a summary of training content publicly available
- For systemic risk GPAI (10^25 FLOPs threshold): complete adversarial testing, incident reporting setup, cybersecurity measures
GPAI obligations apply from August 2, 2025 — earlier than the high-risk deadline.
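Whether a model approaches the 10^25 FLOPs presumption can be estimated with the common heuristic of roughly 6 FLOPs per parameter per training token. This is an approximation used in the scaling literature, not the Act's method of calculation, and the model size below is illustrative:

```python
def training_flops(params, tokens):
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

THRESHOLD = 1e25  # training-compute presumption of systemic risk

# Illustrative: a 70B-parameter model trained on 15T tokens
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}", flops >= THRESHOLD)
```

Even a fairly large fine-tune lands well under the threshold on this estimate; the systemic-risk tier targets frontier-scale training runs.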
Step 6 — Post-Market Monitoring Plan
High-risk AI compliance does not end at deployment.
- Implement a post-market monitoring system to collect and analyse performance data (Article 72)
- Define KPIs for ongoing monitoring — accuracy, error rates, bias metrics
- Establish a process for reporting serious incidents to national authorities (Article 73)
- Set a review cadence — when will you re-assess the system after deployment?
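The KPIs above can be enforced as simple threshold checks that flag breaches for incident review automatically. A sketch with invented thresholds and metrics; calibrate both against your own system's documented performance:

```python
# Illustrative thresholds: set these from your documented performance metrics
THRESHOLDS = {"accuracy": 0.90, "error_rate": 0.05, "bias_gap": 0.10}

def check_kpis(metrics):
    """Compare live metrics to thresholds; return breaches for incident review."""
    breaches = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        breaches.append("accuracy")
    if metrics["error_rate"] > THRESHOLDS["error_rate"]:
        breaches.append("error_rate")
    if metrics["bias_gap"] > THRESHOLDS["bias_gap"]:
        breaches.append("bias_gap")
    return breaches

print(check_kpis({"accuracy": 0.87, "error_rate": 0.03, "bias_gap": 0.12}))
```

A non-empty result should open a review ticket; whether a breach also qualifies as a serious incident that must be reported to authorities is a separate judgment your incident process needs to capture.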
Step 7 — GDPR Overlap
Where your AI system processes personal data:
- Confirm GDPR lawful basis for all processing activities in the AI system
- Update privacy notices to include AI processing disclosure (Articles 13/14 GDPR)
- Conduct DPIA if not already done for high-risk processing
- Ensure GDPR Article 22 (automated decisions) rights are addressed alongside AI Act human oversight requirements
- Review data processing agreements with any AI sub-processors
Timeline Summary
| Deadline | Action required |
|---|---|
| Now (overdue) | Remove prohibited AI practices |
| 2 August 2025 | GPAI model obligations — documentation and transparency |
| 2 August 2026 | All high-risk AI fully compliant — conformity assessment, EU database registration, human oversight, technical documentation |
| 2 August 2027 | High-risk AI embedded in regulated products (Annex I) and GPAI models already on the market before August 2025 must comply |
Where to Start If You're Behind
If you have not started:
- Classify first — without a clear inventory and classification, you cannot plan
- Focus on prohibited items — any prohibited AI must be removed, no exception
- Identify your one or two most clearly high-risk systems — get documentation started on those before addressing everything else
- Treat the conformity assessment as a project, not a document — it requires input from engineering, legal, and product
Most startups can reach a defensible compliance position with 20–40 hours of structured work. The documentation is not technically complex — it requires organisational effort more than legal expertise.