
Does the EU AI Act Apply to Your SaaS Product?

7 min read · Updated 2 May 2026

If your software uses any form of machine learning, automated decision-making, or AI-generated output — and you operate in or sell into the EU — the EU AI Act almost certainly applies to you. The question is how, and at what compliance level.

This guide answers the three questions every SaaS founder needs to answer before August 2026: does the Act apply to you, which risk tier do your features fall into, and what do you have to do about it.


Who the AI Act Covers

The EU AI Act applies to:

  • Providers — companies that develop and place AI systems on the EU market
  • Deployers — companies that use AI systems in a professional context within the EU
  • Importers and distributors — if you bring an AI system from outside the EU into the EU market

If you build a SaaS product that includes AI features and sell it to EU customers, you are a provider. If you use a third-party AI tool in your operations (e.g., using an AI hiring tool for your own recruitment), you are a deployer. Many SaaS companies are both.

Critically, the company's geographic location does not matter. A US startup selling AI-powered software to EU businesses is covered.


The Four Risk Tiers

The AI Act classifies AI systems into four categories. Your compliance obligations depend entirely on which category your system falls into.

| Risk tier | Examples | What's required |
| --- | --- | --- |
| Prohibited | Social scoring, real-time mass biometric surveillance, subliminal manipulation | Banned outright; cannot be deployed |
| High-risk | AI in hiring, credit scoring, medical devices, law enforcement, critical infrastructure | Extensive documentation, testing, registration, ongoing monitoring |
| Limited transparency | Chatbots, deepfake generators | Must disclose AI interaction to users |
| Minimal risk | Spam filters, recommendation engines, most general SaaS AI features | No mandatory requirements (voluntary codes of conduct) |

The majority of SaaS companies fall into limited transparency or minimal risk. But "we don't think we're high-risk" is not a compliance strategy — you need to assess and document it.
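If you track your features in an internal compliance inventory, the four tiers map naturally onto an enum. The sketch below is a minimal illustration; the names and comments are our own shorthand, not terms from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers (labels are our own shorthand)."""
    PROHIBITED = "prohibited"          # banned outright, cannot be deployed
    HIGH_RISK = "high-risk"            # Annex III: full conformity regime
    LIMITED_TRANSPARENCY = "limited"   # disclosure duties only
    MINIMAL = "minimal"                # voluntary codes of conduct
```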


The High-Risk Trigger: Annex III

High-risk classification is triggered when your AI system is used in one of these eight areas (Annex III of the Act):

  1. Biometric identification and categorisation
  2. Critical infrastructure management (energy, water, transport)
  3. Education and vocational training
  4. Employment, worker management, access to self-employment
  5. Access to essential services (credit, insurance, public benefits)
  6. Law enforcement
  7. Migration, asylum, and border control
  8. Administration of justice and democratic processes

HR software that automates or significantly influences hiring, promotion, or performance evaluation is high-risk. Credit scoring tools used by fintech platforms are high-risk. Recruitment chatbots that screen CVs and rank candidates may be high-risk.

If you sell AI features to customers in these verticals — even if you don't operate in the sector yourself — you may still have obligations as a provider.
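For screening, the eight areas can live as a checklist constant in your assessment tooling. The labels below are our own paraphrases for illustration, not the legal text of Annex III; check each feature's declared use cases against the list.

```python
# Annex III areas, paraphrased for an internal screening checklist.
# These labels are our own summaries, not the legal text.
ANNEX_III_AREAS = frozenset({
    "biometric identification and categorisation",
    "critical infrastructure management",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
})

def touches_annex_iii(declared_use_cases: set[str]) -> bool:
    """True if any declared use case falls in an Annex III area."""
    return not ANNEX_III_AREAS.isdisjoint(declared_use_cases)
```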


General Purpose AI (GPAI) Models

If you build on top of foundation models (GPT-4, Claude, Gemini, Mistral), the AI Act distinguishes between the model provider and the downstream builder or deployer.

  • The model provider (OpenAI, Anthropic, Google) has obligations around technical documentation, copyright policy, and providing information to downstream builders
  • You, as the application builder, have obligations around how you deploy the model — particularly transparency and ensuring the model isn't used for prohibited or high-risk purposes without appropriate safeguards

Using an API does not transfer all liability to the API provider.
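In practice, the application builder's transparency duty can be as simple as making sure every model-backed reply is labelled before it reaches the user. The sketch below assumes a generic `call_model` callable standing in for whatever API client you actually use; it illustrates the disclosure pattern, not any vendor's SDK.

```python
from typing import Callable

AI_DISCLOSURE = "You are interacting with an AI assistant."

def answer_user(question: str, call_model: Callable[[str], str]) -> dict:
    """Return a model reply wrapped with the AI-interaction disclosure.

    `call_model` is a placeholder for your real API client. The
    labelling duty shown here sits with you, the application builder,
    not the model provider.
    """
    reply = call_model(question)
    return {
        "disclosure": AI_DISCLOSURE,  # surface this in the UI
        "ai_generated": True,         # machine-readable flag for logging
        "reply": reply,
    }
```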


Key Deadlines

| Date | What changes |
| --- | --- |
| 1 August 2024 | AI Act entered into force |
| 2 February 2025 | Prohibited AI practices banned; these must already be removed |
| 2 August 2025 | GPAI model obligations apply |
| 2 August 2026 | High-risk AI system obligations fully apply |
| 2 August 2027 | Rules extend to high-risk AI embedded in regulated products (Annex I); GPAI models placed on the market before August 2025 must be brought into compliance |

The 2 August 2026 deadline is the most important for SaaS companies. If you have any features that could be classified as high-risk, documentation and conformity requirements must be in place by that date.


Quick Self-Assessment

Answer these four questions in order (the code sketch after the list encodes the same decision tree):

1. Do you operate in or sell into the EU? If no → AI Act does not apply to you (yet). If yes → continue.

2. Does your product include any AI or machine learning features? If no → not covered. If yes → continue.

3. Do any of your AI features make or influence decisions in the Annex III areas above? If no → you are likely in limited transparency or minimal risk. If yes → you need a formal high-risk assessment.

4. Do you use AI to interact with end users (chatbots, automated responses)? If yes → transparency disclosure requirements apply regardless of risk tier.
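Here is that triage as code, under our own labels. It is an aid for a first pass, not a legal determination; in particular, a "yes" on question 3 yields "needs a formal assessment", not a final high-risk classification.

```python
def classify_feature(
    sells_into_eu: bool,
    uses_ai: bool,
    annex_iii_use: bool,
    interacts_with_users: bool,
) -> str:
    """Triage a single product feature against the four questions above."""
    if not sells_into_eu:
        return "out of scope (for now)"
    if not uses_ai:
        return "not covered"
    if annex_iii_use:
        return "needs formal high-risk assessment"
    if interacts_with_users:
        return "limited transparency: disclosure required"
    return "likely minimal risk: document the classification anyway"
```

For example, `classify_feature(True, True, False, True)` returns the limited-transparency result, matching questions 1 through 4 above.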


What Minimal and Limited Risk Companies Must Do

Even if you are not high-risk, you have baseline obligations:

  • Disclose AI interactions: If users interact with an AI system (chatbot, AI-generated content), they must be told they are interacting with AI
  • Mark AI-generated content: Deepfakes and synthetic media must be labelled
  • Keep internal records: Document what AI systems you use and what they do. This is good practice even where not legally mandated; a sample register entry follows this list
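The Act does not prescribe a format for these internal records, so the entry below is a structure we made up for illustration; any schema that captures purpose, model, risk tier, and review date will do.

```python
# One entry in an internal AI-system register. Illustrative only:
# the field names and the example system are invented for this sketch.
support_chatbot = {
    "name": "support-chatbot",
    "purpose": "answers billing questions for logged-in customers",
    "model": "third-party LLM via API",
    "risk_tier": "limited transparency",
    "disclosure_shown": True,
    "annex_iii_areas": [],           # empty -> no high-risk trigger
    "last_reviewed": "2026-05-02",
    "owner": "cto@example.com",
}
```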

What High-Risk Companies Must Do (Summary)

If you are classified as high-risk, the requirements are substantial:

  • Implement a risk management system covering the AI system's lifecycle
  • Ensure training data is relevant, accurate, and free of inappropriate bias
  • Maintain technical documentation sufficient for a conformity assessment
  • Enable human oversight — the AI system must allow human intervention
  • Achieve accuracy, robustness, and cybersecurity standards
  • Register the AI system in the EU database before deployment
  • Draw up an EU Declaration of Conformity
  • Affix a CE marking

The Bottom Line

For most SaaS companies, the AI Act means:

  1. Conducting a documented risk classification for every AI feature
  2. Adding disclosure language where users interact with AI
  3. If any feature is high-risk, starting the conformity assessment process now; there is not enough time to complete one if you wait for the August 2026 deadline

Compliance is not a legal department problem. For a 50-person AI startup, this sits with the CTO and the founder.

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →