The EU AI Act has the highest penalty ceiling of any technology regulation in the EU — higher than GDPR. A startup that deploys prohibited AI or fails to comply with high-risk obligations can face fines of up to 7% of global annual turnover. Understanding the penalty structure is not just legal knowledge — it is risk management.
The Penalty Structure
The AI Act creates three tiers of financial penalties, each calibrated to the severity of the infringement.
| Violation | Maximum fine |
|---|---|
| Deploying prohibited AI | €35,000,000 or 7% of global annual turnover (whichever is higher) |
| Infringement of high-risk AI obligations | €15,000,000 or 3% of global annual turnover (whichever is higher) |
| Providing incorrect, incomplete, or misleading information | €7,500,000 or 1.5% of global annual turnover (whichever is higher) |
For SMEs and startups, each fine is capped at the lower of the fixed amount or the turnover percentage, rather than the higher. The regulation also explicitly requires authorities to take the size of the company into account, including whether it is a startup, SME, or individual operator, when setting penalty levels.
Tier 1: Prohibited AI — €35m or 7%
The highest fine applies to violations of Article 5 — deploying AI practices that are banned outright. These include:
- Social scoring systems
- Real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions)
- Subliminal manipulation causing harm
- Exploiting vulnerabilities of protected groups
- Predictive policing based purely on profiling
- Emotion recognition in workplaces or educational institutions
- Biometric categorisation inferring race, religion, political opinions, or sexual orientation
Any company deploying one of these systems in the EU is exposed to the maximum fine. The prohibitions have applied since 2 February 2025, so there is no grace period remaining.
Worked examples, applying the SME rule (the lower of the two amounts):

- A startup with €2m annual turnover: 7% = €140,000
- A scaleup with €20m annual turnover: 7% = €1,400,000
- A company with €500m annual turnover: €35,000,000 (at this size the 7% figure meets the fixed ceiling)
Tier 2: High-Risk Non-Compliance — €15m or 3%
This applies to failures against the main compliance requirements for high-risk AI systems — the obligations in Articles 8–15 and Article 25 that cover:
- Data governance failures (Article 10)
- Missing technical documentation (Article 11)
- No conformity assessment completed
- Failure to register in the EU AI database
- Missing human oversight mechanisms
- Insufficient accuracy or robustness measures
- Non-compliance with post-market monitoring obligations
This is where most enforcement exposure sits for SaaS companies. A company that deploys an AI hiring tool without completing a conformity assessment, registering in the EU database, or implementing human oversight is exposed at this tier.
Worked examples:

- A startup with €5m annual turnover: 3% = €150,000
- A scaleup with €50m annual turnover: 3% = €1,500,000
Tier 3: Misleading Information — €7.5m or 1.5%
This applies to providing incorrect or incomplete information to national authorities or notified bodies in the course of a conformity assessment, market surveillance, or investigation. Companies that misrepresent their AI system's capabilities, risk classification, or compliance status to regulators face this penalty — separately from any underlying compliance failure.
Enforcement: Who Is Responsible
Each EU member state is required to designate a national competent authority responsible for AI Act enforcement. These authorities have powers to:
- Request access to technical documentation and compliance records
- Conduct audits and inspections
- Require corrective actions
- Withdraw high-risk AI systems from the market
- Impose fines
The European AI Office (established within the European Commission) has enforcement responsibility for General Purpose AI (GPAI) model providers and coordinates enforcement across member states for systemic risks.
Timeline: National competent authorities must be operational by 2 August 2025. Market surveillance powers kick in from 2 August 2026.
Mitigating Factors for Startups
The AI Act explicitly requires authorities to consider these factors when setting penalty levels:
- Nature, gravity, and duration of the infringement
- Number of affected persons and level of harm caused
- Intentionality — was it deliberate or negligent?
- Measures taken to limit harm after the fact
- Financial capacity of the operator, especially for SMEs and startups
- Degree of cooperation with authorities
A startup that makes a good-faith classification error, cooperates with authorities, and corrects the issue quickly is in a materially different position than a company that knowingly deploys prohibited AI or ignores repeated warnings.
This does not mean classification errors are risk-free. But it means that building a documented, good-faith compliance programme is not just good practice — it is a concrete mitigant against maximum penalties.
The Risk Calculation
Consider a 40-person AI startup with €800k ARR building a recruitment AI tool used by EU companies. If they deploy without completing a conformity assessment:
- Probability of a fine in year 1: Low — enforcement infrastructure is still being built
- Probability of scrutiny as the market matures: Much higher
- Consequence of a fine at 3% of €800k: €24,000 directly; but reputational damage and potential market withdrawal are the bigger concerns
- Cost of compliance: A few thousand euros and 20–30 hours of documentation work
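The back-of-envelope comparison above can be expressed directly. The €5,000 compliance-cost figure is an assumption standing in for the article's "a few thousand euros"; the fine ceiling applies the SME rule (the lower of the fixed amount and the percentage).

```python
# Back-of-envelope version of the risk calculation above; illustrative
# only. The €5,000 compliance cost is an assumed midpoint of the
# article's "a few thousand euros".

annual_turnover = 800_000  # €800k ARR
tier2_ceiling = min(15_000_000, 0.03 * annual_turnover)  # SME rule: lower of the two
compliance_cost = 5_000    # assumed

print(f"Tier 2 fine ceiling: €{tier2_ceiling:,.0f}")  # €24,000
print(f"Compliance cost:     €{compliance_cost:,.0f}")
print(f"Ceiling / cost ratio: {tier2_ceiling / compliance_cost:.1f}x")
```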
The economics strongly favour compliance for any startup that intends to operate in the EU long-term.
What "Good Faith Compliance" Looks Like
Regulators enforcing the AI Act are not looking to destroy startups. They are looking for evidence of genuine engagement with the compliance process. Minimum viable good-faith compliance for a high-risk AI provider includes:
- A documented risk classification for each AI feature
- A written conformity assessment (even a brief, honest self-assessment)
- Technical documentation sufficient to explain what the model does and how
- Evidence of bias evaluation in training data
- A human oversight mechanism in the product
- EU AI database registration before deployment
This is not a huge undertaking for most startups. It is primarily documentation work — which is exactly what ComplyOne automates.