
AI Act for Startups: Minimum Viable Compliance

5 min read · Updated 2 May 2026

The EU AI Act is written for large organisations with compliance teams. The documentation requirements are comprehensive, and the conformity assessment process assumes enterprise-scale resources. A 12-person startup building an AI product cannot run the same compliance programme as a 10,000-person technology company.

This article cuts through to what a startup actually needs to do — and what can wait.


The Honest Framing

The AI Act does not have a startup exemption. If you build a high-risk AI system, you face the same requirements as a larger company. The law does not scale compliance obligations to headcount or revenue.

What the Act does acknowledge:

  • Microenterprises and small enterprises may benefit from simplified technical documentation formats (Article 11.4)
  • The European AI Office must produce guidance and templates tailored to SMEs
  • Regulatory sandboxes exist specifically to support startups (Articles 57–63)

The obligations are the same. The tools to meet them are supposed to be more accessible. In practice, startups need to be efficient about compliance, not exempt from it.


Step 1: Classify Your AI Systems Honestly

The first task is figuring out what tier your products actually fall into. Most startup AI products are not high-risk.

Tier your features:

What you build | Likely risk tier
Internal productivity tools (summariser, assistant) | Minimal risk
Customer service chatbot | Limited transparency risk
Content generation | Limited transparency risk (for deepfakes/synthetic media)
Product recommendation engine | Minimal risk
CV screening or candidate ranking | High-risk
Credit scoring or loan decisioning | High-risk
Healthcare diagnosis support | High-risk
Fraud detection for banks | High-risk
If none of your current features are high-risk, your primary obligation is chatbot/transparency disclosure. That's the entire compliance stack until you build high-risk features.
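A lightweight way to make this classification concrete is to keep a machine-readable inventory of AI features and their tiers in your own repo. The sketch below is a minimal illustration, not something the Act prescribes; the feature names, fields, and tiers are assumptions chosen for the example.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"   # transparency obligations apply
    HIGH = "high"         # Annex III obligations apply

@dataclass
class AIFeature:
    name: str
    description: str
    tier: RiskTier
    annex_iii_category: str | None = None  # set only for high-risk features

# Illustrative inventory -- replace with your actual product features.
INVENTORY = [
    AIFeature("support-chatbot", "Customer service assistant", RiskTier.LIMITED),
    AIFeature("cv-ranker", "Ranks job applicants", RiskTier.HIGH,
              annex_iii_category="employment"),
]

def high_risk_features(inventory: list[AIFeature]) -> list[AIFeature]:
    """Features that trigger the full high-risk compliance track."""
    return [f for f in inventory if f.tier is RiskTier.HIGH]
```

Keeping the inventory in code rather than a slide deck means it gets updated when features ship, which is exactly when classifications change.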


Step 2: The Minimum for Limited Transparency Products

If you build chatbots, AI content tools, or other limited transparency products, your obligations are:

  1. Disclose that users are interacting with AI — at the start of any AI interaction
  2. Never deny being an AI when asked directly
  3. Label synthetic content as AI-generated where required (deepfakes, AI audio that imitates real people)
  4. Keep internal documentation of what AI systems you deploy

That's it for limited transparency compliance. Build it into the product interface, test it, document it. Estimated time: a few days of engineering and a one-page internal policy document.
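As a concrete illustration, the disclosure can live in the chat backend rather than being scattered across UI code. The following is a minimal sketch under assumed names (`start_session`, `SYSTEM_PROMPT`, and the message format are placeholders, not from the Act or any specific framework): it shows the disclosure at session start and instructs the model to never deny being an AI.

```python
DISCLOSURE = "You are chatting with an AI assistant, not a human."

# System instruction enforcing the "never deny being an AI" obligation.
SYSTEM_PROMPT = (
    "You are an AI assistant. If the user asks whether they are talking "
    "to an AI or a human, always state clearly that you are an AI."
)

def start_session() -> dict:
    """Open a chat session with the AI disclosure shown as the first message."""
    return {
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "assistant", "content": DISCLOSURE}],
    }
```

Centralising both strings in one place also makes them easy to reference in the one-page internal policy document.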


Step 3: If You Have High-Risk Features — Start Now

If your product has high-risk features (Annex III), waiting until 2026 is a mistake. What you need:

Immediately:

  • Write down what your system does, who it affects, and how decisions are made
  • Document your training data sources and composition
  • Start logging AI outputs that influence decisions (see the sketch after this list)
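A decision log does not need heavy infrastructure at the start. The following is a minimal sketch, assuming a JSON-lines file as the store; the field names and file path are illustrative, and in production you would likely route this through your existing logging or data pipeline instead.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # illustrative location

def log_ai_decision(model_version: str, input_summary: str,
                    output: str, influenced_decision: str) -> str:
    """Append one AI output that influenced a decision, with provenance."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # avoid logging raw personal data
        "output": output,
        "influenced_decision": influenced_decision,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```

The point is traceability: each logged record ties an output to a model version and a decision, which is the raw material for both the technical documentation and post-market monitoring later.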

By early 2026:

  • Prepare formal technical documentation covering Annex IV requirements
  • Complete internal conformity assessment
  • Register in the EU AI database
  • Implement human oversight mechanisms in the product

The documentation is the bottleneck. Technical documentation for a well-understood system takes 4–8 weeks to write properly. Starting early gives you time to do it well.


What Startups Often Miss

The training data problem. Many startups trained models on data they did not have rights to, or data they did not adequately document. The AI Act requires documentation of training data sources, composition, and known limitations. If you cannot produce this, your conformity assessment will fail.
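One low-effort habit that helps here is recording a short provenance entry for every dataset at the time you acquire it, rather than reconstructing it later. A minimal sketch, with field names chosen for illustration rather than taken from Annex IV:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One entry in the training-data register (fields are illustrative)."""
    name: str
    source: str                  # where the data came from
    licence: str                 # rights basis for training use
    collected: str               # when and how it was gathered
    composition: str             # what it contains, roughly
    known_limitations: list[str] = field(default_factory=list)

REGISTER = [
    DatasetRecord(
        name="support-tickets-2024",
        source="internal helpdesk export",
        licence="own data; customer ToS permits model training",
        collected="Jan-Dec 2024, exported March 2025",
        composition="~40k English-language support tickets",
        known_limitations=["EU customers only", "no non-English tickets"],
    ),
]
```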

Subgroup performance. If your high-risk system performs differently for different demographic groups, this must be documented and addressed. It is not sufficient to have good overall accuracy if a subgroup is materially disadvantaged.
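Checking this is often just a group-by over your evaluation set. A minimal sketch, assuming you already have per-example predictions, labels, and a demographic attribute (the key names are placeholders):

```python
from collections import defaultdict

def accuracy_by_group(rows: list[dict]) -> dict[str, float]:
    """Per-group accuracy; a large gap between groups needs documenting.

    Each row is assumed to have 'group', 'prediction', and 'label' keys.
    """
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for row in rows:
        total[row["group"]] += 1
        if row["prediction"] == row["label"]:
            correct[row["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

rows = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 1},
    {"group": "B", "prediction": 1, "label": 1},
]
print(accuracy_by_group(rows))  # {'A': 0.5, 'B': 1.0}
```

Accuracy is only one lens; the same group-by pattern applies to false positive rates, error severity, or whatever metric matters for your product.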

Human oversight is a product feature. You cannot bolt human oversight on afterwards. Build review workflows into the product design from the start.
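In practice, "built in from the start" often means a confidence gate: outputs below a threshold are routed to a human queue instead of being acted on automatically. A minimal sketch of the pattern; the threshold value and in-memory queue are illustrative assumptions, not requirements from the Act.

```python
REVIEW_THRESHOLD = 0.85  # illustrative; tune to your risk tolerance

human_review_queue: list[dict] = []

def route_output(output: str, confidence: float) -> str:
    """Auto-apply confident outputs; queue the rest for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto_applied"
    human_review_queue.append({"output": output, "confidence": confidence})
    return "pending_human_review"
```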

GDPR is separate. AI Act documentation and GDPR documentation overlap but are not the same. A DPIA does not replace technical documentation under the AI Act.


Regulatory Sandboxes: The Startup Advantage

The AI Act created regulatory sandboxes — supervised testing environments where companies can develop and test AI systems with regulatory guidance before full compliance obligations apply. Member states must set up sandboxes; the first frameworks are being established in 2025–2026.

For startups with genuinely novel AI systems that do not fit cleanly into existing categories, sandboxes offer:

  • Supervised development with feedback from regulators
  • Protection from enforcement action during the sandbox period
  • Faster path to market once regulatory concerns are addressed

Sandboxes are not an escape from compliance — they are a pathway to it. But for startups building at the frontier, they are worth engaging with.


The Minimum Viable Compliance Stack for AI Startups

Obligation | What to do | When
Prohibited AI audit | Review product for prohibited features | Now
Risk classification | Classify all AI features against Annex III | Now
Chatbot disclosure | Implement user-facing AI disclosure | Before August 2026
High-risk documentation | Prepare technical documentation if applicable | Q1 2026
EU AI database registration | Register high-risk systems | Before August 2026
Post-market monitoring | Log outputs, track performance | From deployment

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →