
The EU AI Act: What Every SMB Needs to Know Before August 2026

2026-02-15 · 9 min read

The EU AI Act is the world's first comprehensive AI regulation. Full enforcement for high-risk AI systems begins on 2 August 2026, and it affects far more businesses than most realise.

If your company operates in the EU and uses AI in any form — including third-party tools like ChatGPT, Copilot, or automated decision-making systems — this regulation likely applies to you.

This guide covers what the AI Act requires, who it applies to, what the penalties look like, and what you should be doing now.


What is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is a risk-based regulatory framework for artificial intelligence. It was published in the Official Journal of the EU on 12 July 2024 and applies across all 27 EU member states plus the EEA.

Unlike GDPR, which regulates data, the AI Act regulates AI systems themselves — how they're built, deployed, and used.

The regulation uses a tiered approach based on risk:

  • Unacceptable risk — banned outright
  • High risk — heavy compliance requirements
  • Limited risk — transparency obligations
  • Minimal risk — no specific requirements

Most businesses fall somewhere between limited and high risk, depending on how they use AI.

Key Dates You Need to Know

The AI Act doesn't arrive all at once. It phases in over three years:

| Date | What happens |
| --- | --- |
| 2 February 2025 | Prohibited AI practices are banned. AI literacy obligations begin. |
| 2 August 2025 | Rules for general-purpose AI models (GPAI) apply. Codes of practice take effect. |
| 2 August 2026 | Full enforcement for high-risk AI systems. All remaining provisions apply. |
| 2 August 2027 | High-risk AI systems that are also regulated products (medical devices, machinery, etc.) must comply. |
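The phase-in schedule is simple enough to track programmatically. A minimal sketch, using the milestone dates from the table above (everything else here is illustrative, not part of the regulation):

```python
from datetime import date

# Phase-in milestones from the table above (Regulation 2024/1689)
MILESTONES = {
    date(2025, 2, 2): "prohibited practices banned; AI literacy obligations begin",
    date(2025, 8, 2): "GPAI model rules apply",
    date(2026, 8, 2): "full enforcement for high-risk AI systems",
    date(2027, 8, 2): "high-risk AI in regulated products must comply",
}

def upcoming(today: date) -> list[str]:
    """Countdown strings for milestones that have not yet passed."""
    return [
        f"{(milestone - today).days} days until: {label}"
        for milestone, label in sorted(MILESTONES.items())
        if milestone >= today
    ]

for line in upcoming(date(2026, 2, 15)):
    print(line)
```

Run against this article's publication date, only the 2026 and 2027 deadlines remain; the first two milestones are already behind us.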

The February 2025 deadline has already passed

If you haven't reviewed your AI practices against the prohibited list, you're already behind.

Does the AI Act Apply to Your Business?

The AI Act applies to:

  1. Providers — anyone who develops or places an AI system on the EU market
  2. Deployers — anyone who uses an AI system in a professional capacity
  3. Importers and distributors — anyone in the AI supply chain

It doesn't matter where your company is headquartered. If the AI system is used in the EU, or its output affects people in the EU, the AI Act applies. This extraterritorial reach is similar to how GDPR works.

Common scenarios where the AI Act applies to SMBs

  • You use AI-powered hiring or CV screening tools
  • You use chatbots for customer service
  • You use AI for credit scoring or financial decisions
  • You deploy AI-generated content for EU audiences
  • You use third-party AI tools (ChatGPT, Copilot, Jasper, Midjourney) in business processes
  • You use automated decision-making that affects individuals

If any of these apply, you have obligations under the AI Act.

The Four Risk Categories Explained

Unacceptable Risk (Banned)

These AI practices are prohibited as of February 2025:

  • Social scoring — evaluating or classifying people based on social behaviour or personal characteristics in ways that lead to detrimental or unfavourable treatment (the ban covers private actors as well as public authorities)
  • Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
  • Manipulation — AI systems that use subliminal techniques to distort behaviour in harmful ways
  • Exploitation of vulnerabilities — AI that targets people based on age, disability, or social/economic circumstances
  • Biometric categorisation based on sensitive characteristics (race, political opinions, sexual orientation)
  • Untargeted facial image scraping from the internet or CCTV to build recognition databases
  • Emotion recognition in workplaces and educational institutions (with narrow exceptions)

If you use any system that falls into these categories, you must stop immediately. There is no transition period — this is already enforceable.

High Risk

High-risk AI systems face the strictest requirements. An AI system is classified as high risk if it falls into one of these areas:

  • Employment and worker management — recruitment, CV screening, performance evaluation, promotion decisions
  • Education and training — admissions, assessment, exam proctoring
  • Essential services — credit scoring, insurance risk assessment, utility access decisions
  • Law enforcement — risk assessment, evidence analysis
  • Migration and border control — visa processing, asylum applications
  • Critical infrastructure — energy, water, transport management systems
  • Safety components — AI used as a safety component in regulated products

For high-risk systems, you must:

  • Implement a risk management system (documented, ongoing)
  • Meet data governance requirements (training data quality, bias testing)
  • Maintain technical documentation (system design, capabilities, limitations)
  • Enable record-keeping and logging (automatic event logs for traceability)
  • Provide transparency to deployers (instructions for use, limitations)
  • Ensure human oversight (ability for humans to intervene, override, or shut down)
  • Meet standards for accuracy, robustness, and cybersecurity

Limited Risk

AI systems with limited risk have transparency obligations only. This includes:

  • Chatbots — users must be informed they're interacting with AI
  • Deepfakes and synthetic content — must be labelled as AI-generated
  • Emotion recognition systems (where not banned) — users must be informed

Minimal Risk

AI systems with minimal risk (spam filters, AI-powered search, recommendation engines) have no specific AI Act obligations, though general EU law still applies.

General-Purpose AI Models (GPAI)

The AI Act creates a separate category for general-purpose AI models — the foundation models behind tools like ChatGPT, Claude, Gemini, and similar.

If you're a provider of a GPAI model, you face transparency and documentation obligations from August 2025. If the model is classified as having "systemic risk" (trained with more than 10^25 FLOPs), additional requirements apply.

If you're a deployer (you use these tools), your obligations are more limited but still real:

  • You must comply with transparency requirements when the AI output is presented to people
  • You must ensure human oversight where decisions affect individuals
  • You need to understand the limitations of the tools you use

Key distinction

Using ChatGPT to draft internal emails is low risk. Using it to generate content published to EU audiences, or to make decisions about customers, employees, or partners, triggers transparency and documentation obligations.

Penalties

The AI Act carries one of the harshest penalty frameworks in EU law. For most companies the cap is the higher of a fixed amount or a share of global annual turnover:

| Violation | Maximum fine |
| --- | --- |
| Prohibited AI practices | EUR 35 million or 7% of global annual turnover |
| High-risk AI non-compliance | EUR 15 million or 3% of global annual turnover |
| Providing incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover |

For SMBs specifically, fines are capped at the lower of these thresholds — but even the lower amounts are existential for a small business.

The regulation also allows for proportionality: regulators can take into account the size of the company and the nature of the violation. But proportionality is not a defence against non-compliance.
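The SMB cap — the lower of the two thresholds, rather than the higher — is easy to see with a quick calculation. The fine figures come from the table above; the example turnover is made up:

```python
# Maximum fines from the table above: (fixed amount in EUR, share of
# global annual turnover). The applicable cap is the higher of the two,
# except for SMBs, where it is the lower.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, turnover_eur: float, is_smb: bool) -> float:
    fixed, share = FINE_TIERS[violation]
    turnover_based = turnover_eur * share
    return min(fixed, turnover_based) if is_smb else max(fixed, turnover_based)

# A hypothetical SMB with EUR 4m turnover: 7% of turnover (EUR 280k)
# is lower than EUR 35m, so that is the cap.
print(max_fine("prohibited_practice", 4_000_000, is_smb=True))  # 280000.0
```

Even EUR 280,000 — the "lower" threshold in this example — would be existential for many small businesses, which is the article's point.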

What You Should Do Now

Immediate (already overdue)

  1. Check the prohibited list. Review your AI usage against the banned practices. If anything matches, stop using it.

  2. AI literacy. Article 4 requires that staff who operate or oversee AI systems have sufficient AI literacy. This isn't a one-off training — it's an ongoing obligation. Start with the people who deploy or manage AI tools.

Before August 2025 (now also overdue)

  1. Inventory your AI systems. List every AI tool your business uses — including third-party SaaS tools with AI features. You can't assess risk if you don't know what you're using.

  2. Classify by risk level. For each AI system, determine whether it falls into high-risk, limited-risk, or minimal-risk categories.
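The inventory-and-classify steps don't need special tooling — a structured list is enough to start. A sketch, assuming the risk categories from the Act; the tools and their classifications below are illustrative examples, since the right category depends on how you deploy a tool, not on the tool itself:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # the tool, including third-party SaaS with AI features
    use_case: str   # what YOUR business uses it for
    risk: str       # "high", "limited", or "minimal"

# Illustrative entries only -- classify each system by its actual use.
inventory = [
    AISystem("CV screening tool", "recruitment shortlisting", "high"),
    AISystem("Customer-service chatbot", "first-line support", "limited"),
    AISystem("Spam filter", "email triage", "minimal"),
]

# The high-risk entries are the ones that need the full compliance
# framework before August 2026.
high_risk = [s.name for s in inventory if s.risk == "high"]
print(high_risk)  # ['CV screening tool']
```

A spreadsheet with the same three columns works just as well; what matters is that the list is complete and each entry has a risk category.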

Before August 2026

  1. Compliance framework for high-risk systems. If you have any high-risk AI systems, start building the required documentation: risk management, data governance, technical documentation, logging, and human oversight procedures.

  2. Transparency measures. Ensure all chatbots, AI-generated content, and automated decisions are properly labelled and disclosed to users.

  3. Vendor assessments. For third-party AI tools, request compliance documentation from your vendors. You're responsible for how you deploy AI, even if you didn't build it.

  4. Governance structure. Designate someone responsible for AI compliance. This doesn't need to be a new hire — but someone needs to own it.

How ComplyOne Helps

ComplyOne's compliance platform assesses your business against the EU AI Act (and six other EU regulations). You get a report that tells you:

  • Whether the AI Act applies to your business
  • Which risk category your AI usage falls into
  • Where your compliance gaps are
  • What actions to take, prioritised by deadline and risk

Find out where you stand

Join the waitlist and be the first to check your AI Act readiness when we launch.


ComplyOne is an AI-powered compliance intelligence platform for European SMBs. We help businesses understand and manage their EU regulatory obligations — GDPR, the AI Act, Data Act, NIS2, Swiss FADP, UK GDPR, and DORA — from one dashboard.

This article is for informational purposes only and does not constitute legal advice. For legal advice, consult a qualified attorney licensed in your jurisdiction.
