The EU AI Act is law. The UK has deliberately chosen a different path. For companies operating in both markets — which includes most EU SaaS companies with any UK presence, and UK companies selling into Europe — the regulatory divergence creates real compliance planning challenges.
This article sets out the key differences between the EU AI Act and the UK's AI regulatory approach.
The Fundamental Difference: Regulation vs Principles
EU AI Act: A binding horizontal regulation. Specific risk tiers. Specific obligations for high-risk systems. Specific prohibitions. Enforced by national competent authorities with defined fines. Applies on a phased timeline: prohibitions from February 2025, most high-risk obligations from August 2026, with the remainder phasing in through 2027.
UK approach: No binding AI regulation as of 2026. The UK government has adopted a "principles-based" and "sector-led" approach. Existing regulators (the ICO, FCA, Ofcom, CMA) apply their existing powers to AI within their remit, guided by cross-cutting AI principles but without a new primary AI law.
This means:
- A company operating only in the EU faces specific, codified AI Act requirements
- A company operating only in the UK faces softer, sector-specific expectations with less legal certainty
- A company in both markets must satisfy EU requirements for EU operations and navigate UK guidance for UK operations
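The three scenarios above amount to a simple lookup. Purely as an illustration (the framework labels below are informal paraphrases, not statutory names, and `applicable_frameworks` is a hypothetical helper):

```python
# Illustrative only: which compliance frameworks govern AI per market.
# Labels are informal paraphrases, not statutory citations.
FRAMEWORKS = {
    "EU": [
        "EU AI Act (binding horizontal regulation)",
        "EU GDPR (incl. Article 22, DPIAs)",
    ],
    "UK": [
        "Sector regulator guidance (ICO, FCA, Ofcom, CMA)",
        "UK GDPR (incl. Article 22, DPIAs)",
        "Equality Act 2010",
    ],
}

def applicable_frameworks(markets: list[str]) -> dict[str, list[str]]:
    """A company in several markets must satisfy the union of the lists."""
    return {market: FRAMEWORKS[market] for market in markets}

print(applicable_frameworks(["EU", "UK"]))
```

The point of the table shape: EU obligations come from one horizontal instrument plus data protection law, while UK obligations are an aggregation of sectoral regimes.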
The UK's AI Framework
The UK's approach has several components:
The 2023 AI White Paper: Established five "principles" that existing regulators should apply to AI: safety and security; transparency and explainability; fairness; accountability and governance; contestability and redress. These principles are not law — they are guidance for regulators.
The AI Safety Institute (renamed the AI Security Institute in 2025, still AISI): Focused on frontier AI safety and security research and evaluation, not on general business AI compliance. Primarily engaged with foundation model providers.
Sector-specific guidance: The ICO has published AI guidance for data protection compliance. The FCA has published guidance on AI in financial services. The Medicines and Healthcare products Regulatory Agency (MHRA) has guidance for AI as medical devices. Each sector has its own rules — not unified.
The AI Opportunities Action Plan (2025): The government signalled it wants the UK to be an "AI-friendly" environment, with regulation designed to enable rather than constrain. No new primary AI legislation was announced.
Key Differences in Practice
Risk Classification
EU: Prescriptive. Annex III lists specific high-risk AI use cases. If your product falls within a listed category, it is treated as high-risk regardless of its actual risk profile, subject only to the narrow Article 6(3) derogation for systems that do not pose a significant risk.
UK: No equivalent. Risk is assessed by the relevant sector regulator using principles. A UK HR tech company with an AI CV screener faces ICO guidance on data protection and Equality and Human Rights Commission guidance on discrimination, but no AI Act-equivalent classification requirement.
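The classification difference can be sketched as follows. This is a simplified model only: the category names are paraphrased from Annex III rather than quoted from the legal text, and the Article 6(3) derogation is omitted.

```python
# Simplified, partial paraphrase of Annex III high-risk areas.
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment_and_worker_management",  # e.g. an AI CV screener
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "administration_of_justice",
}

def eu_classification(use_case_area: str) -> str:
    """EU: a listed Annex III area -> high-risk, by category membership."""
    return "high-risk" if use_case_area in ANNEX_III_AREAS else "not high-risk"

def uk_classification(use_case_area: str) -> str:
    """UK: no statutory tier; the relevant sector regulator assesses."""
    return "assessed by sector regulator against cross-cutting principles"

print(eu_classification("employment_and_worker_management"))  # high-risk
print(uk_classification("employment_and_worker_management"))
```

The contrast is the mechanism: the EU test is category membership in a fixed list, while the UK outcome depends on which regulator has remit and how it applies the principles.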
Documentation Requirements
EU: Detailed Annex IV requirements for technical documentation. Mandatory for all high-risk systems.
UK: No equivalent requirement. ICO guidance recommends documenting AI systems for data protection purposes (as part of DPIA or Article 30 obligations), but this is narrower than AI Act technical documentation.
Conformity Assessment and Registration
EU: Mandatory conformity assessment before deployment of high-risk systems. EU AI database registration required.
UK: No equivalent. No registration requirement. No pre-deployment conformity check.
Prohibited Practices
EU: Article 5 creates binding prohibitions (social scoring, subliminal manipulation, real-time remote biometric identification in publicly accessible spaces, subject to narrow exceptions), applicable from February 2025.
UK: No equivalent statutory prohibition. The ICO and other regulators can take action on specific practices under existing powers (data protection, equality law), but there is no AI-specific prohibition list.
Fines
EU: Up to €35 million / 7% of global annual turnover for prohibited AI practices; up to €15 million / 3% for most other violations, including breaches of the high-risk obligations; up to €7.5 million / 1% for supplying incorrect information to authorities.
UK: Fines are sector-specific. The ICO can fine up to £17.5 million / 4% of global annual turnover for UK GDPR/data protection violations. The FCA has its own enforcement powers. There is no AI-specific fine framework.
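For large undertakings, both regimes express these caps as the higher of a fixed sum or a share of global annual turnover. A minimal arithmetic sketch, where `max_fine` is an illustrative helper rather than any official formula; figures are in each regime's own currency, and lower caps apply to SMEs under the EU Act:

```python
def max_fine(fixed_cap: float, turnover_share: float, global_turnover: float) -> float:
    """Cap for large undertakings: the higher of a fixed sum or a
    percentage of global annual turnover (illustrative helper)."""
    return max(fixed_cap, turnover_share * global_turnover)

# A hypothetical company with 2 billion in global annual turnover:
turnover = 2_000_000_000

eu_prohibited = max_fine(35_000_000, 0.07, turnover)  # EUR: 7% exceeds the EUR 35m floor
uk_ico = max_fine(17_500_000, 0.04, turnover)         # GBP: 4% exceeds the GBP 17.5m floor

print(f"EU prohibited-practice cap: {eu_prohibited:,.0f}")  # 140,000,000
print(f"UK ICO cap: {uk_ico:,.0f}")                         # 80,000,000
```

Note that above roughly €500m turnover the percentage term dominates the EU prohibited-practice cap, which is why turnover, not the headline fixed sum, drives exposure for large companies.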
Planning for Both Markets
For companies operating in the EU and UK:
Build to EU AI Act standard for everything. The EU requirements are more demanding and more specific. A product that meets EU AI Act requirements for transparency, documentation, and human oversight will also satisfy UK sector guidance in most cases. Build once for the higher standard.
Do not assume UK rules will remain static. The UK has not passed AI law yet, but the government has not ruled it out. The EU Act may create competitive pressure to harmonise. Build your compliance infrastructure to be extensible.
Separate EU and UK risk assessments. Your EU high-risk AI classification may not translate directly to the UK. A UK-only HR tool still faces ICO scrutiny under UK GDPR and the Equality Act — but the compliance pathway is different from the EU AI Act route.
For UK customers deploying EU-sourced AI: If a UK company buys an AI product from an EU vendor, the EU vendor has built to AI Act standards. The UK deployer's obligations are determined by UK sector regulators, not the EU Act. Both parties should understand this division.
The Data Protection Overlap
Both EU and UK GDPR restrict solely automated decision-making with legal or similarly significant effects (Article 22) and require data protection impact assessments for high-risk processing (Article 35). These obligations apply regardless of AI Act status, so companies in both markets share this baseline whichever AI regulatory framework they are navigating.