The EU AI Act introduces a dedicated regulatory framework for General Purpose AI (GPAI) models: the large foundation models underpinning systems such as ChatGPT, Claude, Gemini, and Llama. This is a significant addition. Earlier drafts of the Act regulated only specific AI applications; the final text regulates the foundation model layer as well.
This matters for two groups: companies that develop or release GPAI models, and companies that build products on top of them.
What Is a GPAI Model?
The AI Act defines a GPAI model as an AI model "trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks."
This covers:
- large language models (LLMs)
- multimodal foundation models (text + image)
- large code generation models
- large-scale image or video generation models

It does not cover task-specific models trained for a single narrow purpose, such as:
- a fraud detection model
- a document classifier
- a speech-to-text system used only for transcription
Two Tiers of GPAI Obligation
The Act distinguishes between standard GPAI models and systemic-risk GPAI models.
Tier 1: All GPAI Models
Providers of any GPAI model placed on the EU market must meet obligations in three areas:
Technical documentation (Article 53)
- Maintain documentation on model architecture, training methodology, and capabilities
- Document known limitations and failure modes
- Keep records of training data sources and data governance measures
Transparency for downstream developers
- Publish a summary of training data used — sufficient for downstream developers to understand what the model was trained on
- Provide instructions for safe and compliant downstream use
- Make technical documentation available to the AI Office and national authorities on request
Copyright compliance
- Must adopt a policy to comply with EU copyright law, including honouring machine-readable opt-outs from text and data mining
- The published training-content summary (see above) must be detailed enough for rights-holders to identify whether their content was included
Open-source models have reduced obligations: providers that release a model under a free and open-source licence, with weights and architecture information publicly available, are exempt from most of the documentation requirements unless the model reaches the systemic risk threshold. The copyright policy and the public training-content summary still apply.
Tier 2: Systemic-Risk GPAI Models
A GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 FLOPs (floating-point operations), the Act's current proxy for frontier model scale; the Commission can also designate models below that threshold. As of 2026, this covers: GPT-4 and successors, Gemini Ultra, Claude 3 Opus and above, and comparable frontier models.
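To get a feel for where the threshold sits, you can approximate training compute with the common 6 × parameters × tokens heuristic from the scaling-laws literature. This is a back-of-the-envelope sketch, not the Act's official measurement methodology, and the model sizes below are illustrative:

```python
# Rough estimate of training compute against the AI Act's 10^25 FLOP
# presumption threshold, using the 6 * N * D heuristic (FLOPs ~= 6 x
# parameters x training tokens). Illustrative only; not the Act's
# official measurement methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

for name, params, tokens in [
    ("70B model, 15T tokens", 70e9, 15e12),
    ("400B model, 15T tokens", 400e9, 15e12),
]:
    flops = estimate_training_flops(params, tokens)
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {presumed}")
```

On this heuristic, a 70B-parameter model trained on 15 trillion tokens lands under the threshold (~6.3e24 FLOPs), while a 400B-parameter model on the same data crosses it (~3.6e25 FLOPs).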
Additional obligations for systemic-risk GPAI models:
Adversarial testing and red-teaming
- Must conduct adversarial testing before and after release
- Must include testing for systemic risks: generating instructions for weapons of mass destruction, enabling large-scale cyberattacks, and mass manipulation of public opinion
Incident reporting
- Must report serious incidents to the EU AI Office without undue delay
- Incidents include: cases where the model contributed to physical harm, large-scale discrimination, or infrastructure attacks
Cybersecurity measures
- Must implement cybersecurity measures protecting the model from adversarial attacks and misuse
Model evaluation
- Must submit to evaluations commissioned by the EU AI Office
What This Means for Companies Building on GPAI Models
If you are a SaaS company building on GPT-4, Claude, Gemini, Llama, or any other GPAI model, you are a downstream deployer (in the Act's own terminology, usually a "downstream provider" of an AI system built on the model). The GPAI provider's compliance obligations do not replace your own.
Your obligations as a downstream deployer (a minimal tracking sketch follows this list):
- Conduct your own risk assessment for your specific application (is your use case high-risk?)
- Implement controls appropriate to your deployment context
- Comply with transparency obligations for your end users
- Do not use the GPAI model in ways prohibited by the Act
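One way to make the list above operational is to track each item per feature in a structured record. A minimal sketch in Python; the field names and the Annex III / Article 5 screening steps are my framing, not wording from the Act:

```python
# Hypothetical sketch: tracking the downstream checklist above per
# feature/release. Field names are illustrative, not terms of the Act.
from dataclasses import dataclass, field

@dataclass
class DownstreamComplianceCheck:
    feature: str
    high_risk_use_case: bool               # outcome of Annex III screening
    controls_documented: bool = False      # deployment-context controls in place
    user_transparency_done: bool = False   # end users informed they interact with AI
    prohibited_use_reviewed: bool = False  # checked against Article 5 prohibitions
    notes: list = field(default_factory=list)

    def ready_to_ship(self) -> bool:
        """All checks must pass before release, regardless of risk tier."""
        return all([self.controls_documented,
                    self.user_transparency_done,
                    self.prohibited_use_reviewed])

check = DownstreamComplianceCheck(
    feature="resume-screening assistant",
    high_risk_use_case=True,  # employment use cases are Annex III high-risk
)
print(check.ready_to_ship())  # False until each control is signed off
```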
What the GPAI provider's compliance does for you:
- Training data summaries help you understand potential biases in the base model
- Technical documentation helps you assess whether the model is appropriate for your use case
- Safe use instructions give you a baseline for responsible deployment
The key point: API access to a compliant GPAI model does not make your downstream application automatically compliant. You must assess your own deployment against the AI Act's requirements.
Open-Source GPAI: A Special Case
The AI Act gives reduced obligations to open-source GPAI providers (e.g., Meta's Llama, Mistral AI's open models) unless they reach the systemic risk threshold. The rationale: open models are distributed and cannot be centrally controlled, so the compliance burden falls more on deployers.
If you are building on open-source GPAI:
- You take on more responsibility for downstream compliance
- The provider is exempt from the detailed technical documentation and downstream-use instructions that closed providers must supply, so you will have less to work from (the public training-content summary is still required even for open models)
- You must conduct your own assessment of the model's known limitations and biases; one practical starting point is sketched below
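As a first pass at that assessment, you can pull the model card that ships with an open-weight release and scan it for disclosed limitations. A sketch assuming the huggingface_hub library and a Hugging Face-hosted model; the repo id is just an example:

```python
# Fetch and scan a model card for limitation/bias disclosures.
# Assumes: pip install huggingface_hub; the repo id below is illustrative.
from huggingface_hub import ModelCard

card = ModelCard.load("mistralai/Mistral-7B-v0.1")

# Structured metadata from the card header (license, tags, ...)
print("license:", card.data.to_dict().get("license"))

# Free-form markdown body: check which assessment-relevant topics appear
for keyword in ("limitation", "bias", "intended use", "out-of-scope"):
    print(f"mentions '{keyword}':", keyword in card.text.lower())
```

A model card is not a compliance document, but it is usually the best available record of what the provider knows about the model's failure modes.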
Timeline for GPAI Requirements
GPAI model obligations under the AI Act apply from August 2025, one year ahead of the full high-risk AI system requirements, so providers of GPAI models are already under compliance pressure.
For most SaaS companies building on GPAI APIs, the practical implication is:
- Your GPAI model providers will publish training data summaries and technical documentation; review these against your use case
- Your own obligations (for high-risk downstream applications) enter into force in August 2026
- Begin your downstream risk assessment now if you are building in high-risk categories