Most SaaS companies building AI features are not training their own models. They are calling the OpenAI API, Anthropic's API, Google's Gemini API, or similar. A common assumption is that using a third-party API transfers compliance responsibility to the API provider. This is wrong.
If you build an application that uses a third-party AI API in a high-risk use case, you are an AI deployer under the EU AI Act — and you have your own obligations.
The API Model: Who Is Responsible for What
The EU AI Act creates distinct categories for providers and deployers:
Provider (e.g., OpenAI, Anthropic, Google): The company that developed, trained, and makes available the AI model. For general-purpose AI (GPAI) models such as GPT-4, Claude, and Gemini, the provider carries the GPAI model obligations: training data documentation, transparency summaries, and systemic risk assessment.
Deployer (you, the API user): The company that integrates the AI model into a product or service and deploys it to end users. You take responsibility for the deployment — the use case, the configuration, the context, and the impact on end users.
The division of responsibility: The provider is responsible for the model. The deployer is responsible for the application. If you use GPT-4 to build an AI-powered CV screening tool, OpenAI's compliance with GPAI obligations does not make your CV screening tool compliant with high-risk AI requirements. Those obligations are yours.
When Does Using an API Trigger High-Risk Obligations?
The risk classification is determined by the use case, not the underlying technology. If you call an AI API to:
- Screen job applicants or rank candidates → High-risk (Annex III, point 4)
- Make or assist credit decisions → High-risk (Annex III, point 5(b))
- Support clinical decisions → High-risk (typically via the medical-device route under Article 6(1) and Annex I, rather than Annex III)
- Determine access to essential public benefits or services → High-risk (Annex III, point 5(a))
- Assist in law enforcement or judicial decisions → High-risk (Annex III, points 6 and 8)
...then you are deploying a high-risk AI system, regardless of whether you fine-tuned the model or simply prompt-engineered a foundation model.
The technical implementation does not change the regulatory category. A prompt wrapper around GPT-4 that screens CVs is still a high-risk AI system.
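To make the point concrete, the classification can live in code next to each AI feature. A minimal sketch, assuming a hypothetical internal registry; the feature names and helper function are illustrative, not a library API, and the Annex III points mirror the list above:

```python
# Hypothetical internal registry mapping AI features to EU AI Act risk
# classes. The feature names and helper are illustrative; the Annex III
# points mirror the list above.

ANNEX_III_USE_CASES = {
    "cv_screening": "Annex III, point 4 (employment)",
    "credit_scoring": "Annex III, point 5(b) (creditworthiness)",
    "benefits_eligibility": "Annex III, point 5(a) (essential public services)",
    "law_enforcement_support": "Annex III, point 6 (law enforcement)",
}

def classify_feature(use_case: str) -> str:
    """Risk class follows the use case, not the underlying model or API."""
    if use_case in ANNEX_III_USE_CASES:
        return f"high-risk ({ANNEX_III_USE_CASES[use_case]})"
    # Chatbots, content generation, and similar features usually fall
    # under the limited transparency obligations discussed below.
    return "limited transparency (verify against Annex III)"

# The same GPT-4 API call lands in different categories by use case:
print(classify_feature("cv_screening"))     # high-risk (Annex III, point 4 ...)
print(classify_feature("support_chatbot"))  # limited transparency ...
```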
Deployer Obligations for High-Risk AI Systems (Article 26)
If you are a deployer of a high-risk AI system, you must:
1. Use the System in Accordance with Instructions
The AI provider must give you instructions for appropriate use, and you must follow them. If you use the system outside its intended purpose (e.g., using a general LLM for clinical decision support when the provider has not validated it for that use), you can be reclassified as a provider under Article 25 and inherit the provider's compliance obligations.
2. Monitor the System in Production
You must monitor the system's operation and detect problems. This includes:
- Logging AI outputs that influence decisions
- Tracking performance over time
- Detecting and investigating unexpected behaviour
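In practice this means every model output that feeds a decision goes through a wrapper that records it. A minimal sketch, assuming the official OpenAI Python SDK (v1 style) and a simple JSONL file as the log sink; the prompt, model name, and record fields are illustrative, not a prescribed format:

```python
import json
import time
import uuid

from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_candidate(cv_text: str, job_id: str) -> str:
    """Call the model and log the output that will influence a decision."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarise this CV against the role."},
            {"role": "user", "content": cv_text},
        ],
    )
    output = response.choices[0].message.content

    # Append-only decision log: when, which use case, which exact model
    # version, and the output itself, so decisions can be audited later.
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "use_case": "cv_screening",  # maps to Annex III, point 4
        "job_id": job_id,
        "model": response.model,     # exact model version returned by the API
        "output": output,
    }
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

    return output
```

Logging the model version returned by the API, not just the one you requested, matters for tracking performance over time: providers update models behind stable aliases.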
3. Inform and Protect Affected Individuals
People affected by high-risk AI decisions must be informed that AI is being used. For automated decisions with legal or similarly significant effects, GDPR Article 22 on automated individual decision-making overlaps with this obligation.
4. Report Serious Incidents
If the AI system causes or contributes to a serious incident, you must inform the provider and report it to the relevant market surveillance authority.
5. Suspend Use if Risk Detected
If you identify that the AI system presents a risk to health, safety, or fundamental rights in operation, you must suspend its use and notify the provider (and, where applicable, the market surveillance authority).
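Suspension is far easier if every high-risk call already passes through a switch you control. A minimal sketch, assuming a hypothetical in-memory feature-flag store; in production this would be a shared config or feature-flag service:

```python
# Hypothetical kill switch: every high-risk AI call checks a flag that
# operations staff can flip without a redeploy. An in-memory dict stands
# in for a shared feature-flag service here.

FEATURE_FLAGS = {"cv_screening_enabled": True}

class AIFeatureSuspended(Exception):
    """Raised when a high-risk AI feature has been suspended."""

def call_high_risk_feature(flag: str, call):
    """Run an AI call only if its feature flag is still enabled."""
    if not FEATURE_FLAGS.get(flag, False):
        # Fail closed: fall back to a human-only process.
        raise AIFeatureSuspended(f"{flag} is suspended pending investigation")
    return call()

# On identifying a risk in operation: flip the flag so the system makes
# no further decisions, then notify the provider (and, where required,
# the market surveillance authority) through your incident process.
FEATURE_FLAGS["cv_screening_enabled"] = False
```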
What You Should Ask Your API Provider
Before deploying a foundation model API in any high-risk use case, ask:
Is this model validated for my use case? Most foundation model providers do not validate their models for specific high-risk applications. If they have not validated it for CV screening, clinical decision support, or credit assessment, you are the responsible party.
What technical documentation is available? GPAI providers must publish training data summaries and technical documentation. Review these before deployment in high-risk contexts.
What are the usage restrictions? Most providers prohibit certain high-risk uses in their terms of service. If your use case is prohibited, deployment exposes you to both contractual and regulatory risk.
What logging and audit capabilities are available? For high-risk deployments, you need audit logs. Confirm the API provides the logging fidelity you need.
Limited Transparency Applications (Not High-Risk)
Many API-based applications are not high-risk. Chatbots, content generation, customer service, internal productivity tools — these typically fall into the limited transparency category.
For limited transparency applications, your obligations when using AI APIs are:
- Disclosure: Users must know they are interacting with an AI system
- No deceptive impersonation: The AI must not claim to be human, including when directly asked
- Synthetic content labelling: AI-generated images, video, and audio must be labelled as such (for certain content categories)
These are material obligations but significantly lighter than the full high-risk stack.
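For chat-style products, disclosure is more reliable when enforced at the application layer than when left to the model's own behaviour. A minimal sketch; the wording, field names, and helpers are illustrative, not a prescribed format:

```python
# Application-layer transparency: the user sees the AI disclosure before
# any model output, and generated media carries a machine-readable
# synthetic-content marker. Wording and field names are illustrative.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_chat_session() -> list[dict]:
    """Open every session with the disclosure as the first visible message."""
    return [{"role": "assistant", "content": AI_DISCLOSURE}]

def label_generated_image(image_bytes: bytes) -> dict:
    """Attach a synthetic-content label to AI-generated media."""
    return {
        "content": image_bytes,
        "metadata": {"synthetic": True, "generator": "ai"},
    }
```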
Practical Checklist for API-Based AI Products
- Classify each AI feature against Annex III — determine if high-risk applies
- Review your API provider's terms of service for use restrictions
- Request and review the provider's GPAI technical documentation and training data summary
- Implement logging for AI outputs that influence decisions
- Add user-facing disclosure where required (chatbots, AI-generated content)
- If high-risk: prepare your own technical documentation covering your deployment
- If high-risk: implement human oversight mechanisms
- If high-risk: register in the EU database where your role requires it (providers, and deployers that are public bodies) before 2 August 2026, when the Annex III obligations apply
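One way to keep this checklist honest is to encode it per feature and gate releases on it. A sketch of a hypothetical compliance manifest; all field names are illustrative, not a standard schema:

```python
# Hypothetical per-feature compliance manifest, reviewed at release time
# so no AI feature ships without a classification. Field names are
# illustrative.

COMPLIANCE_MANIFEST = {
    "cv_screening": {
        "risk_class": "high-risk (Annex III, point 4)",
        "provider_terms_reviewed": True,
        "gpai_documentation_reviewed": True,
        "decision_logging": True,
        "human_oversight": True,
        "eu_database_registration": "pending",
    },
    "support_chatbot": {
        "risk_class": "limited transparency",
        "user_disclosure": True,
        "decision_logging": False,  # outputs do not feed decisions
    },
}

def release_gate(feature: str) -> bool:
    """Block release of any AI feature that has never been classified."""
    entry = COMPLIANCE_MANIFEST.get(feature, {})
    return bool(entry.get("risk_class"))
```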