
AI Transparency Requirements Under the EU AI Act

7 min read · Updated 2 May 2026

Transparency is the one AI Act obligation that applies to almost every company using AI — regardless of risk tier. Even if your AI system is classified as minimal risk, if it interacts with users or generates content they might mistake for human-created, you have disclosure obligations.

This article explains exactly what transparency the law requires, when it applies, and what you need to implement before August 2026.


Two Levels of Transparency Obligation

The AI Act creates transparency requirements at two levels:

1. System-level transparency (for high-risk AI)
High-risk AI system providers must document and disclose their system's capabilities, limitations, and purpose to deployers (the businesses using their system). This is covered by Article 13.

2. User-level transparency (for all interactive and generative AI)
Any AI system that interacts with natural persons, or generates content a person might mistake for real, must disclose its AI nature. This applies regardless of risk tier. Covered by Article 50.

Most SaaS companies need to focus on both.


Article 50: User-Facing Disclosure Requirements

Article 50 creates disclosure obligations for four types of AI:

1. AI Chatbots and Virtual Assistants

If your product includes any conversational AI — a chatbot, virtual support agent, AI assistant — users must be informed at the point of first interaction that they are interacting with an AI system.

This must be:

  • Clear and unambiguous
  • Provided before or at the start of the interaction (not buried in terms)
  • In plain language appropriate to the user

What this means in practice: An "I'm an AI assistant" disclosure at the start of a chat conversation. A label in the chat window. A visible indicator that the agent is AI-powered. These are all acceptable implementations.

What is not acceptable: disclosures buried in the privacy policy, in the terms of service, or disclosed only after a user has already interacted assuming they were talking to a human.
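The exact wording and placement of the disclosure is a product decision, but the ordering constraint itself — disclosure before any assistant output — can be enforced in code. A minimal Python sketch; the class and field names here are invented for illustration, not taken from any framework:

```python
# Sketch: guarantee the AI disclosure is shown at the start of every
# chat session, before any assistant message can be emitted.

AI_DISCLOSURE = "You're chatting with an AI assistant, not a human agent."

class ChatSession:
    def __init__(self):
        self.messages = []
        self._disclosed = False

    def start(self):
        # Disclosure at the point of first interaction,
        # not buried in terms of service.
        if not self._disclosed:
            self.messages.append({"role": "system_notice", "text": AI_DISCLOSURE})
            self._disclosed = True

    def add_assistant_message(self, text: str):
        # Guard: refuse to emit assistant output before the disclosure.
        if not self._disclosed:
            raise RuntimeError("AI disclosure must precede any assistant message")
        self.messages.append({"role": "assistant", "text": text})

session = ChatSession()
session.start()
session.add_assistant_message("Hi! How can I help today?")
print(session.messages[0]["text"])  # the disclosure is always message zero
```

The guard in `add_assistant_message` turns the compliance requirement into an invariant the codebase can test, rather than a convention the UI team has to remember.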

2. AI-Generated Text, Images, Audio, and Video

If your product generates content — written copy, images, audio, or video — that users or third parties might reasonably believe was human-created, it must be labelled as AI-generated.

In scope:

  • AI writing assistants producing customer-facing content
  • AI image generators
  • AI voice synthesis
  • Video generation or face-swapping tools
  • AI-generated reports, documents, or analyses presented as authoritative outputs

Key exception: where content is clearly artistic, satirical, or identified as fiction, and a prominent label would interfere with that purpose, the obligation is reduced. The AI-generated nature must still be disclosed by some means, but in a way that does not hamper the presentation of the work.
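One way to make the label machine-readable is to carry an explicit flag alongside the generated content itself, so that downstream systems cannot separate the content from its disclosure. A minimal Python sketch; the payload field names (`ai_generated`, `generator`) are invented for illustration and are not drawn from any standard:

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, model_name: str) -> str:
    """Wrap generated text in a payload carrying an explicit
    machine-readable AI-generated marker next to the content."""
    payload = {
        "content": content,
        "ai_generated": True,                 # explicit disclosure flag
        "generator": model_name,              # hypothetical field name
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

labelled = label_ai_output("Quarterly summary draft ...", "acme-writer-v2")
assert json.loads(labelled)["ai_generated"] is True
```

For images, audio, and video, provenance standards such as C2PA Content Credentials exist for the same purpose; the point of the sketch is simply that the label should travel with the output, not live only in the UI that displayed it.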

3. Emotion Recognition Systems

If your product detects or infers a person's emotional state, they must be informed that this is happening before the assessment takes place.

In scope: Customer sentiment analysis tools that assess individual emotional state, employee wellbeing monitoring tools, contact centre AI flagging caller emotional state.

Note: Using emotion recognition in workplaces and educational institutions is prohibited unless it falls within narrow safety or research exceptions.

4. Deepfakes and Synthetic Media

AI systems generating realistic synthetic images, audio, or video of real people must disclose that the content is artificially generated. This applies even in clearly creative contexts — the disclosure must exist, though it can be presented appropriately (e.g., in metadata, watermarking, or visible labelling).


Article 13: Transparency to Deployers (High-Risk AI)

If you provide a high-risk AI system, Article 13 requires that deployers — the businesses buying and using your system — receive sufficient information to understand and use it responsibly. This is embedded in your technical documentation and must be provided before or at the point of deployment.

Required disclosures to deployers include:

  • System identity: the provider's name and contact details, and the system's name
  • Intended purpose: the specific tasks the system is designed for
  • Performance levels: accuracy metrics, confidence intervals, known limitations
  • Risk of errors: known failure modes, edge cases, circumstances where accuracy drops
  • Demographic differences: whether and how performance varies by demographic group
  • Human oversight: what oversight is expected from the deployer
  • Expected lifespan: maintenance requirements and how long the model is supported
  • Data requirements: the input data quality needed for reliable output
This documentation should accompany your product — it is not optional for high-risk providers.
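The elements above map naturally onto a structured documentation record that can be checked for completeness before deployer docs ship. A sketch, assuming one free-text field per Article 13 element; the field names are ours, not the regulation's:

```python
from dataclasses import dataclass, fields

@dataclass
class Article13Disclosure:
    # One field per Article 13 element listed above (names illustrative).
    system_identity: str
    intended_purpose: str
    performance_levels: str
    risk_of_errors: str
    demographic_differences: str
    human_oversight: str
    expected_lifespan: str
    data_requirements: str

    def missing_elements(self) -> list[str]:
        # Flag any element left blank before releasing deployer documentation.
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

doc = Article13Disclosure(
    system_identity="Acme Ltd, compliance@acme.example, Acme Screener v3",
    intended_purpose="Pre-screening CVs against listed role requirements",
    performance_levels="91% agreement with human reviewers on internal benchmark",
    risk_of_errors="Accuracy degrades on non-standard CV formats",
    demographic_differences="",  # left blank: flagged below
    human_oversight="A recruiter must review every automated rejection",
    expected_lifespan="Model supported and monitored for 24 months",
    data_requirements="UTF-8 text CVs; scanned documents must be OCR'd first",
)
print(doc.missing_elements())  # ['demographic_differences']
```

A completeness check like this can run in CI against the documentation source, so a product release with an empty disclosure field fails before it reaches customers.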


What Needs to Be in Your Product (Implementation Checklist)

For any AI chatbot or conversational feature:

  • Clear "this is an AI" disclosure at the start of every conversation
  • Disclosure is visible, not hidden in settings or terms
  • Human agent escalation path exists and is accessible

For any AI-generated content your users share externally:

  • Content is labelled as AI-generated (in the UI, in metadata, or via watermarking)
  • Users are made aware the output is AI-generated before they publish or share it
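Those two checklist items can be expressed as a single pre-publish gate: content is shareable only once it carries the label and the user has acknowledged the AI origin. A minimal sketch with illustrative field names, not drawn from any specific platform:

```python
def can_publish(item: dict) -> bool:
    """Gate sharing on both a machine-readable label and explicit
    user awareness. Field names are hypothetical."""
    return bool(item.get("ai_generated_label")) and bool(item.get("user_acknowledged_ai"))

draft = {"ai_generated_label": True, "user_acknowledged_ai": False}
assert not can_publish(draft)   # labelled, but user not yet informed

draft["user_acknowledged_ai"] = True
assert can_publish(draft)       # both conditions met
```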

For any emotion or sentiment analysis of individuals:

  • Users are informed before assessment takes place
  • Data used for analysis is documented and lawfully processed under GDPR

For high-risk AI systems:

  • Technical documentation includes all Article 13 elements
  • Deployer-facing documentation is complete and provided at onboarding
  • User interface does not discourage human override of AI outputs

Common Implementation Mistakes

Disclosing in the terms of service only
Terms of service disclosures do not satisfy the Article 50 requirement for chatbots or interactive AI. The disclosure must appear at or before the point of interaction.

Labelling generative AI output inconsistently
If some outputs are labelled and others are not, you create both compliance gaps and user confusion. Implement labelling systematically, not on a case-by-case basis.

Not considering your customers' obligations
If you build a platform and your customers deploy AI chatbots through it, the customer (as deployer) carries the Article 50 obligation. But if your platform design makes disclosures difficult to implement, you create compliance risk across your customer base. Build disclosure support into the platform.

Treating transparency as a UI afterthought
Transparency obligations affect product design, not just legal boilerplate. Engineering and product teams need to understand these requirements, not just the legal team.


When Transparency Obligations Apply (Timeline)

Article 50 transparency obligations for user-facing AI apply from 2 August 2026. Article 13 obligations for high-risk providers apply on the same date.

However, implementing these changes now has no downside. Many companies are already adding AI disclosures ahead of the deadline as a trust measure. Being transparent about AI use is increasingly expected by enterprise customers regardless of regulatory timing.


Overlap with GDPR

If your AI system processes personal data — and most AI chatbots and analytical tools do — transparency under the AI Act overlaps with GDPR transparency requirements (Articles 13 and 14 of GDPR). GDPR requires you to inform data subjects about automated processing and profiling. In practice, the AI Act disclosure and GDPR lawful basis disclosure can be implemented together in a single notification, but each framework has distinct requirements and both must be satisfied.

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →