
AI Act Article 13: Transparency Obligations Explained

5 min read · Updated 2 May 2026

Article 13 of the EU AI Act is the core transparency provision for high-risk AI systems. It sets out what providers and deployers must disclose to the people who use high-risk AI, and to the people affected by its decisions. For many AI products, meeting Article 13 requires rethinking how outputs are presented and documented.


What Article 13 Requires

Article 13 is titled "Transparency and provision of information to deployers." Its requirements operate on two levels:

Level 1: Transparency to Deployers (Businesses Using High-Risk AI)

If you are a high-risk AI system provider selling to enterprise customers, you must provide your customers with documentation that enables them to understand and use the system correctly.

This documentation must cover the following (a machine-readable sketch appears after the list):

  • Identity and contact details of the provider
  • Characteristics, capabilities, and limitations of the system — what it can and cannot do, in what conditions it performs well, and in what conditions it fails
  • Performance for the specific persons or groups on which the system is intended to be used, including known accuracy variations
  • Hardware and software requirements for appropriate use
  • Description of inputs — what data the system is designed to receive
  • Description of training data — the types and sources of training data used (not the data itself, but what categories)
  • System changes — logging of modifications after placing on the market
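
A minimal sketch of how a provider might capture these fields in a machine-readable structure. The dataclass and field names are illustrative assumptions; the Act mandates the content, not any particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class Article13SystemCard:
    """Illustrative container for Article 13 provider documentation.
    The Act mandates the content, not any particular schema."""
    provider_identity: str                  # name and contact details of the provider
    capabilities: str                       # what the system can do, and under what conditions
    limitations: str                        # what it cannot do, and where it fails
    performance_by_group: dict[str, float]  # e.g. accuracy per demographic subgroup
    hardware_software_requirements: str     # environment needed for appropriate use
    input_specification: str                # what data the system is designed to receive
    training_data_description: str          # categories and sources, not the data itself
    change_log: list[str] = field(default_factory=list)  # modifications after market placement
```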

Level 2: Transparency to Affected Individuals (End Users)

Deployers of high-risk AI must, in turn, provide transparency to the individuals whose decisions are affected by the system. This is not Article 13 directly — it is Article 26 (deployer obligations) — but it flows from the information the provider supplies in Article 13 documentation.

Affected individuals must be able to understand the following (an illustrative notice payload appears after the list):

  • That an AI system was used in a decision affecting them
  • The general logic and factors behind the AI-assisted decision
  • How to challenge or seek human review of the decision
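
As an illustration, a deployer-facing notice to the affected person could carry this information in a payload like the following. All field names and values here are invented for the example:

```python
# Illustrative notice a deployer might surface to an affected person.
decision_notice = {
    "ai_system_used": True,                 # discloses that AI was involved
    "decision": "loan application declined",
    "main_factors": [                       # general logic and factors
        "debt-to-income ratio",
        "short credit history",
    ],
    "how_to_challenge": (                   # route to human review
        "Request a human review by replying to this notice "
        "or contacting the lender's review desk."
    ),
}
```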

What "Transparency" Actually Means for High-Risk AI

Article 13 does not require exposing model weights, training data, or proprietary algorithms. It requires functional transparency — enough information for users and deployers to:

  • Understand what the system does
  • Assess whether it is appropriate for their context
  • Identify when it might be failing
  • Challenge outputs that seem incorrect

The regulation uses the concept of "meaningful information", drawn from the GDPR's requirement to provide meaningful information about the logic of automated decisions (Article 22, read with the information rights in Articles 13–15 GDPR). For AI Act purposes, this means:

  • Not a full technical explanation, but an explanation a non-expert deployer can understand
  • Specific enough to identify the factors the AI considered
  • Clear about the system's known limitations

Practical Implications for B2B AI SaaS Companies

If you sell high-risk AI to enterprise customers, Article 13 creates concrete product requirements:

In-product documentation:

  • Your system must generate or include machine-readable or accessible documentation of AI outputs — what the AI decided, what factors it weighted, what confidence level it has
  • This cannot be provided only in a PDF manual; it must be accessible in the context of using the system, as in the sketch below
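
One way to make the documentation available in context is to expose it from the product itself. A minimal sketch using FastAPI; the framework choice, endpoint path, and card contents are assumptions, not anything the Act prescribes:

```python
from fastapi import FastAPI

app = FastAPI()

# Hypothetical contents: in practice this would be generated from the
# provider's Article 13 system card and kept in sync with the model version.
SYSTEM_CARD = {
    "provider_identity": "Example AI GmbH, compliance@example.com",
    "limitations": ["accuracy drops for applicants with thin credit files"],
    "performance_by_group": {"overall_accuracy": 0.91},
    "model_version": "2026.04",
}

@app.get("/v1/ai-documentation")
def get_ai_documentation() -> dict:
    """Serve Article 13 documentation where the system is used,
    rather than only in a static PDF manual."""
    return SYSTEM_CARD
```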

Customer-facing documentation:

  • Prepare a formal "system card" or AI disclosure document that covers all Article 13 requirements
  • This document must be available before contract signing — it is part of what the deployer needs to make an informed decision about using your system

Output formatting:

  • AI outputs must include enough context for the deployer and affected individual to understand them
  • A credit decision of "rejected" with no explanation does not meet Article 13
  • A decision accompanied by its contributing factors and a confidence level does, as sketched below
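
The contrast, sketched as two output payloads (the schema is illustrative):

```python
# Does not meet Article 13: a bare outcome with no context.
non_compliant_output = {"decision": "rejected"}

# Closer to what Article 13 expects: the outcome plus the factors that
# drove it and a confidence level, so the deployer and the affected
# person can understand and challenge it.
compliant_output = {
    "decision": "rejected",
    "confidence": 0.87,
    "contributing_factors": [
        {"factor": "debt_to_income_ratio", "value": 0.52, "weight": "high"},
        {"factor": "credit_history_length_months", "value": 14, "weight": "medium"},
    ],
    "model_version": "2026.04",
    "human_review_available": True,
}
```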

Known limitations disclosure:

  • You must disclose known accuracy limitations, especially for demographic subgroups
  • If your system performs worse for a particular group, that must be stated in the Article 13 documentation itself, not buried in technical appendices; an example disclosure follows
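
An example of what such a disclosure might contain. The groups and figures are invented for illustration:

```python
# Aggregate accuracy alone would hide the weaker subgroup performance.
performance_by_group = {
    "overall":      {"accuracy": 0.91},
    "age_under_25": {"accuracy": 0.84},  # known weaker performance: disclose it
    "age_25_to_60": {"accuracy": 0.93},
    "age_over_60":  {"accuracy": 0.89},
}
```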

Article 13 and GDPR Article 22 Interaction

The GDPR already gives individuals the right to meaningful information about significant automated decisions, and to human intervention (Article 22, together with the information rights in Articles 13–15 GDPR). Article 13 of the AI Act operates in parallel:

                      GDPR Article 22                               AI Act Article 13
Applies to            Automated decisions affecting individuals    High-risk AI systems
Beneficiary           Affected individual                          Deployer (then individual via deployer)
Content               Explanation of a specific decision           System capabilities and limitations
Right to challenge    Yes, to the data controller                  Via deployer and human oversight mechanisms

They are complementary, not redundant. A company deploying high-risk AI must satisfy both.


Common Gaps in Article 13 Compliance

Providing documentation that only covers the best case. If your system has accuracy variations across demographic groups, you must disclose this. Documentation that only shows aggregate performance metrics does not satisfy Article 13.

Documentation in the wrong format. Article 13 documentation must be accessible to deployers when they use the system, not only in a static PDF. B2B AI products increasingly provide this through an API-accessible documentation layer or an in-product admin panel.

No update process. If the system's performance characteristics change (new model version, new training data), the Article 13 documentation must be updated. Many companies treat documentation as a one-time activity.
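
A minimal sketch of one way to enforce this: gate model releases on matching documentation versions. The function and field names are assumptions:

```python
def validate_release(model_version: str, system_card: dict) -> None:
    """Refuse to release a model whose Article 13 documentation is stale.

    Illustrative gate only: a real pipeline would also re-run the
    subgroup performance evaluation and regenerate the documented metrics.
    """
    documented = system_card.get("model_version")
    if documented != model_version:
        raise ValueError(
            f"Article 13 documentation describes {documented}, "
            f"not {model_version}; update it before release."
        )
```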

Treating Article 13 as a sales document. Article 13 documentation is a compliance document. It must disclose limitations honestly, not position the product favourably. A document that omits known failure modes is non-compliant.

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →