
How to Write an AI System Technical Documentation

5 min read · Updated 2 May 2026

Technical documentation is the foundation of EU AI Act compliance for high-risk AI systems. It is the document set you must prepare before deployment, maintain throughout the system's lifetime, and make available to national authorities on request. Without it, there is no conformity assessment and no legal deployment in the EU.

This article covers exactly what technical documentation must contain and how to build it.


What Technical Documentation Is (and Isn't)

Technical documentation is not a marketing brochure describing your AI system. It is a formal compliance document that demonstrates:

  • The system was designed with the AI Act requirements in mind
  • The development process was documented and auditable
  • The system's capabilities and limitations are understood and disclosed
  • The system is safe and appropriate for its intended use

It is also not the same as a data protection impact assessment (DPIA). A DPIA covers GDPR obligations — data minimisation, lawful basis, data subject rights. Technical documentation covers AI Act obligations — model performance, training data quality, bias, human oversight. They are separate documents with some overlapping information.


The Required Contents (Annex IV)

The AI Act's Annex IV sets out the required content. Below is each element with practical guidance.

1. General Description of the AI System

  • Name, version, and intended purpose
  • The deployment context — who uses it, for what decisions, in what sector
  • The types of individuals affected by the system's outputs
  • How it interacts with other systems or software
  • Hardware requirements (if relevant)

Practical tip: Write this section as if explaining the product to a technically informed regulator who has never seen it. Be precise about intended purpose — vague descriptions ("an AI assistant") will not satisfy a conformity assessment.

2. Description of the Elements and Development Process

This is the technical heart of the documentation:

  • Architecture: How the model works at a high level (classification, generation, regression, reinforcement learning, etc.)
  • Design choices: Key decisions made during development and why
  • Training methodology: How the model was trained, what optimisation approach was used
  • Computing resources: Infrastructure used for training and deployment
  • Third-party components: Models, datasets, or libraries incorporated into the system

You do not need to publish model weights or proprietary code. You do need enough detail for a technically competent reviewer to understand how the system works and how it was built.
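
For the third-party components item, one practical way to keep the inventory auditable is to maintain it as structured data next to the prose. Below is a minimal sketch in Python; the schema, component names, and URLs are illustrative placeholders, not anything Annex IV prescribes.

```python
# Illustrative inventory of third-party components incorporated into the system.
# Field names, component names, and URLs are assumptions for illustration;
# Annex IV prescribes the content to cover, not a specific format.
THIRD_PARTY_COMPONENTS = [
    {
        "name": "example-base-model",           # hypothetical upstream model
        "type": "pretrained model",
        "version": "2.1",
        "source": "https://example.com/model",  # placeholder URL
        "licence": "Apache-2.0",
        "role": "feature extraction backbone",
    },
    {
        "name": "example-tokeniser-lib",         # hypothetical library
        "type": "library",
        "version": "0.9.3",
        "source": "https://example.com/lib",     # placeholder URL
        "licence": "MIT",
        "role": "input preprocessing",
    },
]

def render_component_table(components: list[dict]) -> str:
    """Render the inventory as a plain-text table for inclusion in the documentation."""
    header = f"{'Name':<25}{'Type':<20}{'Version':<10}{'Licence':<12}Role"
    rows = [
        f"{c['name']:<25}{c['type']:<20}{c['version']:<10}{c['licence']:<12}{c['role']}"
        for c in components
    ]
    return "\n".join([header, *rows])

if __name__ == "__main__":
    print(render_component_table(THIRD_PARTY_COMPONENTS))
```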

3. Information on Training, Validation, and Testing Data

This section covers your data governance:

  • Data sources: Where training data came from
  • Data collection methods: How data was gathered, labelled, and curated
  • Data processing steps: Cleaning, augmentation, and preprocessing applied
  • Known limitations: Data gaps, underrepresented populations, potential biases
  • Validation dataset: How the validation set was constructed and what it tests
  • Test dataset: How final testing was conducted and what metrics were measured

This is one of the most scrutinised sections. For high-risk AI, authorities will look carefully at whether the training data represents the intended deployment population. A model trained predominantly on Western European data but deployed for clinical decisions across a broader EU population needs to document that gap and explain the mitigations.
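
One way to keep this section current is to maintain a machine-readable record per dataset alongside the written description. A minimal sketch follows; the schema and every value in it are illustrative rather than a mandated format.

```python
# Illustrative record for one training dataset, kept next to the prose section.
# The schema, dataset name, and figures are assumptions for illustration only.
TRAINING_DATA_RECORD = {
    "name": "clinical-notes-2023",            # hypothetical dataset name
    "source": "partner hospitals (contracted)",
    "collection_method": "exported from EHR systems, manually labelled by clinicians",
    "preprocessing": ["de-identification", "deduplication", "label harmonisation"],
    "size": 120_000,                          # number of records
    "split": {"train": 0.8, "validation": 0.1, "test": 0.1},
    "known_limitations": [
        "patients over 80 underrepresented (~4% of records)",
        "data collected in three member states only",
    ],
    "mitigations": [
        "stratified sampling for the validation and test splits",
        "subgroup performance reported separately (see section 4)",
    ],
}
```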

4. Performance Metrics

Document the system's accuracy, reliability, and other relevant performance metrics:

  • Primary performance metric (accuracy, F1 score, AUC, etc.) — with value
  • Subgroup performance — broken down by relevant demographic groups if applicable
  • Confidence intervals where relevant
  • Known failure modes — when does the system perform poorly?
  • Performance in edge cases — how does it handle unusual inputs?

For high-risk systems, pay particular attention to subgroup performance. If a credit scoring model performs 15% worse for one demographic group, that figure belongs in the documentation, together with an explanation of how the gap is being addressed.
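
As a sketch of how per-subgroup figures and confidence intervals might be produced for this section (the accuracy metric, the bootstrap approach, and the group handling are illustrative choices, not requirements of the Act):

```python
import random
from statistics import mean

def accuracy(labels: list[int], preds: list[int]) -> float:
    """Share of predictions that match the ground-truth labels."""
    return mean(1 if y == p else 0 for y, p in zip(labels, preds))

def bootstrap_ci(labels, preds, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy."""
    rng = random.Random(seed)
    n = len(labels)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(accuracy([labels[i] for i in idx], [preds[i] for i in idx]))
    scores.sort()
    lo = scores[int(alpha / 2 * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def subgroup_report(labels, preds, groups):
    """Accuracy per subgroup with 95% confidence intervals, for the documentation."""
    report = {}
    for g in sorted(set(groups)):
        y = [l for l, grp in zip(labels, groups) if grp == g]
        p = [pr for pr, grp in zip(preds, groups) if grp == g]
        lo, hi = bootstrap_ci(y, p)
        report[g] = {"n": len(y), "accuracy": accuracy(y, p), "ci95": (lo, hi)}
    return report
```

The same pattern applies to F1, AUC, or whatever primary metric the system reports.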

5. Monitoring, Functioning, and Control Measures

Describe how the system operates in production and how it is monitored:

  • Logging architecture — what outputs are logged, for how long (a minimal sketch follows this list)
  • Human oversight mechanisms — how humans review and can override outputs
  • Monitoring processes — what metrics are tracked post-deployment
  • Incident escalation — what triggers an alert, who reviews it, what action is taken
  • Update and version control — how model updates are managed and validated
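
To make the logging item above concrete, here is a minimal sketch of a structured per-prediction log entry. The field names, retention period, and example values are assumptions for illustration; the Act requires logging capability for high-risk systems but does not fix a schema.

```python
import json
import uuid
from datetime import datetime, timezone

RETENTION_DAYS = 365  # assumption for illustration; set per your own retention analysis

def log_prediction(model_version: str, input_ref: str, output, confidence: float,
                   overridden_by_human: bool = False) -> dict:
    """Build a structured log entry for one model output.

    `input_ref` is a reference (e.g. a storage key) rather than the raw input,
    to keep personal data out of the log itself.
    """
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,
        "output": output,
        "confidence": confidence,
        "overridden_by_human": overridden_by_human,
        "retention_days": RETENTION_DAYS,
    }
    # In production this would go to an append-only log store; printing is a stand-in.
    print(json.dumps(entry))
    return entry

# Example call; the identifiers and storage key are placeholders.
log_prediction("credit-scorer-2.3", "s3://inputs/2026/05/abc123",
               output="refer", confidence=0.62)
```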

6. Cybersecurity Measures

High-risk AI systems must be protected against adversarial attacks:

  • Input validation — how the system handles adversarial or out-of-distribution inputs (see the sketch after this list)
  • Access controls — who can access and modify the model
  • Monitoring for adversarial behaviour — detecting inputs designed to manipulate outputs
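
A minimal sketch of the input-validation idea for a tabular system, assuming expected feature ranges are derived from the training data. The feature names, ranges, and review policy are illustrative.

```python
# Illustrative input validation for a tabular model: reject malformed inputs and
# flag values far outside the ranges seen in training (a crude out-of-distribution check).
# Feature names, ranges, and the flagging policy are assumptions for illustration.
EXPECTED_RANGES = {
    "age": (18, 100),
    "monthly_income_eur": (0, 50_000),
    "loan_amount_eur": (500, 500_000),
}

class InputRejected(ValueError):
    pass

def validate_input(features: dict) -> dict:
    missing = set(EXPECTED_RANGES) - set(features)
    if missing:
        raise InputRejected(f"missing features: {sorted(missing)}")
    flags = []
    for name, (lo, hi) in EXPECTED_RANGES.items():
        value = features[name]
        if not isinstance(value, (int, float)):
            raise InputRejected(f"{name} must be numeric, got {type(value).__name__}")
        if not lo <= value <= hi:
            flags.append(f"{name}={value} outside training range [{lo}, {hi}]")
    # Flagged inputs are routed to human review rather than scored automatically.
    return {
        "features": features,
        "out_of_distribution_flags": flags,
        "needs_human_review": bool(flags),
    }
```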

7. Results of Conformity Assessment

The completed internal conformity assessment checklist, or the notified body certificate, must be included in or referenced from the technical documentation.

8. Copy of the Declaration of Conformity

The formal declaration signed by the provider confirming that the system complies with the AI Act.


How to Structure and Maintain Technical Documentation

Treat it as a living document. Technical documentation is not written once and filed. It must be updated when:

  • The system is substantially modified
  • New performance data or failure modes are identified
  • The deployment context changes

Version it. Each major update to the system should produce a new version of the technical documentation, with changes documented.
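
A sketch of what one version entry might record (the fields and values are illustrative, not a mandated format):

```python
# Illustrative change record for one revision of the technical documentation.
DOC_VERSION_ENTRY = {
    "doc_version": "3.2",
    "system_version": "credit-scorer-2.3",   # hypothetical system identifier
    "date": "2026-05-02",
    "changes": [
        "added subgroup performance results for the 2025 validation cohort",
        "updated logging retention period following legal review",
    ],
    "approved_by": "compliance lead",
}
```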

Store it securely and accessibly. Authorities can request it. It must be available in English or the relevant member state language within a reasonable timeframe.

Typical length: For a well-scoped high-risk AI system, complete technical documentation typically runs 40–120 pages. Simple systems will be shorter; complex systems with multiple use cases or extensive training data will be longer.

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →