Legal tech companies building AI-powered contract analysis, due diligence automation, document review, and legal research tools face a specific compliance challenge: most of their products sit in an ambiguous zone under the EU AI Act. Some are clearly high-risk. Others fall into the limited transparency or minimal risk categories. Getting the classification wrong — in either direction — has real consequences.
This article sets out how the AI Act applies to legal tech products and what you need to do before 2 August 2026, when the obligations for Annex III high-risk systems start to apply.
Risk Classification for Legal Tech AI
The EU AI Act's high-risk categories in Annex III are defined by use case, not technology. For legal tech, the critical categories are:
High-Risk (Annex III)
Point 5 — Access to essential private and public services, and Point 8 — Administration of justice. AI systems used to assess individuals' access to legal services, to assist judicial authorities in applying the law, or otherwise to determine legal outcomes can be high-risk. This covers:
- AI-assisted judicial decision support tools
- Automated legal aid eligibility assessors
- Tools that score case strength and influence legal strategy in ways that affect individuals' rights
Point 1 — Biometric categorisation (if applicable). If your tool profiles individuals from documents (e.g., extracting demographic information to inform legal strategy), this may trigger the biometric categorisation or profiling rules.
Point 3 — Education and vocational training (marginal). AI tools used to assess or evaluate people in legal training and assessment contexts may fall under this category.
Limited Transparency Risk
Most contract AI, document review, and legal research tools fall into the limited transparency category — not high-risk, but subject to disclosure obligations:
- AI-generated legal summaries, drafts, or recommendations must be disclosed as AI-generated
- Users must be aware they are interacting with an AI system
- Human review must be available for outputs that influence legal decisions
Minimal Risk
AI tools used purely for legal research, case law retrieval (including retrieval-augmented generation), grammar and formatting assistance, or document classification with no direct impact on individual rights are minimal risk. The main expectation is voluntary adherence to codes of conduct under the AI Act (Article 95).
The Pivotal Question: Does Your Tool Influence Individual Rights?
The distinction between limited transparency risk and high-risk in legal tech turns on a single question: Does the AI output directly influence a decision that affects an individual's legal rights, employment, or access to services?
| Use case | Risk tier |
|---|---|
| Contract clause extraction and summarisation | Limited transparency |
| Due diligence automation (corporate, M&A) | Limited transparency |
| AI-assisted judicial decision support | High-risk |
| Legal aid eligibility screening | High-risk |
| Employment contract review for HR decisions | High-risk (if used to make employment decisions) |
| Case strategy recommendation for lawyers | Limited transparency (lawyer retains decision) |
| Automated demand letter generation | Limited transparency |
| Document classification for e-discovery | Limited transparency |
| Regulatory compliance checking | Limited transparency |
The key variable is who makes the final decision and whether the AI output bypasses human judgment. If a lawyer reviews the AI recommendation before acting on it, the risk tier is generally lower. If the AI output is actioned directly without review, the system is far more likely to be treated as high-risk.
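One way to operationalise that question is a first-pass triage of each AI feature during product review. The sketch below is a minimal illustration, not a legal determination; the field names and tier labels are hypothetical, and any "high-risk candidate" result still needs a proper Annex III assessment.

```python
from dataclasses import dataclass

@dataclass
class AIFeature:
    name: str
    affects_individual_rights: bool  # output influences a decision about a person's rights,
                                     # employment, or access to services
    human_reviews_output: bool       # a lawyer or paralegal reviews before the output is acted on

def provisional_risk_tier(feature: AIFeature) -> str:
    """First-pass triage only; the real Annex III assessment needs legal review."""
    if feature.affects_individual_rights and not feature.human_reviews_output:
        return "high-risk candidate: escalate for a full Annex III assessment"
    if feature.affects_individual_rights:
        return "elevated: document the human-review step and reassess"
    return "limited transparency / minimal risk: disclosure obligations may still apply"

# Example: clause extraction whose output is always reviewed by a lawyer
print(provisional_risk_tier(AIFeature("clause_extraction",
                                      affects_individual_rights=False,
                                      human_reviews_output=True)))
```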
Obligations for Limited Transparency Risk Systems
Most legal tech AI falls here. Requirements:
Disclosure obligation: Users must be clearly informed when they are interacting with AI-generated content (a metadata sketch follows this list). This includes:
- Flagging AI-generated contract summaries as AI-generated
- Identifying AI-extracted clauses as AI-extracted (with confidence indicators where useful)
- Disclosing when legal research results are AI-ranked or AI-curated
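One way to satisfy these disclosure points in-product is to attach provenance metadata to every AI-generated artefact so the UI can label it consistently. A minimal sketch, with hypothetical field and model names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AIProvenance:
    ai_generated: bool            # drives the "AI-generated" label in the UI
    model: str                    # e.g. the upstream GPAI model or a fine-tuned variant
    confidence: Optional[float]   # surfaced as a confidence indicator where accuracy varies
    human_reviewed: bool = False  # stays False until a reviewer signs off

summary = {
    "text": "Termination requires 90 days' written notice ...",
    "provenance": AIProvenance(ai_generated=True, model="contract-summariser-v2",
                               confidence=0.82),
}
print(summary["provenance"].ai_generated)  # True -> UI shows the AI-generated label
```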
Human review pathway: There must be a clear mechanism for human review of AI output (see the sketch after this list). For legal tech, this typically means:
- Lawyer or paralegal review before AI-generated content is used in a client matter
- Flagging low-confidence extractions for manual verification
- No system that presents AI output as definitive without qualification
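A simple way to enforce this pathway in code is a gate that routes low-confidence output to a review queue and refuses to release anything into a client matter until a reviewer has signed off. A sketch under an assumed confidence threshold and data shape:

```python
REVIEW_THRESHOLD = 0.75  # assumed cut-off: extractions below this go to manual verification

def release_for_client_use(output: dict, review_queue: list) -> dict:
    """Refuse to release AI output into a client matter until it has been reviewed."""
    confidence = output.get("confidence")
    if confidence is None or confidence < REVIEW_THRESHOLD:
        review_queue.append(output)  # low-confidence: route to manual verification
        raise PermissionError("held for manual verification")
    if not output.get("human_reviewed"):
        raise PermissionError("requires lawyer or paralegal sign-off before client use")
    return output

queue: list = []
draft = {"clause": "Limitation of liability ...", "confidence": 0.62, "human_reviewed": False}
try:
    release_for_client_use(draft, queue)
except PermissionError as reason:
    print(reason)  # held for manual verification
```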
No deceptive design: AI output must not be presented in a way that implies it has been reviewed when it has not. Contract review tools that present "compliant" or "non-compliant" verdicts without surfacing the underlying clause analysis or confidence level may fall foul of this requirement.
GPAI Models and Legal Tech
Many legal tech companies build on top of general-purpose AI (GPAI) models — GPT-4, Claude, Gemini. The AI Act places obligations on the GPAI model providers (OpenAI, Anthropic, Google) but also on the companies building downstream applications.
If you build a legal tech product on a GPAI model:
- You are typically a provider of the downstream AI system under the Act (your customers are the deployers)
- You must conduct your own risk assessment of the downstream use
- You must implement appropriate controls for the deployment context (legal advice = higher-scrutiny context)
- You cannot rely on the upstream provider's AI Act compliance as a substitute for your own
The upstream provider's transparency obligations (publishing training data summaries, maintaining technical documentation) help you but do not replace your compliance obligations.
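In practice this means keeping your own downstream assessment distinct from the upstream provider's documentation. A hedged sketch of such a record follows; the fields are illustrative, not a template prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class DownstreamAssessment:
    feature: str               # the product feature built on the GPAI model
    upstream_model: str        # the GPAI provider handles its own transparency duties
    deployment_context: str    # e.g. contract review for corporate clients
    annex_iii_candidate: bool  # outcome of your own risk classification, not the provider's
    controls: list[str] = field(default_factory=list)  # controls you implement for this context

assessment = DownstreamAssessment(
    feature="demand_letter_drafting",
    upstream_model="gpt-4",
    deployment_context="drafts reviewed by a lawyer before sending",
    annex_iii_candidate=False,
    controls=["AI-generated label", "mandatory lawyer review", "confidence indicator"],
)
print(assessment.annex_iii_candidate)
```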
Practical Compliance Checklist for Legal Tech AI
Documentation:
- Risk classification for each AI feature (Annex III assessment)
- Technical documentation for any high-risk systems
- Description of how AI output influences user decision-making
Transparency:
- Clear in-product labelling of AI-generated output
- Confidence indicators where AI output accuracy varies
- No design that implies AI output has been reviewed when it has not
Human oversight:
- Review workflow for AI-generated content before it is used in client matters
- Escalation pathway for low-confidence or high-stakes AI output
Data governance:
- Training data documentation if you have fine-tuned models on legal data
- Data processing agreements with customers covering legal document data
- Retention and deletion procedures for customer document data
Ongoing:
- Post-market monitoring — tracking accuracy and failure modes in production
- Process for reporting serious incidents if AI output contributes to legal harm
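For the post-market monitoring item, a lightweight approach is to log how reviewers treat each AI output in production and flag anything that may amount to a serious incident. A minimal sketch with assumed field names and log location:

```python
import json
import time

MONITORING_LOG = "ai_monitoring.jsonl"  # assumed append-only production log

def log_outcome(feature: str, output_id: str, reviewer_action: str,
                serious_incident: bool = False) -> None:
    """Record how reviewers treated an AI output; flag candidates for incident reporting."""
    event = {
        "ts": time.time(),
        "feature": feature,
        "output_id": output_id,
        "reviewer_action": reviewer_action,    # "accepted", "corrected" or "rejected"
        "serious_incident": serious_incident,  # feeds the serious-incident reporting process
    }
    with open(MONITORING_LOG, "a") as log:
        log.write(json.dumps(event) + "\n")

log_outcome("clause_extraction", "doc-123/clause-7", "corrected")
```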