Customer service chatbots and conversational AI products fall primarily under the EU AI Act's limited-risk transparency provisions. In most deployments they are not high-risk systems under Annex III, but they carry specific, enforceable disclosure obligations that apply from 2 August 2026.
This article explains what the AI Act requires from chatbot and conversational AI companies.
Chatbots Are Not High-Risk — But They Are Regulated
Most customer service chatbots are not classified as high-risk under Annex III. They are not making employment decisions, credit decisions, or clinical recommendations. They are answering product questions, handling returns, routing support tickets.
This classification matters because:
- You do not need a full conformity assessment for a customer service chatbot
- You do not need to register in the EU AI database for a standard support chatbot
- The documentation burden is lower than for high-risk systems
What you do have are transparency obligations under Article 50 — and they are not optional.
Article 50: The Transparency Rules for Chatbots
Article 50 of the AI Act sets out specific transparency requirements for AI systems that interact with humans.
1. Chatbot Disclosure (Article 50(1))
Providers of AI systems that interact with natural persons must ensure that those persons are informed they are interacting with an AI system, unless this is obvious from the context.
This requirement applies to:
- Customer service chatbots on websites and apps
- AI-powered voice assistants handling customer calls
- Automated email response systems driven by AI
- In-product AI assistants
What "obvious from context" means: A button clearly labelled "Chat with AI" or a chatbot clearly identified as a bot in the interface satisfies this. Ambiguous names ("Alex", "Your assistant") combined with human-like language do not.
Sincere questions override configuration: if a user asks sincerely whether they are talking to a human or an AI, the AI must disclose that it is an AI, regardless of how it has been configured. This is not something operators can override, and it applies even where disclosure would otherwise be "obvious from context". A product configured to claim to be human when sincerely asked is non-compliant.
2. Synthetic Content Labelling (Article 50(2)–(4))
For AI systems generating certain types of content, additional labelling is required:
- Deepfake images and video — AI-generated images and video that realistically depict real persons or realistic scenes must carry a machine-readable label identifying them as AI-generated
- Synthetic audio — AI-generated audio that imitates real persons must be labelled
- AI-generated text for public communication — text generated by AI and published to inform the public on matters of public interest must be disclosed as artificially generated, subject to limited exceptions (such as content that has undergone human editorial review)
For customer service chatbots, the deepfake rules are unlikely to apply. The core obligation is the chatbot disclosure — informing users they are talking to an AI.
When Chatbots Become High-Risk
A chatbot is not automatically low-risk just because it is a chatbot. The risk classification depends on what the chatbot does:
| Chatbot use case | Risk tier |
|---|---|
| Answering product questions | Limited transparency |
| Handling returns and order queries | Limited transparency |
| First-line triage for customer support | Limited transparency |
| Medical triage and symptom assessment | High-risk |
| Legal advice (substantive decisions) | High-risk (context-dependent) |
| Financial product recommendations | Potentially high-risk |
| Job application assistance that influences hiring | High-risk |
| Social scoring or profiling | Prohibited |
If your chatbot collects information and uses it to make or recommend decisions with significant effects on individuals, reassess the risk tier. A healthcare chatbot that advises on treatment options, or a financial chatbot that recommends products, is not categorically limited-risk.
Practical Compliance for Chatbot Products
What to Build Into Your Product
Disclosure flow:
- On opening a chat, display a clear disclosure that the user is interacting with an AI
- Do not require users to read through terms to find this — it must be upfront
- A brief message at the start of the conversation is sufficient: "Hi, I'm [Name], ComplyOne's AI assistant."
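The disclosure flow above can be sketched as a small guard in the chat backend. This is a minimal illustration, not a prescribed implementation: the assistant name, message shape, and `start_session` helper are all assumptions.

```python
# Sketch: make the AI disclosure the first message in every new chat
# session. ASSISTANT_NAME and the message dict shape are illustrative.

ASSISTANT_NAME = "Alex"  # human-sounding names are fine; claiming to be human is not

AI_DISCLOSURE = f"Hi, I'm {ASSISTANT_NAME}, an AI assistant. I'm not a human agent."

def start_session(history: list[dict]) -> list[dict]:
    """Prepend the disclosure so it is the first thing the user sees,
    rather than being buried in terms and conditions."""
    if not history or history[0].get("content") != AI_DISCLOSURE:
        return [{"role": "assistant", "content": AI_DISCLOSURE}] + history
    return history
```

Putting the disclosure in the message stream itself, rather than only in a UI banner, means it survives into transcripts and logs, which also helps with the documentation obligations discussed below.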
Sincere question handling:
- If a user asks "Are you a human?" or "Am I talking to a person?", the AI must answer truthfully
- Configure your system to handle this explicitly — do not leave it to the LLM's default behaviour
- Test this interaction before deployment
No deceptive personas:
- Do not configure the AI to deny being an AI in any circumstance
- Human-sounding names are acceptable; claiming human identity is not
Documentation:
- Maintain records of what AI systems you deploy and their configuration
- Document how disclosure is implemented
- For B2B chatbot SaaS: provide customers with documentation to help them deploy compliantly
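The records above can be as simple as a structured inventory of deployed systems and their disclosure configuration. The field names and values below are assumptions for illustration, not a prescribed schema.

```python
# Illustrative inventory of deployed AI systems and how disclosure is
# implemented for each. Field names and values are hypothetical.
DEPLOYED_SYSTEMS = [
    {
        "system": "support-chatbot",
        "model": "example-llm-v1",  # hypothetical model identifier
        "disclosure_text": "Hi, I'm an AI assistant.",
        "disclosure_shown_at": "session start, before first user message",
        "identity_question_handling": "deterministic intercept, tested",
        "last_reviewed": "2026-01-15",
    },
]
```

A record like this doubles as the documentation a B2B provider can hand to enterprise customers to show how compliant deployment is supported.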
What to Tell Your Enterprise Customers
If you sell chatbot technology to businesses that deploy it with their own branding, both you (the provider) and they (the deployer) have obligations:
- Provider: Build in the capability to display AI disclosure; document it in your technical materials
- Deployer (your customer): Configure the chatbot to disclose AI status; ensure their users see the disclosure
Your contracts with enterprise customers should clearly allocate responsibility for end-user disclosure compliance.
GDPR Intersection
Customer service chatbots collect personal data. In addition to AI Act obligations:
- Privacy notice must cover AI processing of customer conversations
- Data retention limits apply to conversation logs
- Customers can request deletion of their conversation data
- If conversations are used to improve the model, this requires a lawful basis and disclosure
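Retention limits on conversation logs can be enforced with a periodic purge. The 90-day window below is a placeholder for whatever your own privacy policy and lawful basis actually support, and the log structure is assumed.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; set this from your own privacy policy.
RETENTION_DAYS = 90

def purge_expired_logs(logs: list[dict]) -> list[dict]:
    """Drop conversation logs older than the retention window.
    Each log is assumed to carry a timezone-aware 'created_at' datetime."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [log for log in logs if log["created_at"] >= cutoff]
```

The same cutoff logic can back deletion requests: erasing a specific user's conversations on demand is a targeted version of the purge above.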
Timeline
Article 50 transparency obligations apply from 2 August 2026. This is a hard enforcement date. Chatbot products that do not carry AI disclosure by this date are non-compliant.
Start now:
- Audit all chatbot and conversational AI features in your product
- Confirm disclosure is implemented in the user interface
- Confirm the system correctly handles sincere questions about its AI nature
- Update product documentation and customer contracts to reflect disclosure requirements