Before any discussion of high-risk AI compliance, conformity assessments, or transparency requirements, the EU AI Act establishes a list of AI practices that are simply banned. Prohibited from 2 February 2025. No compliance pathway, no exception for startups, no grace period. If your product does any of these things, it cannot legally operate in the EU.
This article covers each prohibited category in turn.
Article 5: The Prohibited AI Practices
1. Subliminal Manipulation
AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behaviour, in a way that causes or is likely to cause significant harm to them or another person, are prohibited.
"Subliminal" here means techniques operating below conscious perception. This includes:
- Imperceptibly embedded audio or visual content designed to influence behaviour
- Micro-targeted emotional manipulation based on real-time psychological profiling
- Systems that exploit priming effects without the user's awareness
Note the "significant harm" threshold — minor influence does not automatically trigger this prohibition. The prohibition targets manipulation that causes real-world harm.
2. Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities of specific groups — defined by age, disability, or social and economic situation — to materially distort their behaviour in a harmful way are prohibited.
This covers:
- Targeting children with predatory AI-powered marketing or gaming mechanics
- Exploiting elderly users' cognitive limitations for financial gain
- Using AI to target individuals in financial hardship with harmful financial products
The distinction between permitted personalisation and prohibited exploitation is the intent and effect — systems designed to exploit a vulnerability for commercial gain at the expense of the vulnerable individual are prohibited.
3. Social Scoring
AI systems that evaluate or classify individuals or groups based on their social behaviour or their known, inferred, or predicted personal characteristics are prohibited where the resulting score leads to:
- Detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was originally generated
- Treatment that is unjustified or disproportionate to the social behaviour or its gravity
China's social credit system is the reference point, but note that the final text of the Act is not limited to government AI: the prohibition applies to both public authorities and private companies.
4. Real-Time Remote Biometric Identification in Public Spaces (Law Enforcement)
Law enforcement authorities are prohibited from using AI systems for real-time remote biometric identification (face recognition, gait analysis, etc.) in publicly accessible spaces, with narrow exceptions:
- Targeted searches for victims of abduction, trafficking, or sexual exploitation, and for missing persons
- Prevention of a specific, substantial, and imminent threat to life, or of a foreseeable terrorist attack
- Localisation or identification of suspects of certain serious criminal offences
Each exception is subject to prior authorisation by a judicial or independent administrative authority.
This applies to law enforcement, not private companies. Private companies operating in-store face recognition for loss prevention are not directly covered by this specific prohibition — but may be caught by GDPR special category data rules and other applicable regulations.
5. Biometric Categorisation Inferring Sensitive Attributes
AI systems that use biometric data to infer or deduce an individual's race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are prohibited.
This is broader than it might appear:
- Systems that infer political orientation from facial features are prohibited
- Systems that classify individuals by perceived race from biometric data are prohibited
- This covers not just surveillance systems but any product that uses biometric data to infer protected characteristics
6. Emotion Recognition in Workplace and Educational Settings
AI systems used to recognise or infer the emotions of natural persons in workplace and educational institution settings are prohibited.
This covers:
- Workplace monitoring tools that assess employee stress, engagement, or mood from facial analysis, voice tone, or behaviour
- Educational platforms that track student attention or emotional state using AI
- Video interview analysis tools that score candidates on detected emotional states
Note: The prohibition covers emotion recognition in these settings, not biometric authentication (face recognition for building access) or health monitoring for safety purposes.
7. Untargeted Scraping for Facial Recognition Databases
AI systems creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage are prohibited.
This targets systems that build mass biometric databases without individual consent, Clearview AI being the paradigmatic example. The prohibition turns on the untargeted nature of the collection from the internet or CCTV footage, not on the purpose the resulting database serves.
Who Does This Affect?
Startups and SaaS companies: Review your product for any of the above. The emotion recognition prohibition in workplace settings affects HR tech and productivity tools. The vulnerability exploitation prohibition potentially affects consumer fintech, gaming, and edtech. The manipulation prohibition affects any product using personalisation for persuasion.
Enterprises deploying AI: Review third-party AI tools. If a vendor provides a prohibited AI system, using it is itself non-compliant.
What to do if you are in doubt: The prohibition categories are written broadly and there will be edge cases. Seek legal advice on specific product features. The European AI Office is developing guidance on the prohibited practices.
Penalties for Prohibited AI
Violations of the prohibited AI provisions carry the highest fines in the AI Act:
- Up to €35 million or 7% of global annual turnover, whichever is higher
The prohibited practices deadline was 2 February 2025. Any product operating a prohibited AI system in the EU is already in violation.
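The fine cap is a simple maximum of two figures. A minimal sketch of the calculation, using a hypothetical turnover figure for illustration:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for violating the prohibited-practice
    provisions of the EU AI Act: EUR 35 million or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 1 billion global annual turnover:
# 7% of turnover (EUR 70M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# Hypothetical company with EUR 100 million turnover:
# 7% is only EUR 7M, so the EUR 35M floor applies.
print(max_fine_eur(100_000_000))  # 35000000.0
```

Note that this is only the statutory ceiling; the actual fine in any given case is set by the relevant market surveillance authority.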