Recruitment and hiring automation is explicitly listed in Annex III of the EU AI Act as a high-risk AI use case. If your product uses AI to screen CVs, rank candidates, score interviews, or assist in hiring decisions, you are in the high-risk category. This applies to both the companies building these tools and the employers using them.
Why Recruitment AI Is Explicitly High-Risk
The EU AI Act classifies AI systems in the following employment and recruitment contexts as high-risk (Annex III, Section 4):
- CV screening and sorting — AI systems that filter applicants before a human reviews them
- Candidate ranking and scoring — algorithms that produce ranked candidate lists based on application materials
- Interview analysis — AI that analyses video, audio, or text from interviews to score candidates
- Job advertising targeting — AI that determines which individuals see job postings
- Employment decisions — AI assisting with promotion, performance management, or contract termination
The regulation covers both standalone SaaS tools sold to employers and AI features embedded in HR platforms (HRIS and ATS products with AI layers).
Who Is Affected
HR SaaS vendors: Companies building applicant tracking systems, CV screeners, video interview analysis tools, and candidate ranking platforms. You are an AI system provider — the high-risk obligations fall on you to build and document.
Employers using recruitment AI: Companies using third-party or in-house AI for hiring decisions. You are an AI deployer — you have obligations around transparency, oversight, and informing candidates.
Both parties have obligations. The vendor must build a compliant system. The employer must use it compliantly.
High-Risk Requirements for Recruitment AI Systems
Technical Documentation (Article 11)
You must prepare documentation covering:
- The AI system's intended purpose in the recruitment workflow
- Performance metrics (precision, recall, false positive/negative rates by demographic group)
- Training data composition — sources, demographic representation, labelling methodology
- Known limitations and failure modes
- How the system handles edge cases (career gaps, non-linear career paths, different education systems)
Recruitment AI trained on historical hiring data is known to replicate historical bias. Your technical documentation must address how you have identified and mitigated this. Regulators and customers will ask for it.
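One way to produce the per-group error rates the documentation calls for is a simple evaluation pass over labelled outcomes. A minimal sketch, using hypothetical evaluation records where ground truth is a human-validated "should this candidate have advanced?" label (the record format and group names are illustrative, not prescribed by the Act):

```python
from collections import defaultdict

def per_group_error_rates(records):
    """Compute false positive/negative rates per demographic group.

    `records` is a list of (group, predicted_pass, actual_pass) tuples
    from a hypothetical evaluation set; `actual_pass` is the human-
    validated ground truth for whether the candidate should advance.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # qualified candidate screened out
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # unqualified candidate advanced
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Hypothetical evaluation records: (group, model_prediction, ground_truth)
sample = [
    ("group_a", True, True), ("group_a", False, True),
    ("group_a", True, False), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]
print(per_group_error_rates(sample))
```

A materially higher false negative rate for one group is exactly the kind of finding the technical documentation must record, along with the mitigation taken.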
Data Governance (Article 10)
Training data requirements are strict:
- Training datasets must be examined for known demographic biases, with findings documented
- Steps taken to identify and correct bias must be recorded
- Validation datasets must be representative of the intended candidate population
- Testing for discriminatory outcomes by protected characteristics (gender, age, ethnic background, disability) must be conducted and documented
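The outcome testing above can be sketched as a selection-rate comparison across groups. The 0.8 threshold below is the US "four-fifths" heuristic, not an AI Act requirement; it is used here only as a widely recognised audit signal, and the audit data is hypothetical:

```python
def selection_rates(outcomes):
    """Selection rate per group: share of applicants the system advances.

    `outcomes` maps group -> (advanced, total); hypothetical audit data.
    """
    return {g: adv / total for g, (adv, total) in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio below 0.8 is a common red flag (the US "four-fifths"
    heuristic, not an AI Act threshold) that warrants investigation
    and a documented response.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical audit: group -> (candidates advanced, candidates screened)
audit = {"men": (40, 100), "women": (28, 100)}
print(disparate_impact_ratios(audit, reference_group="men"))
```

Running this test per protected characteristic, and keeping the results with the data governance records, covers the "conducted and documented" part of the requirement.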
Transparency (Article 13)
Employers deploying your system must be able to explain its outputs to candidates. This means:
- Outputs must include explanations — not just a score, but the factors behind the score
- The system must flag low-confidence assessments
- Decision logs must be retained for audit purposes
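The three transparency points above can be reflected in the output schema itself: an explanation alongside the score, an explicit low-confidence flag, and an append-only audit log. A minimal sketch; the field names and the 0.6 confidence floor are assumptions for illustration, not values prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

CONFIDENCE_FLOOR = 0.6  # assumed threshold; tune from validation results

@dataclass
class ScreeningResult:
    candidate_id: str
    score: float
    top_factors: list      # e.g. [("relevant_experience", 0.30), ...]
    confidence: float
    low_confidence: bool = field(init=False)

    def __post_init__(self):
        # Flag assessments the model is unsure about so reviewers
        # scrutinise them rather than trust the score at face value.
        self.low_confidence = self.confidence < CONFIDENCE_FLOOR

def log_decision(result, logfile):
    """Append a timestamped, machine-readable audit record."""
    entry = asdict(result)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    logfile.write(json.dumps(entry) + "\n")

result = ScreeningResult(
    candidate_id="cand-001",
    score=0.71,
    top_factors=[("relevant_experience", 0.30), ("skills_match", 0.25)],
    confidence=0.55,
)
```

Because the factors and confidence travel with the score, the employer deploying the system can answer a candidate's "why was I rejected?" question from the record rather than reconstructing it after the fact.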
Human Oversight (Article 14)
Recruitment AI must not make autonomous hiring decisions. Requirements:
- A human must be able to review and override AI-generated rankings and scores
- The system must make override easy — not buried in the interface
- Final hiring decisions must involve a human reviewer
- The system must not be designed to discourage human review (e.g., by presenting AI scores as definitive)
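The oversight requirements above can be made structural rather than procedural: treat the AI output as advisory, require a named reviewer to finalise, and record overrides as first-class events. A sketch under those assumptions (the class and field names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HiringDecision:
    candidate_id: str
    ai_score: float
    ai_recommendation: str            # e.g. "advance" or "reject"
    reviewer: Optional[str] = None
    human_decision: Optional[str] = None

    def finalise(self, reviewer, decision):
        # The AI output is advisory: only a named human reviewer can
        # finalise, and they may freely override the recommendation.
        self.reviewer = reviewer
        self.human_decision = decision

    @property
    def overridden(self):
        return (self.human_decision is not None
                and self.human_decision != self.ai_recommendation)

    @property
    def is_final(self):
        return self.reviewer is not None and self.human_decision is not None

decision = HiringDecision("cand-042", ai_score=0.41, ai_recommendation="reject")
decision.finalise(reviewer="r.lopez", decision="advance")
```

Making `is_final` depend on a named reviewer means no candidate can be rejected by the AI alone, and the override rate itself becomes auditable evidence that human review is meaningful rather than rubber-stamping.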
EU AI Database Registration (Article 49)
High-risk AI systems must be registered in the EU AI Act database before deployment. For recruitment AI sold to employers, the vendor registers the system.
Employer Obligations When Using Recruitment AI
If you are an employer using AI-powered recruitment tools (not the vendor building them):
Transparency to candidates (Article 26): Candidates must be informed that AI is being used in the screening or selection process. This must happen before the process begins, not be buried in terms and conditions.
Meaningful human review: You cannot rely entirely on AI rankings without human review. Candidates rejected purely by AI screening without any human involvement may have grounds to challenge the process under GDPR Article 22 (automated decision-making) as well as the AI Act.
Data subject rights: Candidates have the right to request an explanation of the automated assessment that affected their application. You must be able to provide this within GDPR's one-month response window (Article 12).
GDPR Intersection: Automated Decision-Making
Recruitment AI sits at the intersection of the AI Act and GDPR Article 22. Automated decisions with significant effects — including rejection from a hiring process — require:
- Informing the candidate that automated processing is taking place
- The right to request human review of the decision
- The right to contest the decision
Many recruitment AI products have GDPR Article 22 obligations that predate the AI Act. If you are already compliant with these, AI Act transparency requirements largely overlap. The additional AI Act obligations are mainly around technical documentation, data governance, and registration.
Compliance Checklist for Recruitment AI Vendors
- Classify the product as high-risk under AI Act Annex III, Section 4
- Prepare technical documentation per Article 11
- Audit training datasets for demographic bias and document findings
- Conduct and document bias testing across protected characteristics
- Implement explainability — outputs must include rationale, not just scores
- Build human override capability into the interface
- Create customer documentation explaining how to use the system compliantly
- Register in the EU AI database before August 2026
- Establish post-market monitoring and incident reporting procedures