
EU AI Act for HR Software Companies

7 min read · Updated 2 May 2026

HR software is one of the most clearly high-risk categories under the EU AI Act. If your platform uses AI to screen CVs, rank candidates, evaluate employee performance, monitor workers, or influence any employment decision — you are building a high-risk AI system under Annex III.

This is not a grey area. The Act explicitly names employment, worker management, and access to self-employment as a high-risk category. Compliance is required by 2 August 2026.


Which HR AI Features Are High-Risk

The trigger is whether your AI feature influences a decision that affects a person's employment. That is a broad definition, and it is intentional.

| Feature | High-risk? | Why |
| --- | --- | --- |
| CV parsing and ranking | Yes | Directly influences candidate shortlisting |
| Candidate scoring / matching | Yes | Influences hiring decision |
| Interview question generation | Possibly, if output influences assessment | Depends on how it's used |
| Automated interview analysis (tone, word choice) | Yes | Assesses candidate characteristics |
| Performance scoring / OKR tracking with AI | Yes | Influences promotion, termination |
| Absence monitoring with AI flags | Yes | Influences disciplinary decisions |
| Workforce scheduling with AI allocation | Possibly, if linked to pay or conditions | Review the decision context |
| Salary benchmarking | No (general data, no individual decision) | Informational only |
| Learning path recommendations | No, unless linked to access/assessment | Training, not gatekeeping |
| Employee sentiment analysis | Depends, if used to flag individuals | High concern even if not strictly Annex III |
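The classification table above can be kept as a machine-readable feature inventory, which is a useful starting point for the audit work described later in this article. The feature keys and tier labels below are illustrative, not taken from the Act, and every classification still needs legal review:

```python
# The classification table encoded as a starting inventory.
# Keys and tier labels are illustrative, not legal conclusions;
# "review" means the feature needs case-by-case assessment.
FEATURE_RISK = {
    "cv_parsing_ranking": "high",
    "candidate_scoring": "high",
    "interview_question_generation": "review",  # depends on use
    "automated_interview_analysis": "high",
    "performance_scoring": "high",
    "absence_monitoring_flags": "high",
    "workforce_scheduling": "review",           # if linked to pay/conditions
    "salary_benchmarking": "not_high_risk",
    "learning_path_recommendations": "review",  # unless gatekeeping
    "employee_sentiment_analysis": "review",
}

def high_risk_features(inventory):
    """Return features that need the full high-risk compliance track."""
    return [f for f, tier in inventory.items() if tier == "high"]

print(high_risk_features(FEATURE_RISK))
```

Keeping the inventory in code (or config) makes it easy to check, on every release, that each new AI feature has been classified before it ships.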

What "Significantly Affects" Means

The AI Act applies to AI systems used to "make or significantly assist" decisions on access to employment, advancement, and termination. The word significantly matters.

An AI that outputs a pass/fail on a CV is clearly significant. An AI that suggests 10 potential interview questions has less direct influence. But if a recruiter uses that AI output as the primary or only driver of their decision, the system is still significantly influencing the outcome.

Courts and regulators will look at how the system is actually used in practice, not just what your product documentation says.


Your Obligations as a High-Risk Provider

As a provider of a high-risk AI system, you must meet these requirements before placing your product on the EU market:

1. Risk Management System

Implement a documented risk management process across the AI system's lifecycle: not a one-time assessment, but an ongoing process that identifies risks, evaluates them, and implements mitigations. It must be reviewed and updated regularly.

2. Data Governance

Training and evaluation data must be:

  • Relevant to the intended purpose
  • Sufficiently representative of the population the system will be used on
  • Free from errors and bias that could lead to discrimination
  • Documented — you need to show where your training data came from and how it was evaluated for bias

For HR AI, this means your training data should reflect diversity across gender, ethnicity, age, and other protected characteristics. Systems trained predominantly on historical hiring data from homogeneous companies carry structural bias risk.
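One way to make that bias check concrete is to compare selection rates across groups in your evaluation data. The sketch below uses the four-fifths rule of thumb (a ratio below 0.8 flags possible disparate impact); the data layout and the threshold are illustrative assumptions, not AI Act requirements:

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from screening outcomes.

    `records` is a list of (group, selected) pairs, e.g. a protected
    characteristic and whether the candidate passed the AI screen.
    The structure is illustrative.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(records)
print(disparate_impact_ratio(rates))  # well below 0.8 -> investigate
```

A check like this belongs in the testing pipeline, so a model version that degrades on a protected group fails the build rather than reaching production.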

3. Technical Documentation

Article 11 requires detailed technical documentation including:

  • Description of the AI system, including its purpose and intended use
  • The design and development methodology
  • Architecture, algorithms, and key design choices
  • Training, testing, and validation datasets
  • Performance metrics and accuracy benchmarks
  • Known limitations and foreseeable risks

This documentation must be sufficient for a conformity assessment. It should be maintained and updated as the system changes.
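Teams often find it easier to keep this documentation as structured data, versioned alongside the system itself, so it stays in sync as the product changes. The skeleton below loosely mirrors the bullet list above; the keys and placeholder values are illustrative, not an official template:

```python
# Illustrative skeleton for the technical documentation, kept as
# structured data so it can be versioned with the system. Section
# names paraphrase the Article 11 headings; this is a sketch, not
# a legally complete template.
tech_doc = {
    "system": {
        "name": "cv-ranking",
        "intended_purpose": "Rank applicants for recruiter review",
        "version": "3.1.0",
    },
    "development": {
        "methodology": "",
        "architecture": "",
        "key_design_choices": [],
    },
    "data": {
        "training_datasets": [],
        "validation_datasets": [],
        "bias_evaluation": "",
    },
    "performance": {"metrics": {}, "known_limitations": []},
    "risks": {"foreseeable_misuse": [], "mitigations": []},
}
```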

4. Record-Keeping and Logging

Your system must be capable of automatic logging of events relevant to identifying risks and post-deployment monitoring. For HR AI, this typically means:

  • Logging when the system was used and for which candidate/employee
  • Recording the output (score, recommendation, flag)
  • Enabling reconstruction of decisions for audit purposes

Deployers (your customers using your software) rely on this logging to demonstrate compliance on their side. It is part of your product's value proposition, not just a compliance cost.
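A minimal sketch of what one such audit record could look like, assuming an append-only JSON store. The field names are illustrative, not mandated by the Act; the point is that each use of the system is reconstructable:

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_event(subject_id, feature, model_version, output, store):
    """Append one audit record for an AI-assisted HR decision.

    Field names are illustrative. What matters is reconstructability:
    who was assessed, when, by which model version, with what output.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,        # candidate/employee reference
        "feature": feature,              # e.g. "cv_ranking"
        "model_version": model_version,  # ties the output to a model build
        "output": output,                # score, recommendation, flag
    }
    store.append(json.dumps(record))
    return record

audit_log = []
log_ai_event("cand-042", "cv_ranking", "v3.1.0",
             {"score": 0.82, "rank": 4}, audit_log)
```

Recording the model version alongside each output is what lets you answer, months later, which build of the system produced a contested recommendation.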

5. Transparency to Deployers

You must provide customers with instructions that are clear about:

  • What the system does and does not do
  • Performance characteristics and known limitations
  • How to use it in a way that enables human oversight
  • What decisions the system influences

Your documentation, terms, and product interface all contribute to this.

6. Human Oversight

High-risk AI systems must be designed to allow human oversight. For HR software, this means:

  • The system cannot fully automate consequential hiring or employment decisions
  • The deployer (your customer's HR team) must be able to review, override, or disregard the AI output
  • The system must not present AI outputs in a way that discourages override

In practice: if your product presents a candidate ranking without any mechanism for a human to review the underlying reasoning or override the outcome, that is a design flaw under the Act.
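One way to design for this is to keep the AI recommendation and the human decision as separate fields, so the final decision is never implicitly the AI output and overrides are visible in the data. A minimal sketch, with illustrative names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    """AI output plus the human decision layer. Names are illustrative.

    The AI produces a recommendation and a rationale the reviewer can
    inspect; the record is only final once a named reviewer decides.
    """
    candidate_id: str
    ai_recommendation: str          # e.g. "advance" / "reject"
    ai_rationale: str               # surfaced so the reviewer can judge it
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None

    def decide(self, reviewer: str, decision: str) -> None:
        self.reviewer = reviewer
        self.final_decision = decision  # may differ from the AI output

    @property
    def overridden(self) -> bool:
        return (self.final_decision is not None
                and self.final_decision != self.ai_recommendation)

r = ScreeningResult("cand-042", "reject", "low keyword match")
r.decide(reviewer="hr-lead-7", decision="advance")
print(r.overridden)  # True
```

Tracking the override rate across reviewers is also a useful post-deployment signal: a rate near zero can indicate the UI discourages disagreement with the AI.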

7. Conformity Assessment

For employment AI, the conformity assessment is a self-assessment based on internal control, not a third-party audit. It must still be documented, rigorous, and kept on file. You then produce a Declaration of Conformity and register the system in the EU AI database before deployment.

8. EU Database Registration

Before placing a high-risk AI system on the EU market, you must register it in the EU AI database managed by the European Commission. The database is publicly searchable — customers and regulators can look up registered systems.


Obligations for HR Departments Using AI Tools (Deployers)

If you are an HR team using third-party AI tools (not building them), you have obligations too:

  • Conduct a fundamental rights impact assessment before deploying high-risk AI
  • Ensure human oversight — someone with authority must be able to review and override AI outputs
  • Inform employees that AI is being used to monitor or assess them (this overlaps with GDPR requirements)
  • Use the system as intended — you cannot use the system for purposes the provider has not documented or approved

Practical Steps for HR SaaS Founders

By 2 August 2026:

  • Classify every AI feature — document which are high-risk
  • Implement bias evaluation in your training and testing data pipeline
  • Build or update your technical documentation to meet Article 11
  • Enable logging sufficient for post-deployment audit
  • Add human oversight mechanisms to the product UI
  • Write a Declaration of Conformity
  • Register in the EU AI database
  • Update your product terms to meet Article 13 transparency requirements

Recommended before that:

  • Engage a legal reviewer to validate your classification
  • Run a GDPR overlap review — HR AI has significant intersection with GDPR (employee data, special categories, automated decision-making under Article 22)

The GDPR Overlap

HR AI sits at the intersection of the AI Act and GDPR. GDPR Article 22 gives employees the right not to be subject to solely automated decisions that produce legal or similarly significant effects. For most HR AI tools, the system must support human review — which aligns with the AI Act's human oversight requirement, but the legal basis and requirements differ.

If your product involves special category data (health data in absence monitoring, disability data in accommodation tools), additional GDPR obligations apply.

ComplyOne classifies your AI systems against the EU AI Act risk tiers and generates the required documentation automatically.

Run your AI Act risk assessment →