Fintech companies face a double compliance challenge under the EU AI Act. Credit scoring, loan eligibility, insurance underwriting, and fraud detection tools may all fall within the high-risk category under Annex III — on top of existing DORA and GDPR obligations that already apply to the sector.
If your fintech platform makes or influences access to financial services for individuals, this article explains what the AI Act requires from you.
Which Fintech AI Features Are High-Risk
The AI Act classifies as high-risk any AI system used to evaluate the creditworthiness of natural persons or establish their credit score (Annex III, point 5(b)), along with AI used for risk assessment and pricing of life and health insurance for natural persons (point 5(c)) — both treated as gatekeepers to essential private services.
| Feature | High-risk? | Notes |
|---|---|---|
| Credit scoring / loan eligibility | Yes | Explicitly named in Annex III |
| Insurance risk assessment for individuals | Yes | Life and health insurance — Annex III 5(c) |
| Fraud detection flagging individuals | Possibly | High-risk if it leads to service denial or account closure |
| AML/transaction monitoring with individual consequences | Possibly | Review the link to individual decisions |
| Robo-advisory for investment | Possibly — if affecting access to services | Context-dependent |
| Algorithmic trading (no individual assessment) | No | Systematic, not individual-level decisions |
| KYC identity verification | Possibly — if biometric identification | Remote biometric identification falls under Annex III; pure 1:1 verification is expressly excepted |
| Customer segmentation / marketing analytics | No | No individual consequential decision |
| Chatbot customer service | No (disclosure required) | Article 50 transparency obligations |
The Credit Scoring Red Line
Any AI system that generates a score, probability, or classification used to determine whether an individual gets access to credit, insurance, or essential financial services is unambiguously high-risk.
This includes:
- Consumer lending eligibility tools
- Buy Now Pay Later risk engines
- Life and health insurance premium calculation using individual AI-driven risk scores
- Open banking apps that generate affordability assessments for third parties
It does not include tools that analyse aggregate market data, provide investment analytics, or run portfolio-level risk models without individual-level consequence.
Fraud Detection: A Nuanced Case
Fraud detection is expressly carved out of the credit scoring entry in Annex III — point 5(b) excepts AI systems used for detecting financial fraud — but it still sits in a grey zone, because other high-risk categories can apply. If your fraud AI:
- Causes an account to be frozen or a transaction to be blocked (access to essential services)
- Results in a customer being denied service or reported to authorities (law enforcement category)
- Is used to assess individual risk profiles in a way that affects access to financial products
...then a conservative legal assessment treats it as high-risk — or, at a minimum, the classification warrants a documented analysis. Build that documentation now.
The DORA Overlap
Most fintech platforms operating in the EU already face Digital Operational Resilience Act (DORA) requirements, which have applied since 17 January 2025. DORA and the AI Act overlap in two places:
ICT risk management: DORA requires you to document and manage ICT risks across your third-party supply chain. AI systems — including the AI models you use internally — are part of that ICT supply chain. A vendor providing an AI credit scoring model to your platform is an ICT third-party provider under DORA.
Operational resilience of AI systems: Under the AI Act, high-risk AI systems must be robust to errors, faults, and inconsistencies. DORA requires operational continuity. The documentation for both overlaps — a single well-structured technical file can serve both frameworks.
What Fintech AI Providers Must Do
If you provide high-risk AI to fintech customers (even if you are not a fintech yourself), the full provider obligations for high-risk systems under the AI Act fall on you.
Risk Management
Implement an ongoing risk management process that identifies and mitigates risks related to accuracy, bias, and failure modes in your model. For credit scoring, this includes testing for discriminatory outcomes across protected characteristics (gender, ethnicity, age, nationality).
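As a concrete illustration, a minimal outcome-parity check over model decisions might look like the sketch below. The group labels, decision format, and 0.8 threshold are illustrative choices, not figures the Act prescribes:

```python
# Illustrative outcome-parity check for a credit model's decisions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs with approved as bool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
ratios = disparate_impact(rates, "A")
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups below threshold
```

Checks like this belong in the ongoing risk management process, rerun on live decision data, with results feeding the technical file.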
Data Governance
Training data must be:
- Representative of the population the model will be used on
- Evaluated for bias across protected characteristics
- Documented — provenance, selection criteria, preprocessing steps
Credit models trained on historical data that reflects past discriminatory lending practices will carry that bias forward unless corrected. This is both an AI Act compliance issue and a significant legal liability.
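A representativeness check is one concrete way to document the first two points. The sketch below compares training-data group shares against a target population; the group key, tolerance, and figures are illustrative:

```python
# Illustrative representativeness check for training data.
def group_shares(records, key):
    """Share of each group value among the records."""
    total = len(records)
    shares = {}
    for r in records:
        shares[r[key]] = shares.get(r[key], 0) + 1
    return {g: n / total for g, n in shares.items()}

def representation_gaps(train_shares, population_shares, tolerance=0.05):
    """Groups whose training share deviates from the population share by
    more than the tolerance — candidates for re-sampling or documentation."""
    gaps = {}
    for g, pop_share in population_shares.items():
        diff = train_shares.get(g, 0.0) - pop_share
        if abs(diff) > tolerance:
            gaps[g] = round(diff, 3)
    return gaps

train = [{"age_band": "18-30"}] * 8 + [{"age_band": "60+"}] * 2
shares = group_shares(train, "age_band")
gaps = representation_gaps(shares, {"18-30": 0.5, "60+": 0.5})
```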
Technical Documentation
Article 11 documentation for a credit AI system includes:
- Model architecture and training methodology
- Performance metrics (accuracy, precision, recall) broken down by demographic group
- Known failure modes and edge cases
- Validation data and testing results
- Bias evaluation methodology and results
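The per-demographic metrics in the list above can be produced with a small report function like this sketch (group labels and data are illustrative):

```python
# Illustrative per-group performance report for a binary credit classifier.
def per_group_metrics(samples):
    """samples: iterable of (group, y_true, y_pred) with 0/1 labels.
    Returns precision and recall per group."""
    stats = {}  # group -> (tp, fp, fn)
    for group, y, p in samples:
        tp, fp, fn = stats.get(group, (0, 0, 0))
        stats[group] = (tp + (y == 1 and p == 1),
                        fp + (y == 0 and p == 1),
                        fn + (y == 1 and p == 0))
    report = {}
    for g, (tp, fp, fn) in stats.items():
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        report[g] = {"precision": precision, "recall": recall}
    return report

samples = [("A", 1, 1), ("A", 0, 1), ("A", 1, 0),
           ("B", 1, 1), ("B", 1, 1), ("B", 0, 0)]
report = per_group_metrics(samples)
```

A large gap between groups in this report is exactly the kind of finding the bias evaluation section of the Article 11 file needs to address.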
Human Oversight
Credit and insurance decisions influenced by AI must allow human review. Lending decisions cannot be fully automated without the ability for a qualified person to review, challenge, and override the AI output. This connects to GDPR Article 22 (right not to be subject to solely automated decisions).
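Structurally, this means the model output is only a recommendation, and a named reviewer records the final decision. A minimal sketch of such a gate, with illustrative class and field names:

```python
# Illustrative human-in-the-loop gate for credit decisions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    applicant_id: str
    score: float
    recommended_approve: bool

@dataclass
class FinalDecision:
    applicant_id: str
    approved: bool
    reviewer: str
    overrode_ai: bool

def review(rec: AIRecommendation, reviewer: str,
           approve: Optional[bool] = None) -> FinalDecision:
    """Record a human decision; approve=None means the reviewer confirms
    the AI output, anything else overrides it."""
    approved = rec.recommended_approve if approve is None else approve
    return FinalDecision(rec.applicant_id, approved, reviewer,
                         overrode_ai=(approved != rec.recommended_approve))

rec = AIRecommendation("app-001", 0.31, recommended_approve=False)
decision = review(rec, reviewer="analyst-7", approve=True)  # human override
```

Keeping the `overrode_ai` flag in the decision record also gives you the audit trail both the AI Act and GDPR accountability expect.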
Explanation Rights
Where your AI system influences a credit or insurance decision, the affected individual has rights under both the AI Act and GDPR. They can request an explanation of the decision. Your system must generate meaningful explanations — not just a score, but the factors that drove it.
This is both a compliance obligation and a product requirement. Build explainability into your model architecture, not as an afterthought.
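For a linear scoring model, factor-level explanations fall out of the weights directly, as in the sketch below (feature names and weights are illustrative; nonlinear models need dedicated attribution methods instead):

```python
# Illustrative factor-level explanation for a linear credit score.
def explain(weights, features, top_n=3):
    """Rank features by the magnitude of their contribution to the score."""
    contributions = {f: weights[f] * v for f, v in features.items()}
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

weights = {"debt_to_income": -2.0, "missed_payments": -1.5,
           "account_age_years": 0.3}
features = {"debt_to_income": 0.6, "missed_payments": 2,
            "account_age_years": 4}
top_factors = explain(weights, features)
```

The ranked factors, translated into plain language, are what an individual exercising their explanation rights should actually receive — not the raw score.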
Conformity Assessment and Registration
Before deploying a high-risk fintech AI system in the EU:
- Complete the internal-control (self-assessment) conformity procedure
- Produce a Declaration of Conformity
- Register the system in the EU AI database
- Affix CE marking to relevant documentation
Obligations for Fintechs Using Third-Party AI (Deployers)
If you buy credit scoring or fraud detection AI from a vendor and deploy it in your product:
- You must verify the provider has completed conformity assessment
- You must conduct a fundamental rights impact assessment before deployment
- You must ensure human oversight in your operational process
- You must inform affected individuals that AI is involved in decisions
- You are responsible for how the system is used, even if you did not build it
Due diligence question for any AI vendor: "Can you provide your EU AI Act Declaration of Conformity and EU database registration number?" If they cannot, they are not compliant and your deployment is at risk.
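That due diligence can be made routine with a simple evidence gate in vendor onboarding. The evidence keys below are illustrative labels for documents the deployer should collect, not terms from the Act:

```python
# Illustrative vendor due-diligence gate for AI Act evidence.
REQUIRED_EVIDENCE = ("declaration_of_conformity",
                     "eu_database_registration",
                     "technical_documentation_summary")

def vendor_gaps(vendor_docs):
    """Return required AI Act evidence items the vendor has not supplied."""
    return [item for item in REQUIRED_EVIDENCE if not vendor_docs.get(item)]

gaps = vendor_gaps({"declaration_of_conformity": "DoC-2026-014"})
```

An onboarding pipeline can simply refuse to activate an integration while `vendor_gaps` returns a non-empty list.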
Timeline for Fintech AI Compliance
| Deadline | What it means for fintech AI |
|---|---|
| 17 January 2025 | DORA already in force — ICT risk documentation for AI systems required now |
| 2 August 2025 | GPAI model obligations apply — assess any foundation model use |
| 2 August 2026 | High-risk AI compliance mandatory — all fintech AI in Annex III must be compliant |
| 2 August 2027 | Extended deadline: high-risk AI embedded in Annex I regulated products, and GPAI models already on the market before 2 August 2025 |
For fintech companies already managing DORA compliance, the AI Act documentation overlaps significantly. A combined approach — one technical file serving both frameworks — is more efficient than two separate compliance tracks.