The EU AI Act (Regulation (EU) 2024/1689) introduces obligations not just for companies that build AI systems, but for every organisation that uses them. The regulation calls these organisations "deployers", and if your company uses AI-powered tools in its operations, you are one. This article explains the specific obligations deployers face under Art. 26 and Art. 27, the difference between a DPIA and a Fundamental Rights Impact Assessment (FRIA), and the practical steps you need to take before August 2, 2026.
Who is a deployer?
Art. 3(4) defines a deployer as a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the system is used in the course of a personal, non-professional activity. You did not build the AI, but you decided to deploy it in your organisation, and that makes you responsible for how it operates and how it affects people.
If your HR team uses AI-powered candidate screening, your finance department uses AI credit scoring, or your customer service runs an AI chatbot — your organisation is a deployer for each of those systems. The obligations scale with risk: minimal-risk systems require little, but high-risk systems demand structured documentation, human oversight, and impact assessments.
Art. 26: Core deployer obligations
Article 26 sets out what every deployer of a high-risk AI system must do. These are not suggestions — they are legal requirements with enforcement behind them.
Use in accordance with instructions. Deployers must operate high-risk AI systems in line with the provider's instructions for use. This means reading the documentation, understanding the system's intended purpose and limitations, and not repurposing it for uses the provider did not intend or validate.
Ensure human oversight. Assign competent individuals to oversee the AI system's operation. These people must have the authority, competence, and resources to override or disregard the system's output. Document who these individuals are, what training they have received, and what their escalation process is when the AI produces incorrect or biased results.
Monitor operation. Deployers must monitor the AI system for risks during operation. If the system starts producing unexpected outputs, shows signs of bias, or operates outside its intended parameters, you must act — and you must have processes in place to detect these issues.
Keep logs. Automatically generated logs must be retained for a period appropriate to the intended purpose of the high-risk AI system, and for at least six months, unless applicable Union or national law provides otherwise. These logs must be available to supervisory authorities on request.
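As an illustration of how the six-month floor can be wired into routine compliance checks, here is a minimal Python sketch. The function name and the idea of a per-system configured retention period are assumptions for illustration, not anything prescribed by the Act.

```python
from datetime import timedelta

# Art. 26(6): logs must be kept for a period appropriate to the system's
# intended purpose, and in any case for at least six months.
SIX_MONTHS = timedelta(days=183)

def check_retention_policy(system_name: str, configured_retention: timedelta) -> list[str]:
    """Return findings for one system's log retention policy.

    `configured_retention` is the period your logging pipeline actually
    enforces for this system (a hypothetical input, set per deployment).
    """
    findings = []
    if configured_retention < SIX_MONTHS:
        findings.append(
            f"{system_name}: retention of {configured_retention.days} days is "
            "below the six-month minimum in Art. 26(6)"
        )
    return findings
```

Running a check like this against every high-risk system in your inventory turns the retention rule into a periodic review item rather than a one-off configuration decision.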
Conduct a Data Protection Impact Assessment. Where a high-risk AI system processes personal data in a way that triggers GDPR Art. 35, deployers must carry out a DPIA. This is not new: GDPR already required it. What the AI Act adds, in Art. 26(9), is that deployers must use the information supplied by the provider, including the instructions for use, to carry out that assessment.
Inform affected persons. Individuals subject to decisions made or assisted by a high-risk AI system must be informed that the system is being used, unless this is already apparent from the circumstances.
Art. 27: Fundamental Rights Impact Assessment
Article 27 introduces a new obligation that goes beyond GDPR: the Fundamental Rights Impact Assessment (FRIA). It applies to deployers of high-risk AI systems that are bodies governed by public law or private entities providing public services, and to deployers of the high-risk systems listed in Annex III points 5(b) and 5(c): creditworthiness evaluation and credit scoring, and risk assessment and pricing in life and health insurance.
The FRIA requires deployers to assess the impact of the AI system on fundamental rights, including the rights to non-discrimination, privacy, freedom of expression, and human dignity, before putting the system into use. Once the assessment is done, the deployer must notify the market surveillance authority of its results.
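The applicability test in Art. 27(1) can be summarised as a decision rule. The sketch below is a simplification for orientation only, not legal advice; the parameter names are hypothetical, and borderline cases belong with your legal team.

```python
def requires_fria(
    is_high_risk: bool,
    is_public_body: bool,
    provides_public_services: bool,
    annex_iii_point: str | None,
) -> bool:
    """Rough reading of Art. 27(1); `annex_iii_point` is the Annex III entry
    the system falls under, e.g. "5b" (credit scoring) or "5c" (life and
    health insurance risk assessment and pricing)."""
    if not is_high_risk:
        return False
    if annex_iii_point == "2":
        return False  # critical-infrastructure systems are carved out of Art. 27
    if is_public_body or provides_public_services:
        return True
    # The listed purposes trigger the FRIA for private deployers too:
    return annex_iii_point in {"5b", "5c"}
```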
FRIA vs DPIA: what is the difference?
These two assessments serve different purposes and neither replaces the other.
A DPIA (GDPR Art. 35) focuses on data protection risks. It asks: what personal data is processed, what are the risks to individuals' privacy, and what measures mitigate those risks? It covers data minimisation, storage limitation, access controls, and breach scenarios.
A FRIA (AI Act Art. 27) focuses on the broader impact of AI-driven decisions on fundamental rights. It asks: could this AI system discriminate against protected groups? Could it affect access to essential services? Could it undermine human autonomy or dignity? The FRIA looks at the decision-making impact, not just the data processing.
In practice, a high-risk AI system that processes personal data will require both assessments. They share some ground — both consider risks to individuals — but they approach the question from different angles. Your DPIA protects data subjects. Your FRIA protects fundamental rights holders. Often these are the same people, but the analysis is different.
What deployers need to document
For every AI system in your organisation, you should be able to answer these questions (a minimal record sketch follows the list):
- Provider and model. Who built the AI system? What version are you using?
- Intended purpose. What do you use it for? Is this within the provider's intended use?
- Risk classification. Is it minimal, limited, or high-risk under the AI Act? Document your reasoning.
- Affected persons. Who is affected by the system's outputs or decisions? Employees, customers, applicants, the public?
- Human oversight. Who reviews AI output? What is their authority to override? What training have they received?
- Conformity status. Has the provider completed a conformity assessment? Is there a CE marking? Is the system registered in the EU AI database?
- DPIA status. Has a Data Protection Impact Assessment been completed for this system?
- FRIA status. If required, has a Fundamental Rights Impact Assessment been completed and submitted?
- Logs and monitoring. Are logs being retained? Who monitors the system's operation?
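One way to keep those answers in a consistent shape is one structured record per system. The dataclass below is a hypothetical Python sketch mirroring the checklist; the field names are illustrative and not drawn from any official template.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One documentation record per AI system, mirroring the checklist above."""
    name: str
    provider: str                  # who built the system
    model_version: str
    intended_purpose: str
    within_provider_scope: bool    # used within the provider's intended use?
    risk_class: str                # "minimal" | "limited" | "high"
    risk_reasoning: str            # why you classified it this way
    affected_persons: list[str] = field(default_factory=list)
    oversight_owner: str = ""      # who may override the system's output
    conformity_assessed: bool = False
    ce_marking: bool = False
    eu_database_registered: bool = False
    dpia_completed: bool = False
    fria_required: bool = False
    fria_submitted: bool = False
    logs_retained: bool = False
```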
Practical steps for deployers
Step 1: Inventory your AI systems
You cannot govern what you have not mapped. List every AI-powered service in your organisation — from enterprise platforms with embedded AI to individual tools adopted by teams. Add each one to your data service register with its purpose, data categories, and access controls.
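Continuing the hypothetical AISystemRecord sketch above, the inventory itself can be as simple as a list of records, one per system. The two entries below are invented examples, not recommendations.

```python
inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="candidate-screening",
        provider="ExampleVendor",  # hypothetical vendor
        model_version="2.3",
        intended_purpose="Rank incoming job applications for HR review",
        within_provider_scope=True,
        risk_class="high",
        risk_reasoning="Annex III point 4: employment and workers' management",
        affected_persons=["job applicants"],
    ),
    AISystemRecord(
        name="support-chatbot",
        provider="ExampleVendor",
        model_version="1.0",
        intended_purpose="Answer routine customer questions",
        within_provider_scope=True,
        risk_class="limited",
        risk_reasoning="Conversational AI: Art. 50 transparency, not Annex III",
        affected_persons=["customers"],
    ),
]
```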
Step 2: Classify risk
For each AI system, determine its risk level under the AI Act. High-risk categories are listed in Annex III and include AI used for recruitment, credit scoring, access to essential services, and biometric identification. Document your classification and the reasoning behind it.
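Classification is ultimately a legal judgment, but a crude keyword triage can help surface candidates for review during the inventory pass. The mapping below is a deliberately incomplete, illustrative sketch, not an authoritative reading of Annex III; every hit, and every miss, still needs human review.

```python
# First-pass triage only: this keyword map is illustrative and incomplete.
ANNEX_III_TRIGGERS = {
    "recruitment": "point 4: employment and workers' management",
    "credit scoring": "point 5(b): creditworthiness evaluation",
    "insurance pricing": "point 5(c): life and health insurance risk assessment",
    "biometric identification": "point 1: biometrics",
    "essential services": "point 5(a): essential public services and benefits",
}

def triage_risk(use_case_description: str) -> str | None:
    """Return a possible Annex III match for manual legal review, or None."""
    text = use_case_description.lower()
    for keyword, annex_entry in ANNEX_III_TRIGGERS.items():
        if keyword in text:
            return annex_entry
    return None
```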
Step 3: Document purpose and affected persons
For each AI system, record its intended purpose, the categories of people affected by its outputs, and whether it makes or supports decisions about individuals. This documentation is the foundation of both your DPIA and your FRIA.
Step 4: Conduct impact assessments
For high-risk AI systems: complete a DPIA under GDPR Art. 35 and, where Art. 27 applies, a FRIA. These should reference each other but remain separate assessments with distinct scopes.
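To connect the assessments back to the inventory, a simple gap check over the hypothetical records sketched earlier can flag high-risk systems missing either assessment. This mirrors the kind of rule a risk register would apply.

```python
def assessment_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """Flag high-risk systems missing a DPIA, or a FRIA where one is required."""
    gaps = []
    for rec in inventory:
        if rec.risk_class != "high":
            continue
        if not rec.dpia_completed:
            gaps.append(f"{rec.name}: high-risk but no DPIA on file")
        if rec.fria_required and not rec.fria_submitted:
            gaps.append(f"{rec.name}: FRIA required but not yet submitted")
    return gaps
```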
Step 5: Track conformity and governance
Maintain records of provider conformity declarations, CE markings, and registrations in the EU AI database. Store these documents alongside your DPAs and SLAs — they form part of your vendor governance chain.
How Readmodel® supports AI deployer compliance
Readmodel® treats AI services as part of your data map, not a separate compliance silo. When you add an AI service to your project, you can document its risk classification, human oversight arrangements, and conformity status alongside the data items it processes, the legal bases, and the retention periods.
The risk register automatically flags AI services that are classified as high-risk but lack a documented DPIA. The document storage feature lets you attach conformity declarations, provider instructions for use, and FRIA documentation directly to the relevant service. The ROPA export includes AI governance fields, so your Article 30 register reflects both GDPR and AI Act requirements in a single document.
AI governance does not require a separate platform or a six-figure consulting engagement. It requires structured documentation integrated with your existing data map. Readmodel® provides that structure — create a free account and start documenting your AI deployer obligations today.
The deadline is approaching
The deployer obligations under Art. 26 take effect on August 2, 2026. If you already maintain a GDPR data map, you have the foundation. Extend it with AI-specific fields — risk classification, human oversight, conformity status — and conduct the required impact assessments. The regulators will expect documentation, not promises. Start now.