Your employees use ChatGPT to draft emails, Copilot to write code, and AI-powered analytics to forecast demand. These tools are already embedded in daily operations — often adopted bottom-up, without formal approval or documentation. The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, and its obligations are rolling out in phases. It introduces rules not just for AI developers, but for every organisation that uses AI systems. The regulation calls these organisations "deployers."

Most small and mid-sized businesses are deployers. You did not build ChatGPT or your AI-powered HR screening tool — but you chose to use them, and that makes you responsible for how they affect people. The obligations are manageable, but they require documentation that most SMBs do not yet have. This guide explains what you need to do: inventory your AI systems, classify their risk, document human oversight, and integrate all of it with your existing GDPR data mapping.

The AI Act timeline — what applies when?

The EU AI Act does not take effect all at once. It was published in the Official Journal on July 12, 2024, entered into force on August 1, 2024, and its provisions apply on a staggered schedule:

  • February 2, 2025 — Prohibited AI practices (Art. 5) and AI literacy requirements (Art. 4) already apply. If your staff use AI systems, they must already have sufficient training.
  • August 2, 2025 — Rules for general-purpose AI models (like the foundation models behind ChatGPT and Gemini) take effect.
  • August 2, 2026 — The bulk of the regulation applies: deployer obligations (Art. 26), high-risk AI system requirements, transparency obligations (Art. 50), and the full compliance framework. This is the deadline most SMBs need to prepare for.
  • August 2, 2027 — Obligations for high-risk AI systems that are safety components of products already regulated under EU product legislation (e.g. medical devices, machinery).

Where are we now? As of the publication of this article (April 2026), the prohibited practices and AI literacy requirements are already in force. You have approximately four months until the deployer obligations and high-risk requirements take effect on August 2, 2026.

The EU AI Act risk classification

The AI Act sorts every AI system into one of four risk levels. Your obligations depend entirely on which level applies.

Unacceptable risk (banned). These systems are prohibited outright: social scoring, subliminal manipulation techniques that cause harm, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions). If you are an SMB, you are unlikely to encounter these, but verify that no vendor tool you use crosses the line.

High-risk. This is where obligations become substantial. AI systems used for hiring and recruitment decisions, credit scoring and creditworthiness assessment, access to essential private and public services, and biometric identification all fall into this category. High-risk systems require a conformity assessment (carried out by the provider), a Data Protection Impact Assessment where GDPR requires one, documented human oversight, and audit logs retained for at least six months. If your company uses AI to screen CVs, score loan applications, or determine customer eligibility for services, you are operating a high-risk AI system.

Limited risk. Under Article 50 (transparency obligations), chatbots and conversational AI must disclose to users that they are interacting with an AI. AI-generated content (deepfakes, synthetic media) must be clearly labelled. This applies to customer-facing chatbots, AI assistants on your website, and any AI-generated marketing material.

Minimal risk. No specific requirements. Spam filters, AI-powered search, content recommendation engines, and grammar checkers fall here. Most everyday AI tools land in this category.

The key question for every SMB: do any of your AI tools make or directly support decisions about people? If yes, you may be in high-risk territory regardless of how simple the tool appears.
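
To make that triage concrete, here is a minimal first-pass screen sketched in Python. The tier names mirror the four levels above; the function and its rule are ours and deliberately crude: it flags candidates for proper review, it does not produce a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def first_pass_screen(decides_about_people: bool,
                      user_facing_chatbot: bool) -> RiskTier:
    """Crude first pass: escalate anything that touches decisions
    about people; route chatbots to the Art. 50 transparency check."""
    if decides_about_people:
        return RiskTier.HIGH      # candidate only: needs a full Annex III review
    if user_facing_chatbot:
        return RiskTier.LIMITED   # disclosure duties likely apply
    return RiskTier.MINIMAL

# A CV-screening tool is escalated; a grammar checker is not.
print(first_pass_screen(True, False))   # RiskTier.HIGH
print(first_pass_screen(False, False))  # RiskTier.MINIMAL
```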

What deployers must document (Art. 26)

Article 26 of the AI Act anchors the deployer obligations. The six points below are the practical core; note that the sixth, AI literacy, comes from Article 4 rather than Article 26. None of them require you to build AI expertise from scratch, but all of them require documentation. A sketch of a single compliance record covering these obligations follows the list.

1. Maintain an AI inventory. List every service in your organisation that uses AI, what it does, and what data it processes. This is not optional — it is the foundation of everything else. If you already maintain a data service register for GDPR, extend it. AI services belong in the same map alongside your CRM, email provider, and cloud storage.

2. Classify each system by risk level. For every AI service in your inventory, determine whether it falls under unacceptable, high-risk, limited, or minimal risk. Document your reasoning. A customer support chatbot is limited risk. An AI tool that scores job applicants is high-risk.

3. Document human oversight for high-risk systems. Who reviews AI output before it affects a person? What is the escalation process when the AI produces an incorrect or biased result? This must be written down, assigned to named roles, and actually followed.

4. Keep audit logs. For high-risk AI systems, retain logs for a minimum of six months. These logs must be sufficient to reconstruct how the system reached a particular output. Most AI providers generate logs — your job is to ensure they are retained and accessible.

5. Report serious incidents. If a high-risk AI system causes or contributes to a serious incident (harm to health, safety, fundamental rights, or property), you must report it to the AI provider, the importer or distributor, and the relevant market surveillance authority.

6. Ensure AI literacy (Art. 4). Staff who operate or oversee AI systems must have sufficient understanding of how those systems work, their limitations, and the risks they pose. This does not mean everyone needs a machine learning degree — it means structured training proportionate to the role. Note: unlike the other obligations above, AI literacy requirements already took effect on February 2, 2025 — this is not a future deadline but a current obligation.
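
As promised above the list, here is a minimal sketch of one compliance record covering the documentation the six obligations call for, written as a Python dataclass (Python 3.10+ syntax). The schema and the example values are illustrative, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                      # obligation 1: inventory
    purpose: str
    data_processed: list[str]
    risk_level: str                # obligation 2: classification...
    classification_rationale: str  #   ...with documented reasoning
    oversight_owner: str           # obligation 3: a named role
    escalation_process: str
    log_retention_months: int      # obligation 4: at least 6 for high-risk
    incident_contact: str          # obligation 5: who reports, to whom
    staff_trained: bool            # obligation 6: AI literacy (Art. 4)

cv_screener = AISystemRecord(
    name="CV screening tool",
    purpose="Rank inbound job applications",
    data_processed=["CVs", "contact details", "employment history"],
    risk_level="high",
    classification_rationale="Recruitment decisions about natural persons",
    oversight_owner="HR manager",
    escalation_process="Recruiter reviews every ranking before rejection",
    log_retention_months=6,
    incident_contact="DPO; reports to provider and surveillance authority",
    staff_trained=True,
)
```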

The GDPR connection

The AI Act does not replace GDPR — it layers on top of it. If your AI system processes personal data (and almost all of them do), GDPR applies in full alongside the AI Act.

Article 22 of the GDPR is particularly relevant: individuals have the right not to be subject to a decision based solely on automated processing that produces legal effects or similarly significant effects. If AI makes or materially influences decisions about hiring, loan approval, insurance eligibility, or access to services, you need a legal basis for that processing, a documented human review process, and the right for individuals to contest the decision. Article 35 GDPR additionally requires a Data Protection Impact Assessment when automated processing is likely to result in a high risk to individuals, which includes most AI-driven decision-making about people.

These are not new obligations — GDPR has required them since 2018. But the AI Act raises the stakes and the specificity of what documentation supervisory authorities expect. Your ROPA should already cover the processing activities. The ROPA tool guide explains how to structure that register. Your data map should already include the services. The GDPR data mapping guide covers the methodology.

What changes with the AI Act is that AI-specific fields — risk classification, human oversight arrangements, AI literacy training — must now be part of that documentation.

4 steps to AI governance for your organisation

Step 1: Inventory AI services

Start by identifying every AI-powered service in your organisation. The obvious ones come first: ChatGPT, Microsoft Copilot, Google Gemini. Then go deeper. Does your CRM use AI for lead scoring? Does your HR platform use AI to rank candidates? Does your analytics tool use machine learning for predictions?

Add each AI service to your data map with its purpose, the data it processes, and who has access to it. Readmodel® includes service templates for ChatGPT, Claude, Gemini, Copilot, Mistral, and Perplexity — pre-configured with the right data items and classifications. Pick from the library and you have your AI inventory started in minutes.
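
If you want to start even simpler than a tool, a first inventory is just structured data. The sketch below uses hypothetical entries and shows the minimum fields worth capturing per service; in practice these rows belong in the same register as your other data services.

```python
# A starting AI inventory as plain data. Entries and field names are
# illustrative; extend your existing GDPR service register rather than
# maintaining a separate list.
ai_inventory = [
    {"service": "ChatGPT", "purpose": "Drafting emails and copy",
     "data": ["draft text", "possibly customer names"], "access": "all staff"},
    {"service": "CRM lead scoring", "purpose": "Prioritise sales leads",
     "data": ["contact details", "interaction history"], "access": "sales"},
    {"service": "HR candidate ranking", "purpose": "Rank job applicants",
     "data": ["CVs", "employment history"], "access": "HR"},
]

# Sanity check: every entry must say what data the service touches.
for entry in ai_inventory:
    assert entry["data"], f"{entry['service']}: data items missing"
```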

Step 2: Classify risk

For each AI service, assign the EU AI Act risk level: unacceptable, high-risk, limited, or minimal. Be honest about how the tool is actually used, not just how the vendor markets it. A chatbot used for customer FAQ is limited risk. The same chatbot used to triage insurance claims is potentially high-risk.

Document your classification reasoning. When an auditor asks why you classified a particular system as minimal risk, "because it seemed low-risk" is not an acceptable answer.
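
What documented reasoning can look like in practice: a short, dated record per system. The fields below are a suggestion, not a mandated format.

```python
# A documented classification decision, sketched as plain data. The point
# is the audit trail: actual use, the reasoning, who decided, and when.
classification = {
    "service": "Website chatbot",
    "risk_level": "limited",
    "actual_use": "Answers product FAQs; makes no decisions about people",
    "reasoning": "Conversational AI without decision-making: Art. 50 "
                 "transparency applies, Annex III does not",
    "classified_by": "Data protection lead",
    "date": "2026-04-15",
}
```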

Step 3: Document human oversight

For every high-risk and limited-risk AI system, document who is responsible for reviewing AI output. Be specific: name the role, describe the review process, and define what happens when the AI is wrong. For decision-support systems (AI recommends, human decides), document how the human verifies the recommendation. For automated decision systems, document the appeal process and how affected individuals can request human review.
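
For illustration, a minimal oversight record for the CV-screening example used earlier. The role names and the process described are hypothetical, not requirements taken from the Act.

```python
# An oversight record for one high-risk system. Duties sit with roles,
# not individuals, and the failure path is written down.
oversight = {
    "system": "CV screening tool",
    "reviewer_role": "Senior recruiter",
    "review_step": "Reads the AI shortlist before any rejection is sent",
    "when_ai_is_wrong": "Reviewer overrides the score and records a "
                        "one-line reason for the override",
    "appeal_process": "Candidates can request human re-review within "
                      "30 days of a decision",
}
```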

Step 4: Integrate with your ROPA

AI processing activities are processing activities. They belong in your Article 30 register alongside everything else. For each AI service, your ROPA should include the categories of personal data processed, the legal basis, the retention period, whether data is transferred outside the EU (many major AI providers process data in the US), and the AI-specific fields: risk classification, human oversight, and AI literacy status.
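
Concretely, an Article 30 row extended with AI fields might look like the sketch below. The column names and the legal basis shown are illustrative; your actual basis depends on the processing.

```python
ropa_row = {
    # GDPR (Art. 30) fields
    "processing_activity": "AI-assisted candidate screening",
    "data_categories": ["CVs", "contact details", "employment history"],
    "legal_basis": "Art. 6(1)(b) GDPR (steps prior to a contract)",
    "retention": "6 months after the vacancy closes",
    "third_country_transfer": "Yes: US-based provider, SCCs in place",
    # AI Act fields appended to the same register
    "ai_risk_level": "high",
    "human_oversight": "Senior recruiter reviews every shortlist",
    "ai_literacy_status": "HR team trained February 2026",
}
```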

Readmodel® generates ROPA exports that include AI governance fields automatically. If an AI service is classified as high-risk but has no documented DPIA, the risk register flags it. If a service has no human oversight documented, it appears as an action item. Create a free account and see how your AI services integrate into the same compliance workflow as the rest of your data map.

What Readmodel® provides

Readmodel® treats AI services as first-class citizens in your data map. The template library includes pre-configured AI service templates with accurate data items and classifications. The risk register automatically flags AI services that lack human oversight documentation or require a DPIA. The ROPA export includes AI-specific columns so your Article 30 register reflects both GDPR and AI Act requirements.

All AI governance fields are available on every plan, including the free Explore plan. You do not need an enterprise budget to document AI governance — you need a structured tool and thirty minutes.

See how Readmodel® compares to other GDPR compliance tools, or create your free account and start mapping your AI services today.

Start now

The AI Act deadline is August 2, 2026. Supervisory authorities will expect documentation, not good intentions. The good news: if you already maintain a GDPR data map, you are halfway there. Add your AI services, classify their risk, document human oversight, and integrate the results into your ROPA. That is AI governance for an SMB — practical, proportionate, and built on the compliance foundation you already have.