EU AI Act Risk Classifier
Classify your AI system under the EU AI Act in about 90 seconds. The tool gives a strict legal classification across all four risk categories: Unacceptable (Art. 5), High-risk (Art. 6 + Annex III), Limited risk (Art. 50), and Minimal risk.
Does your AI system involve any of these prohibited practices?
Article 5 of the AI Act bans these uses outright. Tick any that apply, or "None of these" if none do.
Is your AI system a safety component of a regulated product?
Article 6(1) classifies as high-risk any AI system that is itself a product, or a safety component of a product, covered by the Union harmonisation legislation listed in Annex I and required to undergo a third-party conformity assessment under that legislation. Examples: medical devices, in-vitro diagnostics, machinery, lifts, toys, radio equipment, civil aviation security, motor vehicles, agricultural vehicles, marine equipment, pressure equipment.
Does your AI system fall into any of the Annex III high-risk areas?
Annex III lists eight high-risk use-case categories. Tick any that apply.
Does your AI system interact directly with people?
Examples: chatbots, voice assistants, customer-support agents. Article 50(1) requires that such systems disclose to the user, in a clear and timely way, that they are interacting with an AI, unless this is obvious from the circumstances.
Does your AI generate or manipulate image, audio or video content?
Examples: image generators, voice cloning, video deepfakes, synthetic music. Article 50(2) requires AI-generated synthetic content to be marked as such in a machine-readable way.
Does your AI generate or manipulate text intended to inform the public on matters of public interest?
Examples: AI-written news articles, AI-generated public-policy summaries, AI-translated official communications. Article 50(4) requires that such text be disclosed as AI-generated, unless a human exercises editorial responsibility for the publication.
How the four EU AI Act risk categories work
The EU AI Act (Regulation 2024/1689) sorts every AI system into one of four risk levels. The level decides which compliance obligations apply.
Unacceptable risk (Article 5)
Banned outright. Eight categories of practice are listed in Article 5, including social scoring, subliminal manipulation, exploitation of vulnerabilities, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorisation to infer sensitive attributes, real-time remote biometric identification in publicly accessible spaces, and individual criminal-risk profiling. If your system involves any of these, it cannot lawfully be placed on the EU market or put into service.
High-risk (Article 6 + Annex III)
Heavy compliance obligations apply. There are two routes to high-risk status: the system is a product, or a safety component of a product, covered by EU harmonisation legislation (Article 6(1)), or the system is used in one of the eight Annex III areas — biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, administration of justice. Deployer obligations under Article 26 include keeping an AI inventory, risk classification, human oversight, retaining automatically generated logs for at least six months, and incident reporting, alongside the AI literacy duty in Article 4. A Fundamental Rights Impact Assessment under Article 27 may also apply. The bulk of these obligations becomes enforceable on August 2, 2026.
Limited risk (Article 50)
Transparency obligations only. Conversational AI must disclose that it is an AI to the human it interacts with (Art. 50(1)). AI-generated synthetic image, audio and video content must be marked as such in a machine-readable way (Art. 50(2)). AI-generated text that informs the public on matters of public interest must be disclosed as AI-generated unless a human takes editorial responsibility (Art. 50(4)).
Minimal risk
No specific compliance obligations under the AI Act. Most everyday business AI tools — spam filters, content recommendation, grammar checking, search ranking — fall here. Note that AI literacy under Article 4 still applies to staff who operate any AI system, regardless of risk level.
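The decision order described in the four categories above can be sketched as a short function. This is an illustrative sketch only: the function and parameter names are ours, each answer is reduced to a boolean, and in practice obligations can stack (a high-risk system may also carry Article 50 transparency duties), so the result is a starting point rather than a legal conclusion.

```python
from enum import Enum


class Risk(Enum):
    UNACCEPTABLE = "unacceptable"  # Article 5 prohibited practice
    HIGH = "high"                  # Article 6(1) or Annex III
    LIMITED = "limited"            # Article 50 transparency duties
    MINIMAL = "minimal"            # no specific AI Act obligations


def classify(prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_area: bool,
             interacts_with_people: bool,
             generates_synthetic_media: bool,
             generates_public_interest_text: bool) -> Risk:
    # Article 5 prohibitions trump everything else.
    if prohibited_practice:
        return Risk.UNACCEPTABLE
    # Either route (Article 6(1) product/safety component, or an
    # Annex III area) yields high-risk status.
    if annex_i_safety_component or annex_iii_area:
        return Risk.HIGH
    # Article 50 transparency-only obligations.
    if (interacts_with_people or generates_synthetic_media
            or generates_public_interest_text):
        return Risk.LIMITED
    return Risk.MINIMAL
```

For example, a chatbot used in recruitment screening would answer True to both the Annex III and the interaction questions; the function returns HIGH because the high-risk check runs first, even though the Article 50(1) disclosure duty still applies on top.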
How to use the result
The classification produced by this tool is a starting point — a strict legal reading of how the EU AI Act applies to your stated use. It is not a substitute for advice from a qualified lawyer or DPO, particularly in edge cases (mixed-use systems, products that span multiple Annex III categories, novel use cases not yet addressed by EDPB or AI Office guidance).
If your AI system is High-risk or Limited risk, the next step is to document it. Readmodel®'s free Explore plan includes an AI Register that captures the classification, the human-oversight arrangement, the AI literacy status of operators, and any Fundamental Rights Impact Assessment. The AI Register is part of the same data map as your services, devices, and data items — so the AI Act and GDPR documentation share a single source of truth.
This classifier reflects the EU AI Act as adopted (Regulation 2024/1689) and the implementation timeline as of April 2026. The tool is provided for informational purposes only and does not constitute legal advice. National implementing measures and AI Office guidance may add detail; check current sources before relying on a classification for compliance decisions.