
AI Risk Classification

The EU AI Act organizes AI systems into four risk tiers. Your tier determines which obligations apply — from no requirements at all to full conformity assessment before deployment.

Classification is not based on the underlying model or technology. It is based on the intended purpose of the AI system — what it does, in what context, for whom.

How classification works

The EU AI Act uses a risk-based model: higher potential for harm means more obligations. Classification is determined by the AI system's intended purpose — not by its technical architecture, the model size, or whether it uses machine learning.

Classification follows a decision tree. Work through the questions in order — stop at the first applicable tier:

1. Is the practice prohibited?

Does the system fall within Article 5 (prohibited practices)? → Unacceptable risk. Cannot be deployed.

2. Is it a safety component of a regulated product?

Is the system a safety component of a product covered by Annex I legislation AND does that product require third-party conformity assessment? → High risk.

3. Is the intended purpose in Annex III?

Does the system's intended purpose fall within any Annex III category? → High risk (unless a specific exception applies).

4. Does the system interact with natural persons or generate content?

Is it a chatbot, emotion recognition system, or deepfake generator? → Limited risk (transparency obligations only).

5. None of the above

→ Minimal risk. No mandatory obligations.

Intent governs classification. A general-purpose AI tool that is marketed and sold for use in employment screening is a high-risk AI system — even if the same underlying model is used in other applications that are minimal risk. The intended purpose disclosed in product documentation and marketing is the determining factor.
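
For teams that want to encode this screening step, the decision tree can be sketched as an ordered series of checks. The snippet below is a minimal illustration, not an implementation of the Act: the flag names, the classify helper, and the RiskTier enum are all hypothetical, the Annex III exceptions and the GPAI rules are not modelled, and the result is no substitute for legal review.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


def classify(system: dict) -> RiskTier:
    """Walk the decision tree in order and stop at the first applicable tier.

    `system` is a hypothetical flag-based description of the system's intended
    purpose; the keys are illustrative, not terminology from the Act.
    """
    # 1. Article 5 prohibited practice -> unacceptable risk
    if system.get("prohibited_practice"):
        return RiskTier.UNACCEPTABLE

    # 2. Safety component of an Annex I product that itself requires
    #    third-party conformity assessment -> high risk
    if system.get("annex_i_safety_component") and system.get("third_party_assessment_required"):
        return RiskTier.HIGH

    # 3. Intended purpose within an Annex III category -> high risk
    #    (the narrow Annex III exceptions are not modelled here)
    if system.get("annex_iii_purpose"):
        return RiskTier.HIGH

    # 4. Interacts with natural persons or generates synthetic content -> limited risk
    if any(system.get(key) for key in ("chatbot", "emotion_recognition", "deepfake_generator")):
        return RiskTier.LIMITED

    # 5. None of the above -> minimal risk
    return RiskTier.MINIMAL
```

With these illustrative flags, classify({"annex_iii_purpose": True}) returns RiskTier.HIGH, while an empty description falls through to RiskTier.MINIMAL.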

Unacceptable risk — Prohibited practices

Article 5 lists practices that are banned entirely. No compliance path exists — these systems cannot be placed on the market, put into service, or used in the EU. The prohibition applies from 2 February 2025.

Subliminal manipulation

AI that deploys subliminal techniques beyond a person's consciousness to materially distort their behaviour in a way that causes or is likely to cause harm.

Exploitation of vulnerabilities

AI exploiting vulnerabilities of specific groups (age, disability, social/economic situation) to distort behaviour likely to cause harm.

Social scoring

AI systems that evaluate or classify natural persons or groups over time based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified or disproportionate. The prohibition covers private actors as well as public authorities.

Real-time remote biometric identification in public spaces (law enforcement)

With narrow exceptions for targeted search for missing persons, preventing specific terrorist threats, or identifying perpetrators of serious crimes listed in the regulation.

Biometric categorisation inferring protected attributes

Categorising natural persons based on biometric data to infer race, political opinions, trade union membership, religion, sex life, or sexual orientation.

Predictive policing based solely on profiling

AI making predictions of criminal behaviour based solely on profiling a natural person or assessing their characteristics.

Facial recognition databases via untargeted scraping

Compiling or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Emotion recognition in workplaces and schools

AI used to infer the emotions of natural persons in the workplace or educational institutions — with narrow exceptions for medical or safety reasons.

High risk — Full obligations

High-risk AI systems trigger the full obligation set in Articles 9–25. These obligations must be satisfied before the system is placed on the market or put into service.

Art. 9 — Risk management

Ongoing risk identification, analysis, evaluation, and mitigation throughout the lifecycle.

Art. 10 — Data governance

Training data documentation, governance practices, bias examination, representativeness.

Art. 11 — Technical documentation

Detailed Annex IV documentation drawn up before deployment and kept current.

Art. 12 — Record-keeping

Automatic logging of events enabling post-deployment investigation.

Art. 13 — Transparency

Instructions for use enabling deployers to interpret outputs and maintain oversight.

Art. 14 — Human oversight

Technical design features ensuring humans can understand, monitor, and intervene.

Art. 15 — Accuracy & robustness

Appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle.

Art. 16–25 — Conformity & registration

Conformity assessment, CE marking, EU declaration of conformity, and registration in EU database.
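
Providers often track these obligations as a pre-market checklist. The structure below is a hypothetical sketch under that assumption: the dictionary, the article labels, and the outstanding_obligations helper are illustrative only, and real evidence management (versioned documents, review sign-off, notified-body records) is considerably more involved.

```python
# Hypothetical pre-market readiness checklist for a high-risk AI system.
# Article labels follow the summary above.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation (Annex IV)",
    "Art. 12": "Record-keeping / automatic logging",
    "Art. 13": "Transparency and instructions for use",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
    "Art. 16-25": "Conformity assessment, CE marking, declaration of conformity, EU database registration",
}


def outstanding_obligations(evidence: dict[str, bool]) -> list[str]:
    """Return the obligation groups that still lack supporting evidence."""
    return [article for article in HIGH_RISK_OBLIGATIONS if not evidence.get(article)]
```

In this sketch, an empty return value would mean every obligation group has evidence attached; anything else should block market placement.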

Annex I — Safety components of regulated products

An AI system is automatically high-risk if it is a safety component of a product governed by any of the product safety legislation listed in Annex I — AND if that product requires third-party conformity assessment under that legislation.

Machinery Regulation (EU) 2023/1230

Toys Directive 2009/48/EC

Recreational craft and personal watercraft Directive 2013/53/EU

Lifts Directive 2014/33/EU

Pressure equipment Directive 2014/68/EU

Radio equipment Directive 2014/53/EU

In vitro diagnostic medical devices Regulation (EU) 2017/746

Medical devices Regulation (EU) 2017/745

Aviation Regulation (EU) 2018/1139

Automotive safety Regulation (EU) 2019/2144

Agricultural vehicles Regulation (EU) 167/2013

Marine equipment Directive 2014/90/EU

Note: the AI system must be a safety component — not just any component. A non-safety AI feature in a medical device (e.g., scheduling reminders) is not automatically high-risk under Annex I.
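
The Annex I route is a two-part test, and the note above is easy to encode. A trivial, hypothetical illustration (the function name and arguments are assumptions, not terms from the Act):

```python
def annex_i_high_risk(is_safety_component: bool, product_needs_third_party_assessment: bool) -> bool:
    """Annex I route to high risk: both conditions must hold."""
    return is_safety_component and product_needs_third_party_assessment


# Per the note above: a scheduling-reminder feature inside a medical device is
# a component, but not a safety component, so the Annex I route does not apply.
assert annex_i_high_risk(False, True) is False
```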

Annex III — High-risk use case categories

AI systems whose intended purpose falls within the categories listed in Annex III are high-risk — regardless of whether they involve regulated product safety.

Category 1 — Biometric identification and categorisation
  • Remote biometric identification
  • Biometric categorisation inferring protected attributes (limited permitted uses)

Category 2 — Critical infrastructure management
  • Safety components of digital infrastructure
  • Road traffic, water, gas, heating, electricity supply

Category 3 — Education and vocational training
  • Access/admission decisions
  • Assessment of learning outcomes, student evaluation
  • Monitoring during exams

Category 4 — Employment and worker management
  • Recruitment and CV screening
  • Promotion decisions, task allocation
  • Monitoring and evaluating performance
  • Contract termination decisions

Category 5 — Access to essential private services and public services
  • Creditworthiness assessment
  • Life and health insurance risk assessment
  • Emergency services dispatch prioritisation
  • Benefits, services, assistance eligibility

Category 6 — Law enforcement
  • Polygraph-equivalent tools
  • Reliability assessment of evidence
  • Profiling in criminal investigations
  • Predicting criminal or reoffending behaviour

Category 7 — Migration, asylum, and border control management
  • Risk assessment of irregular migration
  • Visa and asylum application examination
  • Document authentication

Category 8 — Administration of justice and democratic processes
  • Judicial decisions or dispute resolution
  • Influencing elections or voting behaviour

Annex III can be updated by the European Commission through delegated acts. Providers should monitor for amendments — a use case that is minimal risk today may be reclassified as high-risk in a future update.

Limited risk — Transparency obligations

Limited risk systems are not prohibited and are not high-risk — but they interact with natural persons in ways that require transparency disclosure.

Chatbots and conversational AI

Providers and deployers must ensure that natural persons are informed they are interacting with an AI system — unless the AI nature is obvious from context.

Deepfake / synthetic media generators

AI-generated or manipulated image, audio, or video content must be labelled as artificially generated or manipulated — with exceptions for legitimate artistic expression.

Emotion recognition systems

Persons subject to emotion recognition systems must be informed of the operation of the system — with exceptions for security, defence, and military use.

Biometric categorisation (permitted uses)

Persons subject to permitted biometric categorisation must be informed, unless the system is used for crime prevention under law enforcement.
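
As a concrete illustration of the chatbot rule above, a deployer might gate each session behind a disclosure message. This is a hypothetical sketch: the disclosure wording and the context_makes_ai_obvious flag are assumptions, not text from the Act, and the other three transparency cases (deepfake labelling, emotion recognition, biometric categorisation) need their own handling.

```python
AI_DISCLOSURE = "You are interacting with an AI system."  # hypothetical wording


def session_disclosure(context_makes_ai_obvious: bool) -> str | None:
    """Return a disclosure to show at the start of a chat session, unless the
    AI nature is already obvious from the context of use."""
    return None if context_makes_ai_obvious else AI_DISCLOSURE
```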

Minimal risk — No mandatory obligations

The vast majority of AI systems fall in the minimal risk tier. There are no mandatory compliance obligations under the EU AI Act for these systems.

Voluntary adherence to codes of conduct — which providers are encouraged to adopt — can demonstrate responsible AI practice. But there is no legal requirement to do so.

Examples of minimal-risk systems include:

  • AI-enabled spam filters
  • Recommendation engines
  • AI in video games
  • Inventory management AI
  • Scheduling and logistics AI
  • Document summarisation tools
  • Customer service routing (non-emotion)
  • Agricultural yield prediction
  • B2B analytics tools

Classification checklist

Use this checklist to determine your AI system's risk tier. Work through the questions in order — stop at the first "yes".

Does the system engage in any of the 8 prohibited practices in Article 5?

→ Unacceptable risk. System cannot be deployed.

Is the system a safety component of a product listed in Annex I, and does that product require third-party conformity assessment?

→ High risk. Articles 9–25 apply.

Is the intended purpose of the system within any of the 8 Annex III categories (biometric, infrastructure, education, employment, services, law enforcement, migration, justice)?

→ High risk. Articles 9–25 apply.

Is the system a general-purpose AI (GPAI) model, i.e. a model trained on broad data at scale that can competently perform a wide range of distinct tasks?

→ Chapter V GPAI obligations apply. May also be subject to high-risk obligations if integrated into a high-risk system.

Is the system a chatbot that interacts with natural persons, an emotion recognition system, or a deepfake/synthetic media generator?

→ Limited risk. Transparency disclosure obligations apply.

None of the above apply.

→ Minimal risk. No mandatory obligations.

This checklist covers the core classification decision. Edge cases — including systems deployed in multiple jurisdictions, systems with multiple intended purposes, and systems modified after market placement — should be reviewed with legal counsel familiar with the EU AI Act.

Frequently asked questions

How do I know if my AI system is high-risk?

Work through the classification checklist above. An AI system is high-risk if it is a safety component of a regulated product (Annex I) or if its intended purpose falls within an Annex III category. If neither applies, your system is likely in the limited or minimal risk tier.

My system uses an LLM as a component. Is it high-risk?

Not necessarily. Risk tier is determined by the intended purpose of the AI system as a whole — not by the components it uses. An LLM-powered tool for document drafting is likely minimal risk. The same LLM integrated into a hiring decision tool is high-risk (Annex III, employment category).
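
Continuing the hypothetical classify sketch from the decision-tree section above (same illustrative flag names), the same underlying model lands in different tiers purely because of the declared purpose:

```python
# Assumes classify() and RiskTier from the earlier sketch are in scope.
drafting_tool = {}                          # document drafting: no Annex III purpose
hiring_tool = {"annex_iii_purpose": True}   # CV screening: Annex III employment category

assert classify(drafting_tool) == RiskTier.MINIMAL
assert classify(hiring_tool) == RiskTier.HIGH
```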

What does 'intended purpose' mean in practice?

Intended purpose is the use for which an AI system is designed by its provider, as specified in technical documentation, instructions for use, promotional materials, and public statements. It is not just the technical capability of the system — it is the marketed and documented use case. Misuse by a deployer (using a tool outside its intended purpose) may shift liability to the deployer.

Can a high-risk system be made lower risk by adding safeguards?

No. Safeguards do not change the risk tier — they are required obligations within the high-risk tier. If the intended purpose falls within Annex III, the system is high-risk regardless of safeguards applied. Adding safeguards means you are complying with high-risk obligations, not reclassifying.

Does dataset certification change my risk tier?

No. Dataset certification is a compliance tool within the high-risk tier — specifically addressing Article 10 (training data governance) and Article 11 (technical documentation) obligations. It does not determine classification. Classification is based entirely on intended purpose.
