EU AI Act Explained
The EU AI Act is the world's first comprehensive AI regulation — a risk-based framework that imposes binding technical, governance, and transparency obligations on AI systems placed on the EU market.
This guide covers the risk tier architecture, the articles that create direct technical obligations, the enforcement timeline, and what high-risk classification means in practice.
What the EU AI Act is
The EU AI Act (Regulation 2024/1689) is a directly applicable EU regulation that governs AI systems placed on or put into service in the EU market. It entered into force on 1 August 2024, with obligations phased in over 36 months.
Like GDPR, the Act has extraterritorial reach: it applies to any provider whose AI system is used in the EU, regardless of where the provider is established. A US company deploying an AI system to EU users must comply.
The Act is structured around risk: the higher the risk of harm, the more obligations apply. Most AI systems fall in the minimal risk tier — no mandatory obligations. High-risk systems face significant technical, governance, and documentation requirements before deployment.
113
Articles in the regulation
13
Annexes covering prohibited uses, high-risk categories, and standards
€35M
Maximum fine for prohibited-practice violations (or 7% of global turnover, whichever is higher)
Four-tier risk architecture
The Act classifies AI systems into four risk tiers. Classification determines which obligations apply. Most AI products are in the limited or minimal tier — no mandatory compliance obligations, though best practice still applies.
Unacceptable risk
Prohibited
- Social scoring by public authorities
- Real-time biometric ID in public spaces (law enforcement, with narrow exceptions)
- Subliminal manipulation exploiting vulnerabilities
- AI that infers protected characteristics from biometrics to discriminate
Banned entirely. No compliance path exists.
High risk
Heavy obligations
- AI in medical devices, safety systems, critical infrastructure
- AI for employment decisions (CV screening, promotion)
- Credit scoring, insurance risk assessment
- Law enforcement, border control, judicial AI
Articles 9–25 apply. Risk management, data governance, technical documentation, conformity assessment required before deployment.
Limited risk
Transparency only
- Chatbots — must disclose AI nature to users
- Deepfake generation — must label synthetic media
- Emotion recognition systems — must disclose to affected persons
Transparency disclosure obligations only. No pre-deployment conformity assessment.
Minimal risk
No mandatory obligations
- AI-enabled spam filters
- Recommendation systems
- AI in video games
- Most B2B AI tooling
No mandatory obligations. Voluntary codes of conduct available.
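The tier logic above is a precedence check: test the most severe tier first, and fall through to minimal risk. The sketch below is illustrative only, not legal analysis — the use-case keywords and the `RiskTier` names are simplified assumptions standing in for the Act's actual legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified keyword sets standing in for the Act's legal definitions.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation", "realtime_public_biometric_id"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "medical_device", "border_control"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify(use_case: str) -> RiskTier:
    """Map a use case to its risk tier, checking the most severe tier first."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("cv_screening").value)  # high
print(classify("spam_filter").value)   # minimal
```

Ordering matters: a CV-screening chatbot, for instance, must be tested against the high-risk list before the transparency list.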
Annex III — High-risk use cases
Annex III lists the specific categories that automatically qualify as high-risk. If your AI system falls within one of these categories, Articles 9–25 apply in full.
Biometric identification
Remote biometric identification and categorisation of natural persons
Critical infrastructure
Safety components of critical infrastructure (energy, water, transport, finance)
Education
Admission decisions, student assessment, evaluation of educational institutions
Employment
Recruitment, CV screening, promotion decisions, task allocation, performance monitoring
Essential services
Creditworthiness assessment, insurance risk, emergency services dispatch
Law enforcement
Polygraph-equivalent tools, evidence reliability assessment, predictive policing, crime analytics
Migration & asylum
Border control, visa and asylum application assessment
Justice & democracy
Administration of justice, democratic process influence
Article 10 — Training data governance
Article 10 is the training data article. It is one of the most technically demanding obligations in the Act for AI teams — it requires documented governance of every dataset used to train, validate, or test a high-risk AI system.
10(2) — Data governance practices
Providers must apply governance practices covering design choices, collection methods, data provenance, processing operations, scope, main characteristics, and possible shortcomings of each dataset.
10(3) — Relevant and appropriate
Data must be relevant to the intended purpose of the AI system. Generic datasets applied to out-of-scope domains are non-compliant.
10(3) — Sufficiently representative
Data must represent the populations, geographies, and scenarios the system will encounter in deployment. Skewed training populations create biased outputs.
10(3) — Free of errors and complete
To the best extent possible, training data must be examined for errors, completeness gaps, and discriminatory bias — with examination results documented.
10(5) — Sensitive attributes
Processing of special categories of personal data, strictly for bias detection and correction, is permitted under strict conditions and with appropriate safeguards.
Publicly available data
Publicly available data used for training must meet the same requirements. Public availability does not exempt a dataset from governance obligations.
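The documentation duties above can be captured as one record per dataset. This is a minimal sketch, assuming a format of our own devising: the `DatasetGovernanceRecord` fields and the `is_documented` check are illustrative, not a structure prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetGovernanceRecord:
    """Illustrative per-dataset record of the evidence Article 10 expects."""
    name: str
    intended_purpose: str           # relevance to the system's intended purpose
    collection_method: str          # provenance and collection practices
    populations_covered: list[str]  # representativeness evidence
    bias_examination: str           # summary of bias/error examination and results
    known_shortcomings: list[str] = field(default_factory=list)

    def is_documented(self) -> bool:
        # Minimal completeness check: every narrative field must be filled in.
        return all([self.intended_purpose, self.collection_method,
                    self.populations_covered, self.bias_examination])

record = DatasetGovernanceRecord(
    name="loan-applications-2023",
    intended_purpose="creditworthiness assessment",
    collection_method="application forms, consented, EU-sourced",
    populations_covered=["EU adults, all member states"],
    bias_examination="disparate impact tested across age and gender; results archived",
)
print(record.is_documented())  # True
```

Keeping one such record per training, validation, and test dataset gives auditors a direct mapping from datasets to the Article 10 paragraphs above.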
Article 12 — Automatic logging
Article 12 requires that high-risk AI systems have automatic logging capabilities. Logs must capture relevant system events throughout the system's operational lifetime to enable post-hoc investigation and regulatory review.
| Log requirement | Technical implementation |
|---|---|
| Activation / deactivation periods | Timestamp when system starts and stops processing — tied to run IDs |
| Input data characteristics | Record of input features, not necessarily raw data — hash or schema reference |
| Decisions taken by the system | Output of high-risk decisions with traceability to the responsible natural person |
| Changes to the system | Model version, dataset version, parameter changes — all with timestamps |
| Retention period | Determined by intended purpose; at minimum, for the duration of the system's expected operational life |
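One way to realise the table's rows is a structured log record that fingerprints inputs rather than storing them raw. A sketch under assumed field names — `make_log_entry` is hypothetical, not an API defined by the Act or any standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_log_entry(run_id: str, model_version: str,
                   inputs: dict, decision: str, operator: str) -> dict:
    """Build an Article 12-style log record: timestamp, run ID, input
    fingerprint, decision, model version, and the responsible person."""
    # Hash a canonical serialisation of the inputs: characteristics, not raw data.
    input_fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "run_id": run_id,
        "input_sha256": input_fingerprint,
        "decision": decision,
        "model_version": model_version,
        "responsible_person": operator,
    }

entry = make_log_entry("run-0042", "v2.1.0",
                       {"applicant_age": 34, "income": 52000},
                       "approve", "j.doe@example.com")
print(entry["input_sha256"][:16])
```

Appending such records to an append-only store ties activation periods, decisions, and system changes to run IDs, as the table requires.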
The certification_id of the training dataset becomes an immutable log entry, traceable across audit trails. Any model trained on a certified dataset can reference the certificate in its logs.
Article 13 — Transparency and provision of information
Article 13 requires that high-risk AI systems be designed to be sufficiently transparent that deployers can interpret the system's output and use it appropriately.
The article mandates that providers supply deployers with instructions for use — a user-facing document covering:
- Identity and contact details of the provider
- Characteristics, capabilities, and limitations of the system
- Performance levels with respect to specific persons or groups
- Input data specifications — what data the system is designed to process
- Changes to the system and its performance after deployment
- Human oversight measures — how humans remain in the loop
- The expected lifetime of the system and maintenance requirements
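The disclosure list above maps naturally onto a fill-in template with a completeness check. A minimal sketch; the section keys are our own labels, not names taken from the Act:

```python
# Illustrative skeleton of an Article 13 instructions-for-use document,
# one key per required disclosure. Key names are assumptions.
INSTRUCTIONS_FOR_USE_SECTIONS = [
    "provider_identity",                    # name and contact details
    "capabilities_and_limitations",
    "performance_by_group",                 # performance w.r.t. specific persons/groups
    "input_specifications",                 # data the system is designed to process
    "post_deployment_changes",
    "human_oversight_measures",
    "expected_lifetime_and_maintenance",
]

def missing_sections(doc: dict) -> list[str]:
    """Return the required sections that are absent or empty."""
    return [k for k in INSTRUCTIONS_FOR_USE_SECTIONS if not doc.get(k)]

draft = {"provider_identity": "ACME GmbH, compliance@acme.example"}
print(missing_sections(draft))
```

Gating releases on an empty `missing_sections` result keeps the deployer-facing document in step with the system it ships with.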
For training data specifically, Article 13 transparency means that the datasets used must be describable to deployers. Certified synthetic datasets — with their documented algorithm, schema, and integrity score — provide the audit trail needed to satisfy this disclosure requirement.
Article 11 — Technical documentation
Article 11 requires providers of high-risk AI systems to draw up technical documentation before the system is placed on the market or put into service. The documentation must be kept up to date throughout the system's lifetime.
The required content is detailed in Annex IV. Key elements include:
General description
Intended purpose, version history, system architecture
Training data description
Datasets used, data governance applied, characteristics and limitations
Validation and testing
Validation datasets, testing procedures, metrics, and results
Risk management
Known risks, risk mitigation measures applied, residual risk
Human oversight measures
Technical features enabling human control and intervention
Accuracy and robustness
Performance metrics, discrimination metrics, cybersecurity measures
GPAI models — General Purpose AI
Chapter V of the Act creates a separate obligation tier for GPAI models — large models trained on broad data at scale, capable of a wide range of tasks (e.g., large language models, large vision models).
All GPAI models
- Technical documentation of training, testing, and architecture
- Information and documentation for downstream providers who integrate the model
- A policy to comply with EU copyright law, honouring text-and-data-mining opt-outs
- A publicly available, sufficiently detailed summary of training content
GPAI with systemic risk
Triggered when training compute exceeds 10²⁵ FLOPs
- Adversarial testing and red-teaming
- Serious incident reporting to the AI Office
- Cybersecurity protection measures
- Energy efficiency reporting
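The 10²⁵ FLOPs trigger can be estimated before training with the common rule of thumb that dense-transformer training compute is roughly 6 × parameters × training tokens. That heuristic is a community convention, not part of the Act, so treat the sketch below as a rough screening check only:

```python
# Presumption threshold for systemic risk (training compute, in FLOPs).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate: dense-transformer training compute ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True when the estimate meets or exceeds the Act's presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```

Providers near the boundary should track actual cumulative compute rather than rely on the estimate.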
GPAI obligations apply to the model provider — not to organizations that use GPAI models as components in their own applications (deployers). However, deployers who build high-risk AI systems on top of GPAI models are responsible for compliance of the downstream system under the high-risk tier.
Enforcement timeline
The Act phases in obligations across three years. The most demanding high-risk obligations apply from August 2026.
1 Aug 2024
Entry into force
Regulation 2024/1689 published in the Official Journal and enters into force.
2 Feb 2025
Prohibited practices ban
Prohibited AI practices (Article 5) are banned. Providers must cease prohibited uses or face fines.
2 Aug 2025
GPAI model obligations
Chapter V GPAI obligations apply. GPAI providers must publish technical documentation and maintain EU copyright compliance.
2 Aug 2026
High-risk AI system obligations
Articles 9–25 apply in full. Risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), logging (Art. 12), transparency (Art. 13), and conformity assessment required before deployment.
2 Aug 2027
Full application
Remaining provisions fully applicable. All AI systems covered by the Act are subject to applicable obligations.
Frequently asked questions
Does the EU AI Act apply to companies outside the EU?
Yes. The Act has extraterritorial reach — it applies to any provider whose AI system produces outputs used in the EU, regardless of where the provider is established. A US company whose AI system is accessed by EU users must comply with the obligations applicable to its system's risk tier.
What is the difference between a provider and a deployer?
A provider places an AI system on the market or into service. A deployer uses the system under their own authority for a specific purpose. Both have obligations — providers for the system itself, deployers for the use context. When an organization fine-tunes or significantly modifies a third-party model, they may become a provider for the modified version.
What does 'conformity assessment' mean for high-risk systems?
Conformity assessment is the process of verifying that a high-risk AI system complies with all applicable requirements before deployment. Most high-risk systems undergo self-assessment by the provider. Systems in certain categories (biometric identification, critical infrastructure safety) require assessment by a notified third-party body.
Can open-source AI models benefit from exemptions?
The Act provides limited exemptions for open-source models that are not placed on the market for profit. However, providers who place modified versions of open-source models on the market, or who use open-source models in high-risk applications, remain fully subject to applicable obligations.
What are the penalties for non-compliance?
Prohibited practices violations: up to €35 million or 7% of global annual turnover, whichever is higher. High-risk obligations violations: up to €15 million or 3% of turnover. Supplying incorrect information to authorities: up to €7.5 million or 1% of turnover. For SMEs, the lower of the two amounts applies.
Continue reading
AI Risk Classification
Four-tier risk model in depth — how to determine your system's tier.
AI Regulation Primer
EU AI Act, NIST RMF, and the global regulatory landscape.
Dataset Certification
Ed25519 signing, SHA-256 fingerprinting, and verification.
EU AI Act Compliance Guide
Full compliance guide for high-risk AI system providers.
Article 12 — Logging
Automatic logging obligations in full.
Article 11 — Documentation
Technical documentation for conformity assessment.