Updated August 2024

EU AI Act Compliance Checklist

A comprehensive compliance framework for the European Union Artificial Intelligence Act (Regulation (EU) 2024/1689), which establishes harmonized rules for AI systems throughout their lifecycle.

Unacceptable Risk

AI systems posing unacceptable risks to safety, livelihoods, or rights of individuals are prohibited under the EU AI Act (Article 5).

Examples:

  • AI systems using subliminal or manipulative techniques to distort behavior
  • AI systems exploiting vulnerabilities of specific groups (age, disability, socioeconomic)
  • Social scoring systems leading to detrimental treatment in unrelated contexts
  • Predictive policing based solely on profiling or personality traits
  • Untargeted scraping of facial images for recognition databases
  • Emotion recognition in workplaces and educational institutions (with exceptions)
  • Biometric categorization systems inferring sensitive characteristics (race, political opinions, etc.)
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)

Key Checkpoints:

  • Verify your AI system does not fall into any prohibited use case
  • Document the justification for why the system does not qualify as unacceptable risk
  • Conduct legal review to confirm classification
  • Review potential exemptions if applicable (e.g., law enforcement exceptions)
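As a first-pass screening aid (never a substitute for the legal review above), the prohibited-practice categories can be encoded as a simple checklist. The category keys and the `screen_system` function below are illustrative assumptions, not terms defined by the Act:

```python
# Illustrative first-pass screen against the Article 5 prohibited-practice
# categories. The category keys and this function are hypothetical aids;
# legal review is still required to confirm classification.

PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "Subliminal/manipulative techniques distorting behavior",
    "exploiting_vulnerabilities": "Exploiting vulnerabilities of specific groups",
    "social_scoring": "Social scoring with detrimental treatment in unrelated contexts",
    "predictive_policing_profiling": "Predictive policing based solely on profiling",
    "untargeted_face_scraping": "Untargeted scraping of facial images",
    "emotion_recognition_work_edu": "Emotion recognition in workplaces/education",
    "biometric_categorisation_sensitive": "Inferring sensitive traits from biometrics",
    "realtime_remote_biometric_id": "Real-time remote biometric ID in public spaces",
}

def screen_system(declared_uses: set[str]) -> list[str]:
    """Return the prohibited categories a system's declared uses touch."""
    return sorted(use for use in declared_uses if use in PROHIBITED_PRACTICES)
```

A system declaring `{"social_scoring", "spam_filtering"}` would be flagged only for `social_scoring`; an empty result supports (but does not prove) a non-prohibited classification.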

High Risk

AI systems defined in Article 6 and Annexes I & III with significant potential impact on health, safety, fundamental rights, or EU values.

Examples:

  • Safety components of products requiring third-party conformity assessment (machinery, toys, medical devices)
  • Critical infrastructure management (road traffic, water, gas, electricity supply)
  • Education and vocational training (admission, evaluation, monitoring)
  • Employment and worker management (recruitment, promotion, task allocation)
  • Essential services (public assistance, credit scoring, emergency services dispatch)
  • Law enforcement (risk assessment, polygraphs, evidence reliability evaluation)
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Key Checkpoints:

  • Implement risk management system (Article 9)
  • Ensure data governance with high-quality datasets (Article 10)
  • Create and maintain comprehensive technical documentation (Article 11, Annex IV)
  • Enable automatic logging and record-keeping (Article 12)
  • Ensure transparency and provision of information to users (Article 13)
  • Establish appropriate human oversight mechanisms (Article 14)
  • Achieve required levels of accuracy, robustness, and cybersecurity (Article 15)
  • Register in EU database if applicable (Article 49)
  • Conduct conformity assessment (Article 43)
  • Draw up EU declaration of conformity
  • Affix CE marking (Article 48)
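The Article 12 logging checkpoint is one of the few above that maps directly to code. The sketch below shows one minimal way to keep timestamped, append-only records of system events; the JSON-lines format and field names are assumptions for illustration, not a format mandated by the Act:

```python
# Minimal append-only event log illustrating the kind of automatic
# record-keeping Article 12 calls for (timestamped records covering each
# period of use). Field names and the JSON-lines layout are assumptions.
import json
import time

def log_event(path: str, system_id: str, event: str, detail: dict) -> dict:
    """Append one timestamped record to a JSON-lines audit log."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event": event,          # e.g. "inference", "human_override"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # append-only, one record per line
    return record
```

In practice such logs would also need tamper protection and retention controls appropriate to the system's risk profile.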

Limited Risk

AI systems with specific transparency obligations under Article 50 to ensure users are informed and content is properly labeled.

Examples:

  • AI systems interacting with humans (e.g., chatbots, virtual assistants)
  • Emotion recognition systems (when permitted)
  • Biometric categorization systems (when permitted)
  • AI-generated or manipulated content (deep fakes, synthetic media)
  • AI-generated text for public information purposes

Key Checkpoints:

  • Ensure users know they're interacting with an AI system (unless obvious)
  • Inform subjects when using emotion recognition or biometric categorization
  • Disclose when content (image, audio, video) is artificially generated/manipulated
  • Label AI-generated text for public information (unless human-reviewed)
  • Document transparency measures implemented
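A minimal sketch of the content-disclosure checkpoint: wrapping generated output with a machine-readable marker and a human-readable notice. The field names below are assumptions for illustration; Article 50 requires that such content be detectable as artificially generated but does not prescribe this schema:

```python
# Illustrative machine-readable disclosure wrapper for AI-generated content.
# Article 50 requires marking such content as artificially generated;
# the exact fields below are assumptions, not a mandated format.
def label_ai_content(content: str, generator: str) -> dict:
    """Attach an AI-generation disclosure to a piece of generated content."""
    return {
        "content": content,
        "ai_generated": True,   # machine-readable flag for downstream tooling
        "generator": generator,
        "disclosure": "This content was generated by an AI system.",
    }
```

Real deployments would typically pair a metadata flag like this with in-band marking (e.g. visible labels or watermarking) so the disclosure survives copying.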

Minimal/No Risk

The majority of AI systems fall into this category and face minimal regulatory requirements beyond voluntary codes of conduct (Article 95).

Examples:

  • AI-enabled spam filters
  • AI-enabled inventory management
  • Basic recommendation systems
  • AI for games and entertainment
  • Most standard business applications of AI

Key Checkpoints:

  • Follow voluntary codes of conduct (Article 95)
  • Consider adopting standards for responsible AI
  • Document compliance with general data protection and consumer protection laws

General-Purpose AI

AI models displaying significant generality, capable of competently performing a wide range of distinct tasks, that can be integrated into downstream systems (Chapter V).

Examples:

  • Large language models
  • Multimodal foundation models
  • General-purpose image generation models
  • Models with high-impact capabilities (training computation > 10^25 FLOPs)
  • Models designated as systemic risk by the Commission

Key Checkpoints:

  • Create and maintain technical documentation (Annex XI)
  • Provide information to downstream AI system providers (Annex XII)
  • Establish copyright compliance policy
  • Publish summary of training content
  • For systemic risk models: perform evaluations, assess/mitigate risks, ensure cybersecurity, report incidents
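The 10^25 FLOPs presumption above can be sanity-checked with the widely used ~6 × parameters × training-tokens approximation for dense transformer training compute. Both the helper names and the heuristic are illustrative; actual compute accounting should follow the Commission's methodology:

```python
# Back-of-envelope check against the 10^25 FLOPs systemic-risk presumption
# for GPAI models. Uses the common ~6 * N_params * N_tokens approximation
# for dense transformer training compute; this heuristic and the function
# names are illustrative, not the Act's accounting method.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per param per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS
```

For example, a 70-billion-parameter model trained on 15 trillion tokens lands around 6.3 × 10^24 FLOPs, just below the threshold; modestly larger runs cross it.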

Comprehensive Compliance Checklist

Use this detailed checklist to ensure your AI systems meet all applicable requirements of the EU AI Act. Implementation details will vary based on your system's risk category and specific use case.

EU AI Act Implementation Timeline

Follow this recommended timeline to achieve compliance with the EU AI Act requirements through a structured approach.

Aug 2024

Entry into Force

The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal on July 12, 2024, and entered into force 20 days later, on August 1, 2024.

Feb 2025

Prohibitions & General Provisions

Compliance required for prohibitions (Article 5) and general provisions (Chapter I) from February 2, 2025 (6 months after entry into force).

Aug 2025

GPAI & Governance

Obligations for GPAI models (Chapter V), governance structures, and penalties apply from August 2, 2025 (12 months after entry into force).

Aug 2026

General Application

Full application of the AI Act, including most high-risk system requirements, from August 2, 2026 (24 months after entry into force).

Aug 2027

Product Safety Components

Extended deadline for high-risk AI systems that are safety components of products covered by Annex I legislation (36 months after entry into force).

Aug 2030

Legacy Systems

Extended compliance deadline for high-risk AI systems intended for use by public authorities that were placed on the market or put into service before the general application date.

How Arelis AI Platform Simplifies EU AI Act Compliance

Our comprehensive compliance platform provides the tools, automation, and expertise you need to navigate the complex requirements of the EU AI Act.

Role & Scope Assessment

Interactive tool to determine your organization's roles (provider, deployer, etc.) and which parts of the EU AI Act apply to your AI systems.

Automated Risk Classification

AI-powered assessment against Articles 5-6 and Annexes I-III criteria, providing accurate risk categorization with detailed justification.

Compliance Documentation Generator

Automated creation of required technical documentation, EU declarations, and Fundamental Rights Impact Assessments (FRIAs).

GPAI Model Compliance

Specialized tools for GPAI model providers to meet documentation, information sharing, and systemic risk assessment requirements.

Human Oversight Workflows

Configurable oversight mechanisms with role-based access, intervention protocols, and comprehensive audit logging.

Transparency Compliance Tools

Solutions for automatic disclosure and content labeling to meet specific transparency obligations under Article 50.

Continuous Monitoring Dashboard

Real-time visibility into AI system performance, compliance status, and automated alerts for potential issues requiring attention.

Regulatory Update Tracking

Automated monitoring of EU AI Act interpretations, guidelines from the AI Office and Board, and implementation timelines.

Frequently Asked Questions

Get answers to common questions about EU AI Act compliance and how Arelis can help your organization.