Regulation (EU) 2024/1689 (hereinafter, “the Regulation” or “the AI Act”) is the first instrument providing uniform rules on artificial intelligence (AI) in the EU. It entered into force on August 1, 2024 and provides for a phased implementation, with most provisions applying from August 2, 2026.

The AI Act lays down a harmonized legal framework for the development, placing on the market, and putting into service of AI systems in the EU. Its main objective is to ensure that AI is used in accordance with the fundamental values of the Union, promoting the uptake of human-centric, trustworthy, and safe technology.

The Regulation classifies AI systems under a risk-based approach, according to the risk they pose to people’s fundamental rights, health, and safety:

  1. Minimal or no risk

Most AI applications fall into this category, such as recommendation systems, virtual assistants, and spam filters. They are not subject to particular restrictions, but may follow voluntary codes of conduct.

  2. Limited risk

These AI systems present certain risks, particularly of manipulation or falsification (e.g., chatbots and deepfakes). They must comply with transparency requirements, such as informing users that they are interacting with an AI.

  3. High risk

This group includes AI systems that, if not properly designed or managed, may entail significant risks to the health, safety, or fundamental rights of individuals.

These include, for example, AI systems used in healthcare, transport, or the administration of justice, as well as systems used for creditworthiness assessment (such as credit scoring systems) or in recruitment processes.

They are subject to strict compliance, documentation, testing, traceability and monitoring requirements.

  4. Unacceptable risk (prohibited)

This category includes deliberately manipulative systems (likely to materially distort the behavior of individuals), social scoring, real-time remote biometric identification (such as facial recognition) in publicly accessible spaces, predictive assessment of the risk of committing a crime, emotion recognition, and biometric categorization. These practices are prohibited because they violate fundamental rights.

To clarify the content of these prohibitions, on February 4, 2025 the European Commission adopted Guidelines that specify the scope of the bans, give examples of practices that are strictly forbidden and of those that may be permitted, and clarify the scope of application, the permitted derogations, and the actors involved.

As of August 2, 2025, non-compliance with the prohibition of these AI practices is subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

To complete the regulatory framework of the AI Act, on March 11, 2025 the European Commission also published the third draft of the Code of Practice for general-purpose AI (GPAI), which sets out duties for GPAI providers on transparency, risk assessment, and compliance with copyright law.

In parallel, the Commission endorsed a template for transparency in GPAI model training data, likewise addressed to companies producing GPAI models. In addition to drafting and keeping up to date the technical documentation of the model, and making information and documentation available to downstream providers wishing to integrate the general-purpose AI model into their own AI systems, GPAI providers will be required to implement a policy of compliance with Union copyright law and to draft and make publicly available a sufficiently detailed summary of the content used for training the general-purpose AI model, according to a template provided by the AI Office.