What is the EU AI Act?
The EU AI Act regulates the development, placing on the market, putting into service, and use of AI systems in the EU. It is a “horizontal” piece of legislation, meaning it applies to any industry using AI rather than targeting a specific sector. The EU AI Act follows a risk-based approach, categorising AI systems based on their potential to harm health, safety, and fundamental rights.
What is the territorial scope of the AI Act?
The AI Act applies to:
- Providers and deployers of AI systems established in the EU.
- Providers and deployers established outside the EU where the output produced by their AI system is used in the EU. In other words, the AI system’s impact must occur within the EU.
Which AI systems are classified as “prohibited” under the EU AI Act?
The EU AI Act bans all AI systems deemed to pose an “unacceptable risk.” The prohibition covers AI practices that:
- Deploy subliminal or purposefully manipulative or deceptive techniques;
- Exploit vulnerable groups (children, people with disabilities, persons with a specific social or economic situation);
- Enable social scoring;
- Predict individual criminal behaviour without objective justification;
- Use emotion recognition in workplaces and educational institutions (except for medical or safety reasons);
- Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
- Categorise individuals based on their biometric data to deduce or infer sensitive attributes (race, religion, sexual orientation). There are exceptions related to law enforcement;
- Use real-time remote biometric identification in public spaces for law enforcement (with limited exceptions).
Note that the provisions on prohibited AI practices come into effect on 02 February 2025.
What are high-risk AI systems?
These systems pose significant risks to health, safety, or fundamental rights. The EU AI Act identifies two broad categories of high-risk AI systems:
- AI systems that are products, or safety components of products, covered by the EU health and safety harmonisation legislation listed in Annex I to the EU AI Act (such as medical devices, machinery, or toys), where those products require a third-party conformity assessment; and
- Specific AI systems, listed in Annex III to the EU AI Act, comprising the following:
- Remote biometric identification, biometric categorisation, and emotion recognition (subject to exceptions and prohibitions);
- Safety components in the management and operation of critical infrastructure. This includes digital infrastructure, road traffic, the supply of water, gas, heating, or electricity;
- Education and vocational training;
- Employment, workers’ management and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits (including assessing creditworthiness and credit scores, and risk assessment and pricing in relation to life and health insurance);
- Law enforcement;
- Migration, asylum and border control management;
- Administration of justice and democratic processes.
Are there any exceptions?
Yes, in relation to the systems falling under Annex III. The exception applies to AI systems that do not pose a significant risk of harm to health, safety or fundamental rights of persons, including by not materially influencing the outcome of a decision.
The derogation applies if the AI system is intended to:
- perform a narrow procedural task; or
- improve the result of a previously completed human activity; or
- detect decision-making patterns or deviations from prior decision-making patterns, provided it is not intended to replace or influence a previously completed human assessment without proper human review; or
- perform a preparatory task to an assessment relevant for the systems set out in Annex III.
Nonetheless, the derogation does not apply if the AI system performs profiling of natural persons; such a system is always classified as high-risk.
What about AI systems that do not fall under the prohibited or high-risk categories?
Apart from the “prohibited” and “high-risk” categories, the EU AI Act identifies AI systems that pose a “transparency risk”. These systems interact with natural persons in ways that create risks of impersonation or deception. They include AI systems:
- intended to interact directly with persons (such as chatbots); and/or
- that generate or manipulate image, audio or video content (for example, deepfakes); and/or
- that are emotion recognition systems or biometric categorisation systems.
Any other AI system that does not fit into any of the prohibited, high-risk, or transparency risk categories and presents a low risk is deemed to be “minimal risk.”
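For organisations mapping an AI inventory against these tiers, the four categories can be thought of as a simple triage flow. The sketch below is a minimal, hypothetical illustration only, not legal logic: the RiskTier labels and the classify_risk_tier helper are our own constructs, and any real classification must follow the Act’s detailed criteria (and, in practice, legal advice).

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical labels mirroring the EU AI Act's four risk tiers."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY_RISK = "transparency risk"
    MINIMAL_RISK = "minimal risk"

def classify_risk_tier(
    prohibited_practice: bool,   # e.g. social scoring, subliminal manipulation
    annex_i_or_iii: bool,        # Annex I product/safety component, or Annex III use case
    annex_iii_derogation: bool,  # narrow procedural task etc.; Annex III only, never with profiling
    transparency_duty: bool,     # e.g. chatbots, deepfakes, emotion recognition
) -> RiskTier:
    """Illustrative triage only; a real assessment requires legal analysis."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_i_or_iii and not annex_iii_derogation:
        return RiskTier.HIGH_RISK
    if transparency_duty:
        return RiskTier.TRANSPARENCY_RISK
    return RiskTier.MINIMAL_RISK

# Example: a customer-service chatbot outside Annex III
print(classify_risk_tier(False, False, False, True))  # RiskTier.TRANSPARENCY_RISK
```

Note that this flattened flow is simplified: the derogation only exists for Annex III systems, and transparency obligations can apply in parallel to a high-risk classification.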
Are General-Purpose AI Models governed by the EU AI Act?
Yes, Chapter V of the EU AI Act regulates General-Purpose AI Models, and these rules will become effective as from 02 August 2025.
The EU AI Act distinguishes between general-purpose AI models and general-purpose AI models with systemic risk. A model falls into the latter category if it:
- has high-impact capabilities, which are presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations; or
- is designated by a Commission decision, taken on the Commission’s own initiative or following a qualified alert from the scientific panel, as having capabilities equivalent to those in the first bullet, having regard to a set of defined criteria.
Where a general-purpose AI model presents systemic risk, its provider is subject to additional obligations.
The European Artificial Intelligence Office is in the process of drawing up Codes of Practice for general-purpose AI models, using the input of national experts. We are proud to announce that Kenneth Shaw from Blue Arrow is one of the chosen participants in this process.
The Codes of Practice are expected to be completed by 02 May 2025.
How does the AI Act address AI systems / general-purpose AI models already on the market?
For AI systems already on the market, the AI Act applies differently depending on the risk classification:
- Prohibited AI systems: as from 02 February 2025, these systems / practices are no longer permitted on the EU market;
- High-risk AI systems: the EU AI Act applies only if the AI system’s design or intended purpose is substantially modified after the EU AI Act’s application date. However, providers of high-risk AI systems may start to comply on a voluntary basis;
- Other AI systems: the EU AI Act encourages voluntary compliance during the transitional period; however, full enforcement and obligations will have later deadlines (which we will communicate to you once established).
Additionally, general-purpose AI models placed on the market before 02 August 2025 will have to take the necessary steps to comply with the EU AI Act by 02 August 2027.
Contact Blue Arrow today to initiate your compliance journey.