
EU AI Guidelines on Prohibited AI

Published February 11, 2025; updated April 1, 2025

On 4 February 2025, the European Commission published its Guidelines on prohibited artificial intelligence practices established by the EU AI Act. The Guidelines aim to provide clarity on the prohibitions set out in Article 5 of the Act (Prohibited AI Practices) and are non-binding. The Guidelines specify that, ultimately, it is the Court of Justice of the European Union (the “CJEU”) that will give any authoritative interpretation of the EU AI Act.

We have prepared a series on these Guidelines. This series will provide a comprehensive review of the Guidelines, together with our observations. Below we provide the salient provisions of general application.


Prohibited AI Practices

As examined in our previous article, Article 5 of the EU AI Act prohibits the following AI practices:

  • Subliminal, manipulative or deceptive techniques (Article 5(1)(a));
  • Exploitation of vulnerabilities due to age, disability or socio-economic situation (Article 5(1)(b));
  • Social scoring (Article 5(1)(c));
  • Individual risk assessment and prediction of criminal offences based solely on profiling (Article 5(1)(d));
  • Untargeted scraping of facial images to create facial recognition databases (Article 5(1)(e));
  • Emotion recognition in the workplace and in education (Article 5(1)(f));
  • Biometric categorisation to infer sensitive characteristics (Article 5(1)(g));
  • Real-time remote biometric identification (“RBI”) for law enforcement purposes in publicly accessible spaces (Article 5(1)(h)).

These prohibitions will be analysed in more detail during the course of our series.

Material Scope of the Prohibitions

The EU AI Act establishes rules for three main activities related to prohibited AI practices: (i) their placement on the market; (ii) putting them into service; or (iii) their use. A notable distinction exists for real-time remote biometric identification systems, where prohibitions only apply to their use.

“Placing on the market” refers to making an AI system available on the EU market, whether paid or free of charge. This covers various distribution methods, including:

  • Cloud services
  • API access
  • Direct downloads
  • Physical copies
  • Systems embedded in physical products

“Putting into service” has two key aspects:

  1. Supplying an AI system for first use to a deployer in the EU;
  2. Developing and using an AI system in-house.

The Act specifies that the “intended purpose” of an AI system is determined by the provider’s specifications. This must be detailed in documentation including user instructions, promotional materials, and technical documentation.

An example given is that of a non-EU company offering a remote biometric identification (RBI) system to European customers through an online API. This would constitute “placing on the market”, regardless of whether it is a paid or free service.

“Use” is interpreted broadly to encompass any deployment or integration of an AI system after it enters service.

Key points about AI system usage under the EU AI Act:

  • Covers the entire post-market lifecycle;
  • Includes integration into broader services, processes, or infrastructure;
  • Encompasses both intended use and misuse;
  • Applies regardless of provider terms or contractual limitations.

The Act creates a dual responsibility framework:

  1. Providers must anticipate and account for reasonably foreseeable conditions of use; and
  2. Deployers bear responsibility for ensuring lawful usage.

A notable example is the EU AI Act’s treatment of emotion recognition AI in workplaces. Such systems are prohibited (except for medical or safety purposes) regardless of whether the provider explicitly forbids such use in their terms. This highlights how the Act’s prohibitions apply directly to deployers, independent of their contractual arrangements with providers.

This approach ensures comprehensive coverage of AI systems throughout their lifecycle, while establishing clear lines of responsibility between providers and deployers.

Prohibited AI vs High-Risk AI

The prohibited AI practices above are not to be considered in isolation; reference should also be made to the high-risk AI systems set out in Annex III. The Guidelines clarify that certain high-risk AI systems may be classified as “prohibited” if the relevant conditions under Article 5 are met. If, on the other hand, an AI system falls under an exception to a prohibition in Article 5, it shall qualify as high-risk. An example here would be credit-scoring or the assessment of risk in health and life insurance: these are deemed high-risk (Annex III, 5(b) and (c)) when they do not satisfy all the conditions in Article 5(1)(c) (social scoring).

Further, AI systems that fall under Annex III (high-risk AI systems) but satisfy the derogation set out in Article 6(3) are not thereby excluded from the scope of the Act, and may still fall within the ambit of the prohibited AI practices.

Horizontal Application

The EU AI Act is horizontal in application (i.e. it applies across all sectors, from the protection of fundamental rights to consumer protection and employment). The Act both complements and provides additional safeguards to existing legislation. Additionally, if a particular practice is not caught by the EU AI Act, this does not mean it is exempt from the provisions of any other primary or secondary Union law.

An example in this respect would be a system which is not subject to the provisions of the EU AI Act but is still unlawful because it lacks a legal basis for the processing of personal data under the GDPR.

Here the Guidelines go on to give a practical example: “an AI-enabled emotion recognition system used in the workplace that are exempted from the prohibition in Article 5(1)(f) of the AI Act, because they are used for medical or safety reasons, remain subject to data protection law and Union and national law on employment and working conditions, including health and safety at work, which may foresee other restrictions and safeguards in relation to the use of such systems.”

The EU AI Act and Data Protection

Given the example above, the relationship between the EU AI Act and the GDPR is of paramount interest. The Guidelines list data protection as one of the two legal bases that support the EU AI Act. This is because AI systems often process information defined as “personal data.” The most relevant pieces of legislation in this respect are the General Data Protection Regulation, Regulation (EU) 2016/679 (“GDPR”), the Law Enforcement Directive, Directive (EU) 2016/680 (“LED”), and Regulation (EU) 2018/1725 on data protection rules for EU institutions, bodies, offices and agencies (“EUDPR”).

There have been several opportunities for the CJEU to clarify the rules set out in the legislation above, and relevant guidelines have also been published by the European Data Protection Board. These include, notably, guidelines on “profiling”, which will be discussed in part 2.

The EU AI Act and other EU Legislation

Other legislation of note which remains applicable includes:

  • EU consumer protection and safety legislation, for example:
    • social scoring practices by traders may be considered “unfair” and a breach of consumer law (Directive 2005/29/EC concerning unfair business-to-consumer commercial practices in the internal market);
    • AI systems used to infer emotions may also have to comply with the Medical Devices Regulation (Regulation (EU) 2017/745) if they are used for medical diagnosis or treatment.
  • The Digital Services Act (Regulation (EU) 2022/2065);
  • Other applicable EU laws or national laws on general liability.

The EU AI Act must also be interpreted in light of the EU Treaties, the Charter, and any international conventions to which the EU is a party.

In part 2 we will examine the provisions related to subliminal, manipulative or deceptive AI systems, as well as AI systems that are exploitative of vulnerabilities (Articles 5(1)(a) and (b)).
