In this article, we examine, in light of the Guidelines, the prohibited AI systems set out in Article 5(1)(a) of the EU AI Act.
The prohibition in Article 5(1)(a) refers to AI systems that are placed on the market, put into service or used, and which:
- deploy subliminal, purposefully manipulative or deceptive techniques;
- have the objective or the effect of materially distorting the behaviour of a person or group of persons; and
- are reasonably likely to cause that person or group of persons significant harm.
All of the above conditions must be satisfied for Article 5(1)(a) to apply.
Subliminal, Manipulative or Deceptive Techniques:
Firstly, Article 5(1)(a) covers three different types of techniques, at least one of which must be present for the prohibition to apply:
- subliminal techniques beyond a person’s consciousness;
- purposefully manipulative techniques; and
- deceptive techniques.
Subliminal Techniques: Although the Act does not define “subliminal” techniques, the Guidelines provide the following non-exhaustive examples:
- visual subliminal messages;
- auditory subliminal messages;
- sub-visual and sub-audible cueing;
- embedded images;
- misdirection;
- temporal manipulation.
Purposefully Manipulative Techniques: These are also not defined in the Act but are techniques which are designed to control or influence behaviour in ways that undermine individual autonomy and free choice.
An example given in this regard is “personalised manipulation”, where AI systems create highly persuasive messages based on personal data or exploit individual vulnerabilities. In our view, this would encompass practices used in the iGaming industry, and could be coupled with the prohibition in Article 5(1)(b), which addresses the exploitation of the vulnerabilities of individuals.
Importantly, even if providers do not intend to manipulate users, their AI systems can still fall under this prohibition if they have not taken appropriate preventive measures. An example in this respect is an AI system that “may learn manipulative techniques because the data on which it is trained contain many instances of manipulative techniques, or because reinforcement learning from human feedback can be “gamed” through manipulative techniques.”
Deceptive Techniques: These are similarly not defined in the Act; the Guidelines specify that such techniques involve presenting false or misleading information with the aim of deceiving individuals and influencing their behaviour.
The Guidelines draw a link with the provider’s obligation to ensure that individuals are informed that they are interacting with an AI system, as well as the deployer’s obligations in terms of Article 50(4) of the EU AI Act to label deep fakes and certain AI-generated text on matters of public interest. The labelling of deep fakes and chatbot interactions reduces the risk of deception.
An example given in the Guidelines of a deceptive technique that can be deployed by AI is an AI chatbot that impersonates a person’s friend using a synthetic voice and pretends to be that person, thereby causing harm, for instance through a scam.
A generative AI system that incidentally presents false or misleading information (for example, by hallucinating) will not necessarily be considered deceptive within the meaning of Article 5(1)(a). This is particularly the case where the provider has informed users of the system’s limitations and has integrated the necessary safeguards.
Combination of Techniques:
Article 5(1)(a) also captures combinations of subliminal, purposefully manipulative and deceptive techniques. Used in combination, these techniques may significantly influence the behaviour of individuals and lead to manipulation.
Material Distortion of Behaviour:
A further condition for the prohibition to apply is that the deployed techniques must have “the objective, or the effect, of materially distorting the behaviour of a person or groups of persons.”
It must be stated here that intent is not a requirement, as the Act also covers practices that merely have the “effect” of causing material distortion.
What is material distortion of behaviour?
The following elements must subsist:
- “appreciable impairment”: the reduced ability to make informed and autonomous decisions, beyond lawful persuasion. The effect causes individuals to behave in a way, or to take a decision, they would not have otherwise taken.
- “informed decision”: the individual must have an understanding and knowledge of all the relevant information, including the risks and benefits of their choice.
The Guidelines also indicate that, for the interpretation of the concept of material distortion of behaviour, consumer protection law and its interpretation may be relevant. The Court of Justice of the EU and the relevant Commission guidance on the Unfair Commercial Practices Directive state that there is no need to prove that a consumer’s behaviour has actually been distorted; it is sufficient to establish that a commercial practice is likely to impact a consumer’s decision-making process.
The Concept of Significant Harm:
The types of harm that would fall under this prohibition are as follows:
- physical harm: injury or damage to a person’s life, health or property.
- psychological harm: adverse effects on one’s mental health and psychological and emotional well-being.
- financial and economic harm: financial loss, financial exclusion and economic instability.
There may also be a combination of the above harms, for example an AI system that causes physical harm may also lead to psychological harm.
The threshold for the harm is that it must be “significant.” This often depends on context, and the Guidelines specify that the following factors must be taken into account:
- the severity of the harm;
- context and cumulative effects;
- scale and intensity;
- affected persons’ vulnerability;
- duration and reversibility.
Finally, there must be a causal link between the manipulative or deceptive technique and the reasonable likelihood of the harm occurring. It is not necessary to prove that the harm has actually occurred.
Compliance Guidelines for AI Providers / Deployers:
To avoid falling under this prohibition, AI providers or deployers should:
- ensure transparency about how their AI system operates;
- provide clear disclosures about the system’s capabilities and limitations;
- implement the appropriate user controls and safeguards;
- comply with relevant legislation; and
- follow industry standards and state-of-the-art practices.
Important Note:
If harm results from external factors beyond the provider’s control and reasonable foresight, it may not necessarily fall under this prohibition. However, providers must always demonstrate that they have taken all the appropriate preventive measures.