In this article, we will examine the AI systems prohibited under Article 5(1)(b) of the EU AI Act.
The prohibition in Article 5(1)(b) covers AI systems that are placed on the market, put into service or used, and which:
- exploit vulnerabilities due to age, disability or a specific socio-economic situation;
- have the objective or effect of materially distorting the behaviour of a person or group of persons; and
- cause or are reasonably likely to cause significant harm.
As with the conditions set out in Article 5(1)(a), all of the above conditions must be satisfied cumulatively. There must also be a causal link between the exploitation, the material distortion and the significant harm which has resulted or is reasonably likely to arise.
The conditions pertaining to material distortion and the causing of harm have already been discussed in our article found here. Below, we examine the condition relating to exploitation.
Vulnerabilities:
“Vulnerabilities” are not defined in the EU AI Act. The Guidelines state that this could be understood to include “a broad spectrum of categories, including cognitive, emotional, physical, and other forms of susceptibility that can affect the ability of an individual or a group of persons to make informed decisions or otherwise influence their behaviour.” However, it is essential to note that the susceptibility must result from the person’s membership of one of the vulnerable groups, defined by reference to their age, disability or specific socio-economic situation (the “Exploited Groups”).
Exploitation:
This concept refers to the use of vulnerabilities in a manner that is harmful to the Exploited Groups, each of which is addressed in turn below:
- Age:
This prohibition aims to prevent AI systems from exploiting cognitive and other limitations that children and older people may have. Children, meaning persons below the age of 18, may be susceptible to manipulation because they are limited in their ability to critically assess and understand the real intentions behind AI-driven interactions. An example provided in the Guidelines is a game that uses AI to analyse children’s behaviour and preferences and, on that basis, creates personalised and unpredictable rewards through addictive loops that encourage excessive play and compulsive usage.
Older people may have reduced cognitive abilities (even without suffering from dementia) and may struggle with the complexities of modern AI technologies, making them more vulnerable to scams.
- Disability:
Disability includes a wide range of long-term physical, mental, intellectual, and sensory impairments. An example provided in the Guidelines is a therapeutic chatbot designed to provide mental health support and coping strategies to persons with mental disabilities. Such a chatbot could exploit their limited intellectual capacity to influence them to behave in ways that are harmful to themselves or others.
- Specific socio-economic situation:
“Specific” is not to be interpreted as referring to a unique individual characteristic, but rather to a legal status or membership of a vulnerable social or economic group. This would cover situations (some of which are set out in Recital 29 of the EU AI Act) such as extreme poverty and membership of an ethnic or religious minority. Both long-term and short-term circumstances are covered, including, for example, temporary unemployment.
Persons falling under this category may have fewer resources and lower digital literacy, which makes it harder for them to discern or counteract exploitative AI practices. An example given in the Guidelines is a predictive AI algorithm used to target individuals in low-income postcodes who are in a dire financial situation with advertisements for predatory financial products.
Assessment of Significant Harm:
The evaluation of significant harm under Article 5(1)(b) requires careful consideration of multiple dimensions of potential damage. Significant harm encompasses:
- Physical impacts
- Psychological effects
- Financial consequences
- Economic repercussions
For vulnerable populations, these harmful effects often manifest more severely and in interconnected ways due to their increased susceptibility to exploitation. What might be considered an acceptable level of risk for the general adult population could pose unacceptable dangers to children, elderly individuals, persons with disabilities, or socio-economically disadvantaged groups.
Key Differences from Article 5(1)(a):
Unlike Article 5(1)(a), this provision does not require proof that the system “appreciably impairs” informed decision-making. This difference acknowledges that vulnerable groups inherently have a reduced capacity to make informed decisions.
While group harms are not explicitly mentioned in the provision, both individual and group harms should be considered when assessing potential damage, particularly given the wording of Recital 29 of the EU AI Act, which refers to both.
Scope of Significant Harm Assessment:
As examined in our previous article, the AI system must be reasonably likely to cause significant harm. In this respect, an assessment of significant harm should consider:
- Direct impacts on vulnerable individuals;
- Potential external effects on others not directly targeted by the AI system;
- Broader impacts on vulnerable groups as a whole.
This interpretation aligns with the AI Act’s overall safety objectives and its goal of protecting vulnerable populations from exploitative AI practices.