The EU AI Act follows a risk-based approach, categorising AI systems according to their potential to harm health, safety, and fundamental rights. Practices deemed to pose an unacceptable risk are prohibited outright under Article 5 of the EU AI Act.
The EU AI Act sets out eight prohibited AI practices. For the purposes of this article, we group them into two broad sections for ease of reference: prohibited systems more likely to be used by businesses, and those more likely to be used by governments and law enforcement.
A. Prohibited AI Systems used by Businesses
Broadly, the EU AI Act prohibits the placing on the market, the putting into service or the use of AI systems in the EU that materially distort people’s behaviour in a manner that causes or is reasonably likely to cause significant harm.
In particular, these are:
1. Subliminal, manipulative or deceptive
These are AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques. The effect is to materially distort a person’s behaviour and impair their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, which may lead to significant harm.
Such systems may deploy subliminal components, such as audio, image or video stimuli, that are beyond human perception.
Systems in this category may also use other techniques, for instance through machine-brain interfaces or virtual reality, to subvert or impair a person’s autonomy, decision-making or free choice. These techniques are insidious because people may not be consciously aware of them; even where they are aware, they may still be deceived, or unable to control or resist them.
2. Exploitative
These are AI systems that exploit the vulnerabilities of specific persons or groups due to their age, disability, or a specific social or economic situation. The objective is to materially distort the behaviour of that person or group, which may lead to significant harm.
3. Untargeted scraping of facial images
AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
4. Emotion recognition in the workplace or educational institutions
This refers to AI systems that infer the emotions of individuals in these environments on the basis of their biometric data.
The EU AI Act also defines an “emotion recognition system” more broadly than mere inference: it covers systems intended to identify or infer emotions or intentions on the basis of biometric data. Emotion recognition systems put in place for medical or safety reasons are excepted from this prohibition.
5. Certain AI systems for biometric categorisation
These systems categorise natural persons based on their biometric data (e.g. a face or a fingerprint) in order to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
An exception applies in the area of law enforcement for the labelling or filtering of lawfully acquired biometric datasets, such as sorting images according to hair or eye colour.
B. Prohibited AI Systems used by Governments & Law Enforcement
Further to the above, the EU AI Act also prohibits:
6. Certain AI systems for social scoring
This prohibition covers AI systems that evaluate or classify natural persons over a period of time based on their social behaviour or on known, inferred or predicted personal or personality characteristics. Such systems may violate the right to dignity and non-discrimination, and the values of equality and justice.
The prohibition applies when the social score leads to detrimental or unfavourable treatment of persons:
(i) In a social context that is unrelated to the context in which the data was generated or collected;
(ii) That is unjustified or disproportionate to their social behaviour or its gravity.
7. Certain AI systems for predictive policing
These are AI systems used to assess or predict the risk of a person committing a criminal offence, based solely on the profiling of that person or on an assessment of their personality traits. Such a practice erodes the presumption of innocence.
An exception applies where AI systems are used to support a human assessment of a person’s involvement in criminal activity, provided the assessment is based on objective and verifiable facts directly linked to a specific criminal activity.
8. Certain AI systems for “real-time” remote biometric identification in publicly accessible spaces for law enforcement
These systems are problematic because they intrude into the private lives of individuals, creating a feeling of constant surveillance that, in turn, dissuades the exercise of certain fundamental rights. They may also produce biased results that discriminate against individuals on the basis of age, ethnicity, race, sex or disability.
There are exceptions to this prohibition as follows:
(i) The targeted search for specific victims of abduction, human trafficking or sexual exploitation, and the search for missing persons. This is without prejudice to Article 9 of Regulation (EU) 2016/679 (GDPR) for the processing of personal data for purposes other than law enforcement.
(ii) The prevention of a specific, substantial and imminent threat to the life or physical safety of persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack.
(iii) The localisation or identification of a person suspected of having committed a criminal offence, and only for the purpose of conducting a criminal investigation or prosecution, or executing a criminal penalty. This applies only to the offences listed in Annex II to the EU AI Act that are punishable by a custodial sentence or a detention order for a maximum period of at least four years.
Non-Compliance
Non-compliance with the prohibited AI practices is subject to administrative fines of up to €35 million or, in the case of a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
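As a purely illustrative sketch of how the “whichever is higher” ceiling operates (the function name and turnover figure below are hypothetical; only the €35 million and 7% thresholds come from the Act):

```python
def article5_fine_cap_eur(turnover_eur: float) -> float:
    """Illustrative only: the upper bound of an Article 5 fine is the
    higher of EUR 35 million or 7% of total worldwide annual turnover
    for the preceding financial year."""
    return max(35_000_000.0, 0.07 * turnover_eur)

# Hypothetical example: a company with EUR 1 billion in turnover.
# 7% of 1 billion is EUR 70 million, which exceeds the EUR 35 million
# floor, so the applicable cap is EUR 70 million.
print(article5_fine_cap_eur(1_000_000_000))  # 70000000.0
```

In other words, the €35 million figure acts as a floor on the cap: for smaller companies the cap stays at €35 million, while for larger companies the 7% turnover figure takes over.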
How can we help you?
At Blue Arrow, we can help your business categorise the AI systems that you currently use. Get in touch today and we can assist you with your compliance strategy.