The EU AI Act includes crucial provisions regarding AI-enabled social scoring systems, with Article 5(1)(c) specifically addressing practices that could lead to social control and surveillance. While AI-enabled scoring may serve beneficial and legal purposes such as improving safety, certain applications of these systems may raise significant concerns about fundamental rights and EU values.
Scope
The EU AI Act’s prohibition targets AI systems that evaluate or classify individuals or groups based on their social behaviour or personal characteristics. This prohibition applies to both public and private sectors, covering any system that generates scores, rankings, or labels for natural persons. It’s important to note that while AI-enabled scoring can be beneficial for legitimate and lawful purposes, the Act specifically targets practices that could lead to discriminatory outcomes or unfair treatment.
As noted above, the rationale behind this prohibition is that certain social scoring practices, apart from leading to discriminatory and unfair outcomes for individuals and groups, can also result in social control and surveillance practices.
Conditions for the social scoring prohibition to apply:
For an AI system to be prohibited in terms of Article 5(1)(c), it must be placed on the market, put into service, or used, and:
- It must be intended or used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on:
- their social behaviour; or
- known, inferred or predicted personal or personality characteristics.
And
- The social score created with the assistance of the AI system must lead or be capable of leading to the detrimental or unfavourable treatment of persons or groups in one or more of the following scenarios:
- in social contexts unrelated to those in which the data was originally generated or collected; and / or
- treatment that is unjustified or disproportionate to their social behaviour or its gravity.
All of the above conditions must be fulfilled, and each is considered in turn hereunder.
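To make the cumulative structure of the test easier to follow, the sketch below models the conditions as simple boolean flags in Python. This is a minimal illustration only: the names are invented, and each flag stands in for a legal assessment that in practice requires case-by-case analysis.

```python
# Hypothetical sketch of the cumulative Article 5(1)(c) test.
# All names are illustrative, not an official compliance checklist.
from dataclasses import dataclass

@dataclass
class ScoringSystem:
    evaluates_natural_persons: bool          # natural persons or groups (legal entities excluded)
    over_period_of_time: bool                # assessment covers a certain period of time
    based_on_social_behaviour: bool          # e.g. community engagement, debt repayment history
    based_on_personal_characteristics: bool  # known, inferred or predicted characteristics
    treatment_in_unrelated_context: bool     # detrimental treatment in unrelated social contexts
    treatment_disproportionate: bool         # unjustified / disproportionate to behaviour or gravity

def may_be_prohibited(s: ScoringSystem) -> bool:
    """All three limbs must hold: evaluation over time, a qualifying
    data basis, and at least one of the two harm scenarios."""
    evaluation = s.evaluates_natural_persons and s.over_period_of_time
    data_basis = s.based_on_social_behaviour or s.based_on_personal_characteristics
    harm = s.treatment_in_unrelated_context or s.treatment_disproportionate
    return evaluation and data_basis and harm
```

Note how the sketch mirrors the text: the first limb is conjunctive (evaluation of natural persons over a period of time), whereas the data basis and the harm scenarios are each disjunctive (either alternative suffices).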
Evaluation or classification of natural persons / groups over a certain period of time:
- Evaluation or classification of natural persons / groups:
This condition refers to the evaluation or classification of natural persons (legal entities are excluded) with the assignment of a score based on their social behaviour or their personal or personality characteristics. The score may take several forms, for example a number, a ranking, or a label.
The scope is broad and covers both the public and private sector.
When we look at the terminology used in the legislation, “evaluation” and “classification” serve distinct but related purposes. The term “evaluation” implies making judgments about individuals or groups, actively assessing their qualities or actions. “Classification,” on the other hand, takes a broader approach by organising people into categories based on objective traits like age or height, without necessarily making judgments.
The concept of “evaluation” becomes particularly significant because it encompasses “profiling” as defined in the GDPR. In the context of data protection, profiling involves gathering information about individuals or groups to analyse their behavioural patterns and characteristics. This information is then used to sort people into categories and predict their future actions. When AI systems engage in this type of profiling, they may fall under the restrictions outlined in Article 5(1)(c) of the EU AI Act.
The SCHUFA I judgment provides valuable insight into how these concepts apply in practice. In this case, the Court of Justice of the European Union (CJEU) reviewed a system that assessed creditworthiness. The court determined that the system engaged in “profiling” under GDPR Article 4(4) because it attempted to predict an individual’s future behaviour, specifically their likelihood to repay loans, by comparing their characteristics to those of similar individuals. The system operated on the premise that people with matching characteristics would likely exhibit similar behaviours. This type of predictive profiling analysis also falls within the scope of Article 5(1)(c) of the EU AI Act’s restrictions.
- Over a certain period of time:
The temporal aspect of assessment under the EU AI Act is significant. Rather than focusing on isolated evaluations or one-time ratings from specific situations, the EU AI Act considers behaviours and characteristics over an extended period of time. This broader scope is crucial to prevent any attempts to circumvent the prohibition through overly narrow assessments.
- Based on social behaviour or known, inferred or predicted personal or personality characteristics:
When examining how these AI systems gather and process information, it is to be noted that they may draw from both direct and indirect sources. The data might come straight from individuals themselves, or it might be collected through various indirect means, including surveillance, third-party sources, or through analysis of other information. This data collection focuses on two main areas: social behaviour and personal characteristics (analysed below).
What is “social behaviour”?
Social behaviour, as defined in this context, encompasses a wide range of human activities and interactions. This includes how people engage with their community, such as participating in cultural events, as well as their conduct in business settings, for example, their history of debt repayment. The EU AI Act recognises that this behavioural data will often come from multiple sources, creating a comprehensive picture of an individual’s social interactions.
Personal or personality characteristics:
In terms of personal and personality characteristics, the EU AI Act makes an important distinction. Personal characteristics refer to concrete, factual information about an individual – their gender, sexual orientation, address, income, profession, and financial status. Personality characteristics, on the other hand, involve more complex assessments that might reflect judgments made by the individuals themselves, others, or AI systems. The EU AI Act sometimes refers to these as personality traits, and the Guidelines emphasise that the terms “personality characteristics” and “personality traits” should be interpreted consistently.
Known, inferred or predicted characteristics:
The EU AI Act also distinguishes between three types of characteristics based on how they’re determined:
- known characteristics: information that is directly input into the AI system and can typically be verified.
- inferred characteristics: characteristics derived by the AI system from the analysis of other information.
- predicted characteristics: estimates based on patterns, which by their nature are less than 100% accurate.
The concept of inferred data is also used in the context of profiling in GDPR, and as stated in the Guidelines, may be a source of inspiration for interpreting the concepts in Article 5(1)(c) of the EU AI Act.
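As a purely illustrative aside, the known / inferred / predicted distinction can be thought of as a provenance tag attached to each data point about a person. The following minimal Python sketch (all names and example values are hypothetical) shows one way such tagging could be represented:

```python
# Illustrative only: a hypothetical provenance tag mirroring the
# Act's known / inferred / predicted distinction.
from dataclasses import dataclass
from enum import Enum, auto

class Provenance(Enum):
    KNOWN = auto()      # directly input and typically verifiable (e.g. a declared address)
    INFERRED = auto()   # derived by the system from other data (e.g. income bracket from spending)
    PREDICTED = auto()  # estimated from patterns, inherently uncertain (e.g. likelihood to default)

@dataclass
class Characteristic:
    name: str
    value: object
    provenance: Provenance

# A hypothetical profile combining all three provenance types.
profile = [
    Characteristic("address", "Valletta", Provenance.KNOWN),
    Characteristic("income_bracket", "medium", Provenance.INFERRED),
    Characteristic("default_risk", 0.12, Provenance.PREDICTED),
]
```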
The social score must lead to detrimental / unfavourable treatment in unrelated social contexts and / or treatment that is unjustified or disproportionate to the social behaviour or its gravity:
Causal link between the social score and the treatment:
The connection between social scores and their consequences is crucial. The EU AI Act doesn’t require the AI system to be the sole cause of unfavourable treatment. It could be part of a broader decision-making process that includes human assessment. However, the AI component must play a significant role in producing the social score.
Unfavourable treatment can range from simple disadvantages (like increased inspections) to more serious detriments causing actual harm. The EU AI Act is particularly concerned with situations where scoring systems use data from unrelated contexts or impose disproportionate consequences.
Detrimental / unfavourable treatment in unrelated social contexts and / or unjustified or disproportionate treatment:
The social score must result (or be capable of resulting) in detrimental or unfavourable treatment either:
- in social contexts unrelated to the contexts in which the data was originally generated or collected; or
- that is unjustified or disproportionate to the social behaviour or its gravity.
The following terms are to be interpreted as follows:
- “unfavourable treatment”: as a result of the scoring, the person or persons must be treated less favourably than others, without a particular harm or damage necessarily being required (for example, people being singled out for additional inspections where fraud is suspected).
- “detrimental”: requires that the person or persons actually suffer harm or detriment as a result of the treatment.
Practical Application:
Consider a social welfare agency using AI to detect fraud. If the AI system uses irrelevant factors, such as a spouse’s nationality or social media behaviour, to assess fraud risk, it would likely fall under the Article 5(1)(c) prohibition in the EU AI Act. However, using relevant, legally collected data to verify benefit allocation would be permissible, as it serves a legitimate purpose.
Another example involves child protection services. An AI system that profiles families based on various social behaviours and leads to disproportionate actions (like removing children from homes for minor infractions such as missed doctor’s appointments) would fall under this prohibition.
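For illustration, the fraud-detection example above can be imagined as a simple feature audit: flag any model input that originates from a social context unrelated to the assessment at hand. The feature names and the “unrelated” list below are invented for the sketch; what actually counts as unrelated is, of course, a legal question rather than a programming one.

```python
# Hypothetical audit sketch: flag model inputs that stem from social
# contexts unrelated to benefit-fraud assessment. Feature names and the
# "unrelated" list are invented for illustration.
RELEVANT_FEATURES = {"declared_income", "benefit_history", "household_size"}
UNRELATED_FEATURES = {"spouse_nationality", "social_media_activity"}

def audit_features(model_features: set[str]) -> set[str]:
    """Return the inputs that would likely bring the system within the
    Article 5(1)(c) prohibition because they come from unrelated contexts."""
    return model_features & UNRELATED_FEATURES

flagged = audit_features({"declared_income", "spouse_nationality", "social_media_activity"})
print(sorted(flagged))  # ['social_media_activity', 'spouse_nationality']
```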
Important Exceptions:
Not all scoring systems are prohibited. For instance, financial credit scoring systems remain permissible (albeit high-risk) when they assess creditworthiness based on relevant financial and economic circumstances, provided they comply with consumer protection laws and incorporate appropriate safeguards for fair treatment.