
Biometric Categorisation for “Sensitive” Characteristics

April 1, 2025

The EU AI Act continues its strong stance on protecting fundamental rights by explicitly prohibiting certain biometric practices deemed unacceptable under any circumstances. Among these is the use of AI systems for biometric categorisation based on sensitive characteristics. Article 5(1)(g) of the EU AI Act prohibits AI systems that use biometric data to categorise individuals according to protected attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

The prohibition targets systems that infer or deduce these characteristics from biometric data, such as facial features, gait, or voice, often without the knowledge or consent of the individual concerned. Such systems are considered inherently invasive and discriminatory, posing significant threats to privacy, human dignity, and the principle of non-discrimination.

It’s important to note that the prohibition does not apply to the categorisation or labelling of biometric data within datasets that have been lawfully acquired under Union or national law. For example, datasets used by law enforcement agencies for legitimate investigative purposes are not captured by this rule, provided their collection and use are compliant with applicable legal frameworks.

When does the prohibition apply?

Under Article 5(1)(g) of the EU AI Act, the prohibition on biometric categorisation based on sensitive characteristics applies only where all of the following conditions are met: (a) the AI system is placed on the market, put into service for this specific purpose, or used; (b) the system is a biometric categorisation system; (c) individual persons are categorised; (d) on the basis of their biometric data; and (e) in order to deduce or infer sensitive characteristics, such as race, political opinions, religious or philosophical beliefs, trade union membership, sex life, or sexual orientation.

All five conditions must be fulfilled simultaneously for the prohibition to take effect. This ensures the rule is precise in scope, yet robust enough to prevent deeply intrusive and discriminatory practices.
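To make the cumulative logic concrete, here is a minimal compliance-screening sketch in Python. It is illustrative only: the SystemAssessment structure, the flag names, and the check function are all hypothetical, and whether a real system actually meets each condition is a legal question that no boolean checklist can settle.

```python
from dataclasses import dataclass

# Sensitive characteristics enumerated in Article 5(1)(g).
SENSITIVE_CHARACTERISTICS = {
    "race", "political_opinions", "trade_union_membership",
    "religious_or_philosophical_beliefs", "sex_life", "sexual_orientation",
}

@dataclass
class SystemAssessment:
    # Hypothetical flags mirroring conditions (a) through (e).
    placed_on_market_or_in_service_or_used: bool   # condition (a)
    is_biometric_categorisation_system: bool       # condition (b)
    categorises_individual_persons: bool           # condition (c)
    based_on_biometric_data: bool                  # condition (d)
    inferred_characteristics: set[str]             # input to condition (e)

def falls_under_article_5_1_g(a: SystemAssessment) -> bool:
    """True only when all five conditions hold simultaneously."""
    return (
        a.placed_on_market_or_in_service_or_used
        and a.is_biometric_categorisation_system
        and a.categorises_individual_persons
        and a.based_on_biometric_data
        and bool(a.inferred_characteristics & SENSITIVE_CHARACTERISTICS)
    )

# Example: a deployed system inferring political leaning from face images.
example = SystemAssessment(
    placed_on_market_or_in_service_or_used=True,
    is_biometric_categorisation_system=True,
    categorises_individual_persons=True,
    based_on_biometric_data=True,
    inferred_characteristics={"political_opinions"},
)
print(falls_under_article_5_1_g(example))  # True: all five conditions hold
```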

Biometric Categorisation

Biometric categorisation systems, as defined in the EU AI Act, are AI systems that assign individuals to specific categories based on their biometric data. Unlike biometric identification or verification, where the goal is to establish or confirm a person’s identity, categorisation is about assigning people to a group with certain pre-defined characteristics. For example, an advertising display that adapts content based on the viewer’s perceived age or gender is engaging in biometric categorisation, even if the person is never formally identified.

Article 3(40) of the EU AI Act defines these systems and excludes from the definition those that are both ancillary to another commercial service and strictly necessary for objective technical reasons; only systems outside this narrow exclusion fall within the scope of the prohibition. The exclusion is interpreted narrowly: a feature is considered “ancillary” only if it is intrinsically linked to the main service and cannot function independently of it. Simply integrating a biometric feature into a broader service does not automatically remove it from the EU AI Act’s scope, especially if the feature processes sensitive characteristics without a genuine technical necessity.

The types of biometric data used in categorisation can include physical features, like facial structure or skin tone, as well as behavioural traits, such as gait or keystroke patterns. Categorisation based on such data becomes particularly problematic when it involves sensitive characteristics protected under EU non-discrimination law, such as race or political beliefs.

To illustrate, a virtual fitting tool that uses facial data to help users preview clothing on an e-commerce platform may be allowed, as the biometric categorisation is merely a technical function supporting the main retail service. By contrast, an AI system that analyses uploaded profile images to infer users’ political leanings for microtargeted political advertising would not qualify for this exemption. Even if the feature supports a broader service, it is not strictly necessary in a technical sense, and would therefore be prohibited.

For the AI Act’s prohibition on biometric categorisation to apply, individuals must be categorised personally and not merely as part of a group. This means that the AI system must assign specific biometric attributes to identifiable natural persons, rather than drawing generalised conclusions about a crowd or population segment.

Crucially, the categorisation must be based on biometric data such as facial features, body shape, skin tone, or other physical or behavioural characteristics. This includes systems that perform so-called “attribute estimation”, inferring traits like age, gender, or ethnicity from biometric analysis of individual features such as facial structure or hair colour.

If the system does not assign biometric-based categories to individuals, for instance, if it only analyses group-level trends without linking them to specific persons, then the prohibition under Article 5(1)(g) does not apply. However, where a system does perform individual-level categorisation using biometric data, and especially where it seeks to deduce sensitive characteristics, it falls squarely within the scope of the prohibition.
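The individual/group distinction can be illustrated with a short, hypothetical sketch: the same underlying detections can feed either an aggregate, group-level output or a per-person labelling, and only the latter is the kind of individual-level categorisation the prohibition addresses. All names and data shapes below are invented for illustration.

```python
from collections import Counter

# Hypothetical per-frame detections from a camera feed.
detections = [
    {"person_id": "p1", "estimated_age_band": "18-25"},
    {"person_id": "p2", "estimated_age_band": "26-40"},
    {"person_id": "p3", "estimated_age_band": "18-25"},
]

# Group-level trend: aggregate counts, with no category linked to any
# specific person. On its own, this pattern sits outside Article 5(1)(g).
age_histogram = Counter(d["estimated_age_band"] for d in detections)

# Individual-level categorisation: a label attached to an identifiable
# person. If the inferred label were a sensitive characteristic (e.g.
# political opinions rather than an age band), this is the pattern the
# prohibition targets.
per_person_labels = {d["person_id"]: d["estimated_age_band"] for d in detections}

print(age_histogram)      # Counter({'18-25': 2, '26-40': 1})
print(per_person_labels)  # {'p1': '18-25', 'p2': '26-40', 'p3': '18-25'}
```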

Deducing Sensitive Characteristics: The Core of the Prohibition

The prohibition under Article 5(1)(g) of the AI Act applies specifically to AI systems that use biometric data to deduce or infer sensitive characteristics, namely, race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. These categories align closely with those protected under EU fundamental rights and anti-discrimination law.

The key issue is not simply that individuals are being categorised, but that the categorisation targets deeply personal and protected aspects of identity. The EU AI Act recognises that inferring such characteristics based on biometric traits can lead to discriminatory profiling, stigmatisation, or other harms, and thus draws a firm legal line around these practices.

What Falls Outside the Scope of the Prohibition

The AI Act’s prohibition on biometric categorisation does not apply to AI systems used for labelling or filtering lawfully acquired biometric datasets, particularly where this is done to ensure fairness, accuracy, or compliance with other legal requirements. This is especially relevant in contexts such as law enforcement or high-risk AI development, where datasets may need to be categorised to ensure that different demographic groups are adequately and fairly represented.

For example, if biometric data is labelled by sensitive attributes like race or gender in order to detect and correct bias in training data, such labelling is not considered a prohibited practice under Article 5(1)(g). In fact, the EU AI Act may require such steps as part of the obligations for high-risk AI systems to prevent discriminatory outcomes.

The key distinction is this: labelling or filtering a lawfully acquired biometric dataset, for example to detect bias or to balance group representation, is permitted, while deploying an AI system that infers sensitive characteristics of individuals in real-world applications is strictly prohibited.
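A hedged sketch of that boundary, with entirely hypothetical names and data: the first function shows the permitted pattern (labelling a lawfully acquired dataset to audit demographic representation), while the second stands in for the prohibited one (run-time inference of a sensitive characteristic about an individual).

```python
from collections import Counter

# Hypothetical, lawfully acquired training records with annotator-supplied
# labels; paths and label values are invented for this illustration.
training_records = [
    {"image_path": "img_001.png", "annotated_group": "group_a"},
    {"image_path": "img_002.png", "annotated_group": "group_b"},
    {"image_path": "img_003.png", "annotated_group": "group_a"},
]

def audit_representation(records, attribute):
    """Permitted pattern: label/filter a lawfully acquired dataset to check
    whether demographic groups are adequately represented, e.g. as part of
    bias-detection obligations for high-risk systems."""
    return Counter(r[attribute] for r in records)

def infer_sensitive_trait(face_image_bytes):
    """Prohibited pattern: deploying a model that deduces a sensitive
    characteristic about an identifiable individual at run time."""
    raise NotImplementedError("This is the practice Article 5(1)(g) prohibits.")

print(audit_representation(training_records, "annotated_group"))
# Counter({'group_a': 2, 'group_b': 1})
```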

Exemption for Law Enforcement Datasets

The Guidelines also clarify that the prohibition in Article 5(1)(g) of the EU AI Act does not apply to the labelling or filtering of lawfully acquired biometric datasets used in the context of law enforcement. This exemption ensures that authorities can continue to manage and organise biometric data for legitimate purposes, such as ensuring fair representation in training data or supporting investigative functions, provided that such activities are carried out within the bounds of existing data protection and fundamental rights frameworks.

