Under Article 5(1)(f) of the EU AI Act, the use of AI systems to infer emotions is prohibited in workplaces and educational institutions, unless it is specifically intended for medical or safety purposes. This reflects the EU’s strong concerns about the intrusiveness and questionable scientific validity of emotion recognition technologies, also known as “affect technologies”, which aim to detect and interpret emotional states through data sources such as facial expressions, voice, or body language.
The use of such systems in settings where individuals are already in positions of vulnerability, such as employees or students, raises particular ethical and legal risks. These include concerns about privacy, human dignity, and freedom of thought, especially given the often questionable accuracy, limited reliability, and cultural variability of emotion recognition methods.
While their use is prohibited in specific contexts, emotion recognition systems outside this narrow scope are classified as “high-risk” and are subject to transparency and oversight requirements under the EU AI Act. Notably, these systems may still have a place in areas like healthcare or safety, where they can support medical treatment, mental health monitoring, or emergency responses, provided their deployment meets strict regulatory standards.
Conditions for the Prohibition to Apply:
To fall within the scope of the prohibition, four conditions must be met: (a) the system must be placed on the market, put into service for this specific purpose, or used; (b) it must be designed to infer emotions; (c) it must operate in the workplace or an educational institution; and (d) it must not be intended for medical or safety applications.
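Because these four conditions are cumulative, the test can be pictured as a simple conjunction. The Python sketch below is purely illustrative: the class, field names, and context labels are invented for this example, they do not come from the Act or the Guidelines, and the sketch is no substitute for a proper legal assessment.

```python
from dataclasses import dataclass

@dataclass
class EmotionAISystem:
    placed_on_market_or_used: bool   # condition (a), including putting into service
    infers_emotions: bool            # condition (b)
    context: str                     # e.g. "workplace", "education", "retail"
    medical_or_safety_purpose: bool  # condition (d) exception

def prohibition_applies(system: EmotionAISystem) -> bool:
    """All four conditions must hold for the Article 5(1)(f) ban to apply."""
    in_covered_setting = system.context in {"workplace", "education"}  # condition (c)
    return (
        system.placed_on_market_or_used
        and system.infers_emotions
        and in_covered_setting
        and not system.medical_or_safety_purpose
    )

# Example: biometric emotion inference used in recruitment (a workplace context)
hr_screening = EmotionAISystem(True, True, "workplace", False)
assert prohibition_applies(hr_screening)
```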
The Act distinguishes between emotion recognition systems and AI systems used to infer emotions, though in practice both are covered. Inference includes both the identification of emotions from biometric data, such as facial expressions, voice tone, or body language, and the deduction of emotional states through analytical processes, including machine learning.
The definition of “emotions” in this context is broad and includes a wide range of feelings such as happiness, anger, sadness, embarrassment, and satisfaction, among others. The prohibition also extends to the inference of “intentions” and cannot be bypassed by simply relabelling emotions as attitudes or moods.
Importantly, the Guidelines clarify what is not included in this prohibition. It excludes the detection of physical states like pain or fatigue, as well as the mere observation of obvious expressions or gestures, such as noting that a person is smiling or that their voice is raised, unless these observations are used specifically to identify or infer emotional states.
This ensures the EU AI Act targets genuinely intrusive uses of emotion AI, particularly in environments where individuals may have limited ability to object or opt out.
Further, the prohibition on emotion inference in workplaces and educational institutions applies only to systems that rely on biometric data. Under the definition in Article 3(39) of the EU AI Act, emotion recognition systems identify or infer emotions or intentions on the basis of such data, which can include both physical and behavioural attributes.
Physiological biometrics refer to relatively static physical features like fingerprints, facial structure, iris patterns, or even DNA and scent. Behavioural biometrics, on the other hand, capture patterns in how individuals move or act, such as gait, voice, typing rhythms, eye movements, or heart rate. These can involve both voluntary and involuntary motions and are often used to infer emotional or cognitive states through repeated patterns.
The prohibition does not cover systems that do not rely on biometric data. For example, an AI system analysing written text to determine tone or sentiment is outside the scope of the prohibition, as it does not involve biometric processing. The EU AI Act takes a broad view of biometric data, covering any data used to detect emotions, categorise individuals, or support other forms of analysis tied to identity or behaviour.
Limitation of Prohibition to Workplace and Educational Institutions:
The EU AI Act’s prohibition on emotion recognition is deliberately limited to the areas of the workplace and educational institutions, where individuals may face power imbalances that undermine their ability to meaningfully consent. The term “workplace” is interpreted broadly and covers not only traditional employment settings but also temporary, remote, and mobile workspaces, including the recruitment stage. Similarly, “educational institutions” include both public and private entities across all levels of education, regardless of whether learning occurs in person or online. The prohibition also applies during admissions or entry into educational programmes.
Importantly, the Act allows a narrow exception for the use of emotion recognition in these contexts, but only when strictly necessary for medical or safety reasons. These exceptions must be interpreted restrictively. Therapeutic uses, for example, must relate to certified medical devices, and systems intended to monitor general wellbeing, such as stress or burnout, do not qualify. Nor do systems aimed at protecting property or preventing fraud, as the “safety” exception is strictly concerned with protecting life and health.
Where these exceptions apply, their use must be limited in scope, duration, and scale, with adequate safeguards in place. Data collected for medical or safety reasons may not be repurposed for performance monitoring or HR decision-making. Even when permitted, emotion recognition systems in these settings are likely to be classified as “high-risk” under the EU AI Act and must meet additional transparency and compliance obligations.
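The purpose-limitation point can be illustrated with a hypothetical access gate: data collected under a declared medical or safety purpose is released only for that same purpose and never for HR analytics. Every name below is an assumption made for illustration, not a mechanism prescribed by the Act.

```python
ALLOWED_PURPOSES = {"medical", "safety"}

class PurposeBoundStore:
    """Hypothetical store binding each record to the purpose it was collected for."""

    def __init__(self) -> None:
        self._records: list[tuple[dict, str]] = []

    def collect(self, data: dict, declared_purpose: str) -> None:
        # Collection is only permitted under the narrow medical/safety exception.
        if declared_purpose not in ALLOWED_PURPOSES:
            raise ValueError(f"collection not permitted for purpose: {declared_purpose!r}")
        self._records.append((data, declared_purpose))

    def read(self, requested_purpose: str) -> list[dict]:
        # Repurposing (e.g. HR performance monitoring) is refused outright.
        if requested_purpose not in ALLOWED_PURPOSES:
            raise PermissionError(f"access denied for purpose: {requested_purpose!r}")
        return [data for data, purpose in self._records if purpose == requested_purpose]
```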
By carving out this narrow exception, the AI Act seeks to accommodate the potential benefits of emotion AI in specific, justifiable contexts while firmly restricting its use in scenarios that could undermine personal autonomy and dignity.
Protection of Workers:
It’s also worth noting that Member States remain free to introduce or maintain national laws that offer stronger protections for workers. Article 2(11) of the EU AI Act expressly allows Member States to adopt more favourable rules safeguarding individuals, particularly employees, against potentially intrusive uses of AI in the workplace. This flexibility ensures that the EU AI Act can coexist with, and even support, higher national standards where they exist.
Out of Scope:
The EU AI Act makes it clear that not all emotion recognition systems are prohibited. The ban under Article 5(1)(f) applies only to systems that infer emotions based on biometric data in workplace and educational settings. Systems that do not use biometric data, or that assess physical rather than emotional states, such as pain or fatigue, are excluded from the prohibition.
Emotion recognition systems used outside the workplace or education, such as in commercial, public safety, or healthcare contexts, are not banned under this article. For instance, AI systems used in retail to personalise customer experiences based on voice or typing patterns, or in marketing through intelligent billboards, are not captured by Article 5(1)(f). However, such systems may still fall under other restrictions in the EU AI Act, such as those prohibiting manipulation or exploitation under Articles 5(1)(a) and 5(1)(b), or may be subject to broader EU rules, including data protection and consumer laws.
Crowd control systems, which assess general emotional trends in large groups (for example, noise levels at an event), are also outside the scope unless they are specifically used to assess the emotions of individuals in prohibited contexts. Similarly, systems used in healthcare settings, by medical professionals or emergency services, are not prohibited, even if employees are incidentally monitored. However, safeguards must still be in place to ensure that such incidental data is not misused or allowed to impact workers unfairly.
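One way a deployer might keep a crowd-level system on the permissible side of this line is to design it so that only aggregate statistics ever leave the pipeline. The sketch below assumes per-reading scores exist upstream; it is a hypothetical design illustration, not a compliance guarantee.

```python
from statistics import mean

def crowd_mood_summary(arousal_scores: list[float]) -> dict:
    """Hypothetical aggregate-only summary: exposes crowd-level statistics
    and deliberately discards any link to individual attendees."""
    if not arousal_scores:
        return {"count": 0}
    return {
        "count": len(arousal_scores),
        "mean_arousal": round(mean(arousal_scores), 2),
    }
```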
In conclusion, while the EU AI Act imposes strict limits on emotion recognition in contexts where individuals are especially vulnerable, it takes a risk-based approach elsewhere, focusing on how and where the technology is used rather than prohibiting it outright.