
Real-Time Remote Biometric Identification Systems for Law Enforcement

April 1, 2025

The EU AI Act sets out the framework for the regulation of biometric technologies, including one of the most high-risk and controversial applications: real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. Under Article 5(1)(h) of the EU AI Act, the use of such systems is prohibited by default, subject only to narrow and exhaustively defined exceptions.

Real-time RBI systems enable the automatic identification of individuals as they move through public areas by comparing live biometric data, typically facial images, against databases. These systems can operate continuously, capturing data without consent, raising deep concerns about mass surveillance, privacy, and fundamental freedoms.

Recognising this, the EU AI Act limits their use to three specific scenarios, detailed in Article 5(1)(h)(i)–(iii). In the absence of enabling national legislation, law enforcement authorities and their partners are categorically prohibited from deploying these systems in public spaces.

When does the prohibition apply?

For the prohibition under Article 5(1)(h) of the EU AI Act to apply, five cumulative conditions must be met: the system must (a) be an RBI system; (b) be used (not just placed on the market or developed); (c) operate in real-time; (d) be deployed in publicly accessible spaces; and (e) be used for law enforcement purposes. Each of these elements is examined in turn below.
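To illustrate the cumulative nature of these conditions, the sketch below models the applicability test as a simple checklist. It is purely illustrative and not derived from the Act or any official tooling; the `Deployment` fields and the `prohibition_applies` helper are hypothetical names, and a real assessment requires case-by-case legal analysis rather than a boolean check.

```python
from dataclasses import dataclass

# Illustrative sketch only: models the five cumulative conditions of
# Article 5(1)(h) as a checklist. Field and function names are hypothetical.

@dataclass
class Deployment:
    is_rbi_system: bool                 # (a) remote biometric identification system
    is_in_use: bool                     # (b) actually used, not merely developed or marketed
    operates_in_real_time: bool         # (c) capture and comparison without significant delay
    in_publicly_accessible_space: bool  # (d) physical space open to an undetermined number of people
    for_law_enforcement_purpose: bool   # (e) purpose linked to prevention, investigation,
                                        #     detection or prosecution of criminal offences

def prohibition_applies(d: Deployment) -> bool:
    """Return True only if all five cumulative conditions are met."""
    return all([
        d.is_rbi_system,
        d.is_in_use,
        d.operates_in_real_time,
        d.in_publicly_accessible_space,
        d.for_law_enforcement_purpose,
    ])

# Example: facial recognition scanning commuters live in a metro station
# at the request of the police meets all five conditions.
metro_station = Deployment(True, True, True, True, True)

# Example: fingerprint access control at an office building is verification,
# not remote identification, so the first condition already fails.
office_access = Deployment(False, True, True, False, False)

print(prohibition_applies(metro_station))  # True
print(prohibition_applies(office_access))  # False
```

Even where all five conditions are met, the narrowly defined exceptions discussed further below may still permit use, but only where enabling national legislation and the safeguards of Articles 5(2) to 5(7) are in place.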

The RBI system:

At the heart of this prohibition is the concept of remote biometric identification. According to the EU AI Act, RBI refers to the use of AI systems that can identify individuals without their active involvement, typically at a distance, by comparing their biometric data to entries in a reference database. This includes facial recognition systems used in surveillance cameras, as well as technologies that analyse voice, gait, keystrokes, or other behavioural signals.

Crucially, this form of identification is distinct from biometric verification or authentication, which involves matching an individual’s biometric data with their own stored record, such as unlocking a phone or accessing a secure building. Verification requires active participation and is not covered by the prohibition in Article 5(1)(h).

Remoteness:

The “remoteness” of RBI systems is what raises the most concern under the EU AI Act. These systems often operate in the background, without people’s knowledge or consent, and can process the biometric data of large groups simultaneously. Even notifying individuals of the surveillance is not enough, as remoteness implies that individuals are not consciously entering into a process or interacting with the system in a way that enables informed, individual engagement.

For example, surveillance cameras mounted on ceilings in a metro station that scan and identify commuters using facial recognition software would constitute a real-time RBI system. Similarly, a system that captures and analyses voice patterns in public to identify persons of interest would also fall within the scope.

In contrast, systems used for controlled access to secure areas, like a facial recognition scanner or biometric access via fingerprints at an office building, would not be considered “remote” under the EU AI Act, and thus fall outside the scope of the prohibition.

Reference database:

Identification requires a reference database containing biometric data against which comparisons can be made. Without such a database, biometric identification is not possible. An example in this respect is the Schengen Information System, which can be used as a reference database for facial recognition searches relating to missing persons.

Real-time:

The concept of real-time is central to the prohibition in Article 5(1)(h) of the EU AI Act. A system is considered to operate in real-time when it captures, processes, and compares biometric data instantaneously or with minimal delay. The aim is to prevent circumvention of the prohibition through systems that introduce only a short delay but still effectively function as real-time tools.

While the EU AI Act does not define what constitutes a “significant delay,” the threshold is functional: if the person being identified has likely left the area by the time the data is processed, the use may fall outside the real-time scope and instead be considered retrospective. The distinction is generally considered a temporal one, as the same devices can often support both real-time and post-event identification.

Real-time systems are typically used to enable immediate action or monitoring, such as tracking individuals moving through a crowd. For example, screening all visitors at a concert venue as they enter constitutes real-time use. By contrast, analysing video footage after the event to identify someone involved in an incident would be considered retrospective biometric identification, which is not prohibited but remains subject to high-risk AI requirements and safeguards.

Even one-off, on-the-spot identifications may fall under the prohibition. For instance, if law enforcement covertly captures an image and runs an immediate database search, this could be considered real-time RBI depending on the timing and intent.

In publicly accessible spaces:

The EU AI Act defines publicly accessible spaces broadly, as any physical location, whether publicly or privately owned, that is accessible to an undetermined number of people, regardless of capacity limits, security checks, or access conditions such as tickets, registration, or age requirements.

This means that even if access is controlled (for instance, through ticket purchase or ID checks), the space may still be considered publicly accessible if it is open to a wide and undefined group of people. Common examples include streets, parks, train stations, shops, cafés, cinemas, sports venues, and event spaces like trade fairs or concert halls.

Ownership is irrelevant. A privately owned venue, such as a shopping mall, can still fall within the definition if it admits the public. However, certain areas remain outside the scope. For instance, secure workplaces with restricted access, prisons, and areas dedicated to border control (such as customs zones at airports) are not considered publicly accessible under the Act.

It’s also worth noting that online spaces, such as websites, social media platforms, or virtual meeting rooms, are explicitly excluded. The prohibition only applies to physical spaces.

The EU AI Act requires a case-by-case assessment to determine whether a space qualifies as publicly accessible. For example, a gated residential area with restricted entry would not meet the criteria, but a park within that area that is open to the public during certain hours might. Similarly, a government office may include both publicly accessible areas (like a reception lobby) and restricted ones (like internal staff offices).

For law enforcement purposes:

The prohibition applies specifically to the use of real-time RBI systems for law enforcement purposes, regardless of who is operating the system. What matters is the purpose, not the nature of the entity deploying it.

The EU AI Act adopts a broad definition of law enforcement, aligned with the Law Enforcement Directive (LED). It includes any activity carried out for the prevention, investigation, detection, or prosecution of criminal offences, or the execution of criminal penalties. It also covers preventive measures, such as safeguarding public security and averting threats before an offence occurs, for instance through crowd surveillance at demonstrations, sporting events, or public gatherings.

Crucially, it is not only traditional police or criminal justice authorities that fall within this scope. The EU AI Act makes clear that law enforcement activities may be carried out by:

  • Public authorities, such as police or prosecutors;
  • Other bodies or entities, including private actors, entrusted by national law with public powers for law enforcement purposes;
  • Entities acting on behalf of law enforcement, under their instruction and supervision, even if only for specific tasks.

For example, a public transport company asked to implement surveillance measures at a train station under police direction may be considered to be acting on behalf of law enforcement. This use would fall within the prohibition if the RBI system operates in real-time and in a publicly accessible space.

However, if a private entity, such as a bank, carries out monitoring independently for its own compliance or anti-fraud purposes, that activity would not be considered law enforcement under the AI Act and therefore would not trigger the prohibition under Article 5(1)(h).

This distinction is essential. It ensures that only those uses of biometric identification that are functionally linked to public enforcement powers are subject to the strictest regulatory limits, preventing backdoor surveillance practices under the guise of private or commercial activity.

Exceptions to the prohibition:

While the general rule under Article 5(1)(h) of the EU AI Act is a prohibition on the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes, the Act allows for three strictly limited exceptions. These are listed in Article 5(1)(h)(i)–(iii) and relate to specific law enforcement objectives, namely: (i) locating victims of abduction, human trafficking, or sexual exploitation, and searching for missing persons; (ii) preventing imminent threats to life or safety, or a genuine and present or genuine and foreseeable threat of a terrorist attack; or (iii) locating suspects of the serious criminal offences listed in Annex II of the EU AI Act.

However, these exceptions are not self-executing. Article 5(1)(h) does not, on its own, provide a legal basis for deploying RBI systems. Rather, such use must be explicitly authorised under national legislation that complies with the detailed safeguards and conditions set out in Articles 5(2) through 5(7) of the EU AI Act. These include strict requirements on necessity, proportionality, prior authorisation, time and geographic limitations, and oversight.

In this respect, if a Member State has not enacted domestic legislation that authorises the use of real-time RBI systems for one or more of these objectives in line with the AI Act, any such use has been prohibited since 2 February 2025. This ensures that there can be no deployment of real-time biometric surveillance without clear legal backing and robust safeguards rooted in national law.

Safeguards for exceptions:

Even where such use is permitted, strict safeguards apply under Articles 5(2) to (7). Article 5(2) provides that, in the exceptional cases outlined above, real-time RBI may be deployed only for the purpose of confirming the identity of the specifically targeted individual. Other safeguards include requirements around necessity and proportionality, authorisation procedures, transparency, and impact assessments. Moreover, any authorised deployment must still comply with the broader obligations applicable to high-risk AI systems, as set out in Article 6(2) and Annex III of the Act.

Conclusion:

The EU AI Act establishes one of the most robust legal frameworks globally for regulating biometric surveillance technologies, particularly real-time biometric identification systems used in public spaces. By default, the use of these systems for law enforcement purposes in publicly accessible areas is strictly prohibited, reflecting the significant risks they pose to fundamental rights, such as privacy, freedom of assembly, and non-discrimination.

However, the EU AI Act recognises that in exceptional circumstances, their use may be justified, provided it is backed by national legislation and accompanied by stringent safeguards. These exceptions are narrowly defined and subject to conditions designed to prevent abuse and ensure transparency, proportionality, and oversight.

The approach taken by the EU AI Act is clear: real-time biometric surveillance must not become a tool of generalised monitoring. Instead, its use is permissible only when it serves a pressing public interest, is strictly necessary, and operates within a clear and accountable legal framework.
