Prohibited AI Practices

Individual Risk Assessment & Predictive Policing

April 1, 2025

The EU AI Act introduces a clear prohibition on certain uses of artificial intelligence that are considered to pose an unacceptable risk to fundamental rights. Among these is the use of AI systems that predict the likelihood of a person committing a criminal offence based solely on profiling or a personality assessment. This prohibition is set out in Article 5(1)(d) of the Act and is grounded in the principle that individuals should be judged based on actual conduct, not on data-driven assumptions about their future behaviour.

The core concern behind this prohibition is the potential for AI systems to make unjustified and potentially discriminatory predictions. These systems often rely on large datasets, including police records and socio-economic data, combined with criminological theories, to forecast the likelihood that individuals will commit criminal acts. While such tools may offer law enforcement agencies increased efficiency and proactive capabilities, they also carry significant risks. In particular, relying exclusively on profiling or personality traits can overlook important individual circumstances and entrench historical biases in policing data.

Conditions to be Satisfied:

Importantly, the prohibition under Article 5(1)(d) applies only when three cumulative conditions are met. First, the AI system must be placed on the market, put into service, or used for the specific purpose of predicting criminal behaviour. Second, it must assess or predict the risk of an individual committing a crime. Third, and most crucially, that prediction must be based solely on profiling or the assessment of personality traits and characteristics.

However, the EU AI Act does not outlaw all forms of crime prediction technology. Where AI is used to assist a human assessment that is already based on objective and verifiable facts directly tied to a specific criminal activity, it may fall outside the scope of the prohibition. In such cases, these systems are not banned but instead classified as “high-risk” under the AI Act. This classification triggers a range of compliance obligations designed to ensure transparency, accountability, and respect for fundamental rights.

Out of Scope:

The following are not within the scope of the prohibition:

  • location-based, geospatial, or other place-based crime predictions
  • AI systems that support human assessments based on objective and verifiable facts linked to a criminal activity
  • AI systems used for crime predictions and assessments in relation to legal entities (example: a tax authority using a system to analyse large amounts of data on transactions to assess the risk of a company committing tax fraud)
  • AI systems used for individual predictions of administrative offences.

Conclusion:

The Act does not prohibit all forms of crime prediction technology, reflecting the EU’s nuanced approach. While it restricts speculative and potentially harmful AI-based predictions, it still allows room for the responsible use of AI in support of law enforcement, provided human judgement and factual grounding remain central to any decision-making process.
