According to research by Statewatch, the British Ministry of Justice is working on a system to predict homicides using sensitive data from hundreds of thousands of individuals - including victims and people with mental health conditions.
The British Ministry of Justice (MoJ) is collaborating with several police authorities to develop a prediction system designed to identify people who might commit murder in the future. The initiative, originally called "Homicide Prediction Project," was launched under Prime Minister Rishi Sunak and later renamed "Sharing Data to Improve Risk Assessment."
Research platform Statewatch obtained internal project documents through freedom of information requests, including a data protection impact assessment, risk analyses, and a data-sharing agreement with Greater Manchester Police (GMP). These documents reveal that data from up to half a million individuals has been processed - including information from suspects, victims, witnesses, and missing persons.
The datasets come from various sources, including the Police National Computer, the Ministry of Justice itself, and police forces such as GMP. The collected information includes names, dates of birth, ethnicity, and police identification numbers. Particularly notable is the use of "health markers" - indicators of mental health problems, addiction, suicide attempts, self-harm, disability, and other forms of vulnerability. According to the MoJ, these characteristics are said to have high predictive power for the risk of future violent crime.
Algorithmic forecasts based on sensitive data
Officially, the project remains in a research phase. Its stated goal is to use new data sources to improve existing risk assessments in corrections and probation. Data analysis is conducted within the Ministry of Justice's "Data Science Hub." However, internal documents already mention future operational use of the system.
Statewatch has criticized the project. The NGO warns of a "chilling and dystopian" system that classifies people as potential murderers based on algorithms - often before they've committed any crime. Particularly problematic is the inclusion of data about mental illness, addiction, or disability, as well as the use of information about people who have contacted police, such as victims of domestic violence.
Existing MoJ prediction tools such as the "Offender Assessment System" (OASys), which is used in sentencing and probation decisions, have already faced criticism. The ministry's own studies show that these systems assess white offenders far more reliably than Black people or those of mixed ethnic background. According to Statewatch, earlier ministry analyses that tried to derive homicide risk profiles from prior offenses likewise point to systematic bias.
NGO calls for immediate development halt
Sofia Lyall, a researcher at Statewatch, sharply criticized the project: "Building an automated tool to profile people as violent criminals is deeply wrong." The state's access to highly sensitive health data, she added, is "highly intrusive and alarming."
The Ministry of Justice disputes these allegations. A spokesperson stated that only data from convicted individuals would be used and that the project serves research purposes only. They aim to investigate whether additional data from police sources could improve risk assessment. A report is in preparation.
Brexit, EU AI Act, and high-risk systems
Had the British murder prediction system been developed in an EU member state, it would likely violate central provisions of the EU AI Act passed in 2024. This law categorizes AI systems according to risk potential and imposes strict requirements for high-risk applications - such as those in law enforcement. Systems that process sensitive data, exploit social vulnerabilities, or can deliver discriminatory results are considered particularly problematic.
The system developed in Britain uses precisely such sensitive data - including information on mental health, addiction, or disability - and the Ministry of Justice's own research into comparable tools has shown significant bias against ethnic minorities. Under the EU AI Act, these would be clear indicators of a high-risk and most likely prohibited application. In addition, there is a lack of evidence of appropriate risk assessments, transparency measures, or effective human oversight - all of which would be mandatory under EU law.