The use of data analytics and machine learning in policing has plenty of potential benefits, but it also presents a significant risk of unfair discrimination, security think tank Royal United Services Institute for Defence and Security Studies (RUSI) has warned.

A new report, Data Analytics and Algorithmic Bias in Policing, outlines the various ways that analytics and algorithms are used by police forces across the United Kingdom. 

This includes the use of facial recognition technology, mobile data extraction, social media analysis, predictive crime mapping, and individual risk assessment. The report focuses on the last two, whose predictive nature poses particular risks.

The study notes that if bias finds its way into these technologies, it could lead to discrimination on the basis of protected characteristics such as race, sexuality or age. Such bias stems from human bias embedded in the data used to train these systems.

One police officer interviewed, for example, noted that young black males are more likely to be stopped and searched than young white males.

“Algorithms that are trained on police data ‘may replicate (and in some cases amplify) the existing biases inherent in the dataset’,” the authors explained. “The effects of a biased sample could be amplified by algorithmic predictions via a feedback loop, whereby future policing is predicted, not future crime.”
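As a purely illustrative sketch of that feedback loop (the figures and allocation rule below are invented for this example, not taken from the report or any real force's system), consider two areas with identical underlying crime but a small initial gap in recorded incidents:

```python
# Illustrative only: two areas with identical true crime rates, but a
# small initial gap in recorded incidents (area 0 slightly over-policed).
true_rate = [1.0, 1.0]
recorded = [55.0, 45.0]
CONCENTRATE = 2.0  # predictive allocation favours the highest-scoring area

for step in range(8):
    # Allocate patrols in proportion to (squared) past records.
    weights = [r ** CONCENTRATE for r in recorded]
    shares = [w / sum(weights) for w in weights]
    # Detections scale with patrol presence, so the data reflect where
    # officers were sent, not where crime actually happened.
    recorded = [r + t * s * 100 for r, t, s in zip(recorded, true_rate, shares)]
    print(f"step {step}: area 0 receives {shares[0]:.0%} of patrols")
```

Because new records accumulate wherever patrols are sent, the allocation drifts further towards the initially over-policed area at each step, even though offending in the two areas never differed.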

The study notes that those from disadvantaged backgrounds are more likely to come into contact with public services, meaning police forces hold more data on people from certain demographics. In turn, because of the data available, predictive technologies are more likely to calculate that these individuals pose a greater risk.

The report, based on interviews with police forces, civil society organisations, academics and legal experts, was commissioned as part of the Centre for Data Ethics and Innovation’s investigation into algorithmic bias in policing.

Use of AI in policing faces scrutiny

There have been numerous reported cases of biases finding their way into artificial intelligence (AI) algorithms.

Last year, Amazon scrapped its predictive AI recruiting tool after it was found to be biased against female candidates.

Likewise, concerns have been raised over the accuracy of facial recognition technology. An independent report recently found that the technology used by the Metropolitan Police incorrectly identifies suspects 81% of the time.

Is there a place for AI in policing?

Despite these concerns, the report does state that in some cases the use of algorithms in policing has resulted in a reduction in the number of criminal offences committed.

“In relation to predictive mapping, empirical evidence has demonstrated that the deployment of predictive mapping software could increase the likelihood of detecting future crime events when compared to non-technological methods, [and] result in net reductions in overall crime rates,” the authors said.

Research has shown that random police patrols have little success in detecting and stopping crime. However, using technology to identify hot-spot areas has been shown to reduce crime both in the deployment location and in surrounding areas.
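As a rough, hypothetical sketch of the hot-spot idea (the coordinates, cell size and cut-off below are invented, not any force's actual mapping software), historical incidents can be bucketed into grid cells and the busiest cells flagged for patrol:

```python
from collections import Counter

# (x, y) coordinates of past incidents, invented for illustration.
incidents = [(3, 7), (3, 7), (3, 8), (12, 2), (3, 7), (12, 2), (8, 8)]

CELL = 5  # grid cell size, in the same units as the coordinates

def cell_of(x, y):
    """Map a point to its grid cell."""
    return (x // CELL, y // CELL)

# Count incidents per cell and flag the two highest-count cells.
counts = Counter(cell_of(x, y) for x, y in incidents)
hot_spots = [cell for cell, n in counts.most_common(2)]
print("patrol these cells:", hot_spots)
```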

According to Luka Crnkovic-Friis, CEO of AI company Peltarion, the sensitive nature of this application of AI means that caution must be exercised, but the issue of bias should not discredit the technology and its potential benefits:

“The existence of bias does not discredit the benefits of using AI solutions, but in sensitive applications such as policing we have to be extra careful.”

According to RUSI, there is also a risk of automation bias developing, whereby officers become over-reliant on automation tools and discount other relevant information.

Crnkovic-Friis believes that addressing flaws in training data, and improving understanding of how the technology functions, rather than dismissing it entirely, is key to overcoming the issues presented by the study.

“The key thing is to be aware of the limitations and to have checks and balances in place to assess the veracity of the outputs from AI, while making sure that the input data is monitored for undesirable biases,” Crnkovic-Friis explained. “While AI can provide predictions, the outputs are only going to be based on the data it has been trained on. Without understanding of how the technology works and without ethical guidelines of what constitutes unacceptable bias, the risk of building a flawed system is high.”
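One simple form such a check could take (the metric and tolerance here are assumptions for illustration, not anything prescribed in the report) is to compare a risk model's flag rate across demographic groups and raise a warning when a group diverges from the overall rate:

```python
def flag_rate(flags):
    """Share of individuals flagged as high risk (1 = flagged)."""
    return sum(flags) / len(flags)

def disparity_check(flags_by_group, tolerance=0.2):
    """Warn if any group's flag rate diverges from the overall rate."""
    overall = flag_rate([f for flags in flags_by_group.values() for f in flags])
    for group, flags in flags_by_group.items():
        rate = flag_rate(flags)
        if abs(rate - overall) > tolerance:
            print(f"WARNING: {group!r} flagged at {rate:.0%} vs {overall:.0%} overall")

# Hypothetical model outputs: 1 = flagged as high risk, 0 = not flagged.
disparity_check({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 0, 1, 0, 0],
})
```

A check like this does not say why the rates diverge, but it surfaces disparities in the outputs so that humans can investigate the training data behind them.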

Many of the applications being used by police forces are still primitive, and are currently only being trialled and tested. However, as the technology becomes more widely available, more data is collected and inaccuracies are weeded out, the outputs will only improve.

“The nature of AI means that the more data it has and the more it is used, the better the outputs. It is good, however, to be cautious and to account for the fact that bias can exist. Awareness is certainly a great first step to correction and perfection over time,” Crnkovic-Friis said.

Rather than making a case against predictive policing, the findings can also be read as a case for more AI, or more specifically for further development of the algorithms and data that make it possible. Lowering the bar for AI adoption and putting the technology in the hands of more people is a means to that end, Crnkovic-Friis believes, and would help to create better outputs that “reflect the interests of everyone”.

“Ultimately, in order to truly address bias issues, AI needs to become accessible to and understood by more people. We need to lower the bar for AI adoption and to put the technology in the hands of more individuals and organisations, so it can reflect the interests of everyone.”


Read more: Outside China, London is the most surveilled city in the world