An independent report has concluded that facial recognition technology being used by London’s Metropolitan Police is incorrect 81% of the time.
According to the report, carried out by academics from the University of Essex’s Human Rights Centre and first reported by The Guardian and Sky News, four out of five suspects flagged by the technology were not wanted by the police.
This data was gathered during six live trials carried out by the Metropolitan Police in Soho, Romford and Stratford, which the researchers were granted access to.
The researchers have called for the force to stop using facial recognition technology, which they say would likely be ruled unlawful by a court due to a breach of privacy.
However, the Metropolitan Police disputes the report’s findings, claiming that its technology makes a mistake in just one out of every 1,000 cases.
Duncan Ball, the Met’s deputy assistant commissioner, said: “We are extremely disappointed with the negative and unbalanced tone of this report…We have a legal basis for this pilot period and have taken legal advice throughout.”
Met Police using “hugely inflated and deceptive” figures
The discrepancy between the report’s findings and the Met Police’s own figures comes down to how each calculates the technology’s error rate.
“The Met’s 0.1% error rate figure is calculated by dividing the number of incorrect matches by the total number of people whose faces were scanned. The University of Essex study’s 81% error rate divides the number of incorrect matches by the total number of reported matches,” Paul Bischoff, privacy advocate for Comparitech.com, explained.
If 100 people were to walk past a facial recognition setup and 10 were flagged by the technology, but only three were actual police suspects, the University of Essex method would count this as a 30% success rate, or a 70% error rate. The Metropolitan Police, by contrast, would say that as 100 people were scanned and only seven were incorrectly matched, the technology is 93% accurate.
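The two calculations can be sketched in a few lines of Python, using the hypothetical numbers from the example above (these figures are illustrative, not drawn from the trials themselves):

```python
# Hypothetical example: 100 faces scanned, 10 flagged as matches,
# of which only 3 were genuine police suspects.
scanned = 100
flagged = 10
true_matches = 3
false_matches = flagged - true_matches  # 7 incorrect matches

# University of Essex method: incorrect matches as a share of all reported matches
essex_error_rate = false_matches / flagged   # 7 / 10 = 0.70

# Met Police method: incorrect matches as a share of everyone scanned
met_error_rate = false_matches / scanned     # 7 / 100 = 0.07

print(f"Essex-style error rate: {essex_error_rate:.0%}")  # 70%
print(f"Met-style error rate: {met_error_rate:.0%}")      # 7%
```

The same seven mistaken identifications produce wildly different headline figures depending on which denominator is chosen, which is the heart of the dispute between the researchers and the force.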
“The University’s report is much more in line with how most people would judge accuracy,” Bischoff told Verdict. “The Met’s figure is hugely inflated and deceptive.”
A lot of training and tweaking needed before facial recognition technology can be trusted
US cities San Francisco, California, and Somerville, Massachusetts, have banned the use of facial recognition technology in policing over its inaccuracy and the potential for bias. A number of other cities are considering implementing a ban.
Javvad Malik, security awareness advocate at KnowBe4, is unsurprised to hear of yet more high rates of inaccuracy.
“Facial recognition, especially outside of controlled environments is still very much a developing area of research and therefore it’s not surprising to hear of potentially low accuracy rates,” Malik explained.
Despite the Met Police’s insistence that the public “would absolutely expect us to try innovative methods of crime fighting in order to make London safer”, the report highlights the level of progress that still needs to be made before the technology can be trusted to be accurate.
Until then, Malik says, facial recognition cannot be relied on entirely to spot criminals.
“Such technologies will need a lot of training and tweaking before they even get close to an acceptable level where they can automatically be trusted. Until such a time, such technologies should always be used to augment and never to replace the human analyst.
“It is, therefore, important that organisations such as the police force don’t rely entirely on facial recognition to apprehend criminals.”