A lack of clear government action has created a UK “policy void” when it comes to using automated facial recognition technology in CCTV analytics, according to a leading cyberlaw academic.

Andrew Charlesworth, professor of law, innovation and society at the University of Bristol, called for an informed debate on the use of artificial intelligence (AI) in video surveillance.

UK police are increasingly using automated facial recognition on CCTV footage to identify persons of interest.

The technology uses algorithms to detect landmark facial features, then compares the resulting images against a police database using AI.
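
To illustrate the general pattern, the sketch below uses the open-source face_recognition library rather than the software UK forces actually deploy, which is not public: a system extracts a numerical encoding of each face in a frame and compares it against encodings of known persons of interest. Filenames and the tolerance value here are assumptions for the example.

```python
# Illustrative sketch of the detect-encode-compare pattern described above,
# using the open-source face_recognition library. This is not the system
# used by UK police; filenames and the tolerance value are assumptions.
import face_recognition

# Encode the face of a hypothetical person of interest from a watch list
watchlist_image = face_recognition.load_image_file("person_of_interest.jpg")
watchlist_encoding = face_recognition.face_encodings(watchlist_image)[0]

# Encode every face detected in a single frame of CCTV footage
frame = face_recognition.load_image_file("cctv_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

for encoding in frame_encodings:
    # Lower tolerance means stricter matching; looser settings
    # produce more false positives
    is_match = face_recognition.compare_faces(
        [watchlist_encoding], encoding, tolerance=0.6
    )[0]
    distance = face_recognition.face_distance([watchlist_encoding], encoding)[0]
    print(f"match: {is_match}, distance: {distance:.2f}")
```

The tolerance threshold captures the trade-off behind the figures that follow: set it too loosely and innocent faces are flagged, too tightly and genuine persons of interest are missed.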

South Wales Police successfully identified 173 persons of interest at the 2017 Champions League final. However, they also reported 2,297 false positives, meaning more than nine in ten of the system's matches were wrong.

Privacy organisations such as Big Brother Watch cite these figures as reasons not to use automated facial recognition technology.

A recent report by the group found that 98% of the matches made by automated facial recognition technology used by UK police were false positives.

However, Charlesworth, in a white paper titled CCTV, Data Analytics and Privacy: The Baby and the Bathwater, said that public debate over the issue had become “distorted”.

He warned that the two sides of the argument had become polarised, fuelled by the government’s lack of stringent regulations.

He wrote that the “UK Government’s apparent reluctance to provide a detailed regulatory strategy […] has created a regulatory policy void.”

The UK Government’s long-awaited biometrics strategy, released in June, was criticised for not being comprehensive enough.

Treating personal images the same as personal data

In addition to tighter regulations, Charlesworth called for technological solutions to prevent the misuse of automated facial recognition.

“The issue with the use of facial recognition technology is the systems which underlie it, such as police databases,” he said.

Charlesworth proposed better data management strategies that comply with existing privacy laws, such as the General Data Protection Regulation (GDPR).

“We need to design the actual technology so that it controls the flow of data and how it is stored and deleted,” he said.

“This should be reliable, transparent and in full compliance with data protection legislation because images are just another form of personal data.

“In my opinion, this is crucial if analytic technologies are to be accepted by the public as legitimate security tools which will help to keep them safe without breaching their human rights.”
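
Purely as an illustration of what such a design might look like, and not a description of Charlesworth's proposal or any existing product, a face-image store could attach a recorded lawful basis and a retention period to every captured image and delete it automatically once that period expires. The field names and rules below are assumptions for the sketch.

```python
# Illustrative sketch only: a face-image store that enforces lawful-basis
# and retention rules of the kind described above. Field names and
# retention periods are assumptions, not any real system's design.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class CapturedImage:
    image_id: str
    captured_at: datetime
    lawful_basis: str        # recorded reason for processing (e.g. "warrant")
    retention: timedelta     # how long the image may be kept


class ImageStore:
    def __init__(self) -> None:
        self._images: dict[str, CapturedImage] = {}

    def add(self, image: CapturedImage) -> None:
        # Refuse to store anything without a documented lawful basis
        if not image.lawful_basis:
            raise ValueError("no lawful basis recorded, cannot store image")
        self._images[image.image_id] = image

    def purge_expired(self, now: datetime) -> list[str]:
        # Delete images whose retention period has passed and return
        # their ids so the deletion can be logged transparently
        expired = [i.image_id for i in self._images.values()
                   if now >= i.captured_at + i.retention]
        for image_id in expired:
            del self._images[image_id]
        return expired
```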

Automated facial recognition currently “a proverbial sledgehammer to crack a nut”

The report was commissioned by Cloudview, a cloud-based video surveillance systems company.

Its CEO and founder James Wick echoed Charlesworth’s conclusions, saying that the government is “too timid” to support the positive side of biometric identification when used responsibly.

“Right now we are seeing case after case of biometric technology being used as the proverbial sledgehammer to crack a nut,” he said.

“The public is rightly reluctant to hand over their digital data, but the solution is not to ban the technology but to ensure that it’s used properly.

“This means limiting use to where it’s genuinely needed, and then having effective processes such as privacy impact assessments, which are designed into the technology and properly tested so that our democratic freedoms and human rights aren’t abused.”