February 7, 2020 (updated 25 May 2021, 11:36am)

Facial recognition, data harvesting and the end of privacy

By GlobalData Thematic Research

A New York Times investigation has catapulted Clearview AI, a US-based facial recognition (FR) company, into the center of the privacy debate. Clearview AI identifies individuals from non-consensually collected images, which threatens to eradicate individual privacy in its entirety.

The company has built a database of over one billion images taken from social media, without user consent and contrary to the websites’ terms of use. The tool can identify any individual on the street from just one image, thus obliterating privacy in public spaces. The revelations should propel swift regulatory action.

Clearview AI provides FR services to law enforcement. Previously, law enforcement FR drew from official sources such as mug shots and driver’s licenses, which take time to analyze. Clearview AI claims to analyze its entire database in under a second.

The New York Times story has had a significant impact. New Jersey has barred its police force from using the service, and Twitter has sent Clearview a cease-and-desist letter. The company is facing a class-action lawsuit in Illinois, and Democratic Senator Ed Markey has sent a letter to Clearview questioning the company’s accuracy and intent. The New York Police Department (NYPD) has contested Clearview’s claim that the NYPD used the technology to identify a suspected terrorist.

Clearview AI responded to public outrage by stressing that the app will not be made available to the public and exists only to help law enforcement. This does not address the reasons for concern. Without public debate or authorization, the company has crossed the Rubicon and made it possible to identify private civilians at any time. Other companies, including Google, have resisted developing similar platforms for fear of the negative implications. But now that the technology exists, it can always be replicated.

The implicit argument that individuals have nothing to fear if they have nothing to hide is fundamentally flawed. Many individuals have good reason to fear discrimination and prejudice. In its current form, FR will exacerbate those fears, owing to widespread gender and racial biases in the algorithms. An MIT Media Lab study of FR found much higher error rates for individuals with darker skin, reaching up to 35% for women with darker skin.

Earning citizen trust is one of the biggest barriers to the use of computer vision for public safety. Any company affiliated with a political movement will exacerbate mistrust. Clearview AI’s CEO Hoan Ton-That has been photographed sitting with Mike Cernovich – a frequent presenter on The Alex Jones Show on InfoWars – and controversial journalist Chuck Johnson. Clearview AI co-founder Richard Schwartz is a former aide to Rudy Giuliani. Peter Thiel, PayPal co-founder and libertarian, was an early backer.

Just one rogue officer with the app could radically undermine the privacy of hundreds of private individuals. It is instructive to compare the condemnation of the Chinese state’s use of facial recognition for minor offences such as jaywalking with arguments in the US that the technology is safe, provided it is controlled by law enforcement.

Regulation is a necessity to safeguard privacy. Both enterprise and law enforcement must be prevented from unduly compromising individual privacy. Robust public debate can help develop ethical guidelines for the use of FR. Government, tech companies, and citizens must collaborate to establish ethical boundaries within which technological innovation must remain, and fines for violations should be firmly enforced. This may be the last opportunity to prevent the creation of dangerous, albeit profitable, tech tools without consideration of the implications.
