Comment
May 26, 2022, updated 13 Jun 2022 10:20am

Clearview AI fined for breaching UK data protection laws

The UK Information Commissioner's Office (ICO) recently fined Clearview AI just over £7.5 million ($9.4 million) for breaching UK data protection laws. Clearview AI's facial recognition law enforcement platform includes over 20 billion facial images scraped from public websites, including social media. Public bodies then use this database to identify potential suspects using facial recognition technology.

Operating within the loopholes of data protection laws is common practice for global tech companies, but it is not sustainable. Clearview has been found in clear breach of data protection law in both the UK and the EU, despite objecting to the charges. Global AI regulation and standards are needed more than ever to protect personal data in globally operating AI systems.

Where does the data protection confusion lie?

Clearview AI’s facial recognition database includes images from across the world, including the UK. This makes Clearview’s data protection compliance murky, and it operates within several loopholes. The database contains over 20 billion images scraped from public sources, including Facebook and Instagram. These images are then searchable using facial recognition.

Web scraping is not technically illegal. Following the Cambridge Analytica scandal, in which Facebook users were profiled and targeted with political advertising, Facebook has clamped down on the exploitation of personal data. Web scraping, however, remains one of the few loopholes through which highly personal information can still be exploited.

However, despite the scraped data originally being posted publicly, Clearview has still breached data protection laws. By processing the data with facial recognition, Clearview makes it available in a new form to which users have not explicitly consented.

Clearview AI argues that it is not subject to local data protection laws, and CEO Hoan Ton-That commented that “the UK Information Commissioner has misinterpreted my technology and intentions.” However, processing user data from individuals based in the UK, despite not operating in the UK itself, makes Clearview AI subject to the UK GDPR.

Clearview AI is at odds with the GDPR

Clearview AI has breached the GDPR in several ways, despite no longer operating in the UK. Clearview AI’s services were previously used by the Metropolitan Police, the Ministry of Defence, and the National Crime Agency. While these organizations no longer use Clearview AI, any data that was scraped in the UK can still be accessed in other countries. The ICO indicated that Clearview AI harvested a “substantial amount” of UK data.

The company breached Article 14 of the GDPR (which the UK absorbed into domestic law following Brexit), which requires that when a controller processes personal data obtained from a public source, the data subject must be notified. The company also breached Article 9, which governs the processing of special categories of data, such as biometric data. There are also data retention issues: as UK Information Commissioner John Edwards noted, Clearview AI "failed to have a process in place to prevent the data from being retained indefinitely."

Operating within data protection loopholes across jurisdictions has allowed Clearview AI to capitalize on personal data. However, there is no doubt that its actions are not "misinterpreted" and are in clear breach of the GDPR.

A global AI standard could protect data subjects

Currently, AI is largely unregulated. The EU has drafted an AI regulation framework, but there are no enforceable AI regulations in any jurisdiction. Data protection standards also vary between jurisdictions. Right now, it is unclear whether Clearview AI will pay the fine, and it is difficult to enforce national legislation on a company that no longer operates in that jurisdiction, even though it processes UK data.

UK Information Commissioner John Edwards suggests that people "expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity." Global AI standards could protect data subjects and prevent loopholes in national laws from being exploited by global companies.