Facial recognition (FR) technology is under scrutiny by the EU, which is considering a temporary ban on the use of FR in public spaces until safeguards are in place to mitigate the risks. Brussels’ initiative comes amid revelations that the technology has been used to monitor crowds in public areas, including London’s King’s Cross and, more recently, outside Cardiff City’s football stadium.

If the measure is approved, companies selling FR technologies will not be able to sell their products for use in public areas, although stores and businesses will still be allowed to buy FR products for private use. A ban would not come into effect overnight, given the broad agreement required among member states. Still, in trying to impose limits on the indiscriminate use of FR, Brussels is leading the way with an ethics-based approach to governing AI and, as with GDPR, is once more setting the standard for regulation worldwide.

Inaccuracy is an issue with facial recognition

While surveillance technology can help monitor critical threats, it can also be highly intrusive when combined with artificial intelligence (AI) and data analytics. The main limitations of the technology include its inaccuracy, potential privacy violations, and the lack of technical standards.

FR training data is often incomplete or unrepresentative of the general population, so FR systems trained on that data contain inherent biases. A study from MIT Media Lab showed that the accuracy of FR technology differs across genders and skin tones: the darker the skin, the more errors arise, with error rates of up to 35% for images of darker-skinned women. There is a significant risk that FR used in law enforcement and in border, airport, and retail security will be unreliable. Where misidentification can lead to arrest or incarceration, the level of accuracy must be as high as possible.
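Audits like the MIT Media Lab study work by comparing a model's predictions against ground-truth labels separately for each demographic group. The sketch below illustrates that per-group error-rate calculation; the groups, labels, and sample data are hypothetical and are not the study's actual figures.

```python
# Illustrative sketch of a per-group accuracy audit for a face-recognition
# or gender-classification model. All data here is made up for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its misclassification rate."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical classifier output: (group, predicted label, true label).
sample = [
    ("lighter-skinned men", "m", "m"), ("lighter-skinned men", "m", "m"),
    ("lighter-skinned men", "f", "m"), ("lighter-skinned men", "m", "m"),
    ("darker-skinned women", "m", "f"), ("darker-skinned women", "f", "f"),
    ("darker-skinned women", "m", "f"), ("darker-skinned women", "f", "f"),
]
print(error_rates_by_group(sample))
# In this toy sample the error rate is 0.25 for one group and 0.5 for
# the other, showing the kind of disparity such audits are built to expose.
```

A disparity between the per-group rates, rather than the overall average, is what reveals the bias: a model can report high aggregate accuracy while failing disproportionately on an underrepresented group.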

Lack of consent is a stumbling block

The use of live FR technology involves the processing of personal data, specifically biometrics, to identify an individual. As such, data protection law, such as the General Data Protection Regulation (GDPR), applies whenever FR is used. Under GDPR, biometric data can generally only be processed if an individual gives their explicit consent, barring narrow exceptions. However, it is unlikely that individuals, including those not on a watchlist, will ever be asked to provide consent where FR is used for law enforcement purposes.

Facial recognition needs common rules to ensure its applications are secure and the privacy of consumers is protected. In 2019 a debate on FR standards was held at the UN’s International Telecommunication Union (ITU). Chinese companies lobbied for their standards to be adopted for FR, video monitoring, and city and vehicle surveillance.

Overall, FR represents such a step-change from older technologies like closed-circuit television (CCTV) that the regulatory void around it is particularly alarming. Significantly, Brussels’ draft proposal on the FR ban points to the right under GDPR for EU citizens “not to be subject of a decision based solely on automated processing, including profiling.” A GDPR-based approach to FR, led by the EU, will serve as the basis of the first-ever attempt to govern AI usage.