The word ‘biometrics’ is likely to send a shiver down many a reader’s spine, conjuring up Black Mirror-esque visions of China’s biometric ID surveillance state, or perhaps more mundanely, the detailed health metrics tracked by Apple Watches, Fitbits and Whoops.

Biometric technologies, however, are becoming increasingly ubiquitous across a wide range of industries. Back in August, the Guardian reported that the UK Home Office is covertly backing the rollout of Facewatch’s facial recognition cameras in retail outlets, allowing companies to scan and identify members of the public (largely without their consent or knowledge).

Within the financial services industry, use cases are also growing. Earlier this month, American Express announced a pilot adding face and fingerprint biometrics to its SafeKey authentication tool for online transactions, while ATMs in Singapore equipped with iProov biometrics and liveness detection have won awards for effectively leveraging emerging technologies.

And, last month, speech recognition company Aculab in collaboration with Frank Reply and Brose presented VoiceSentry, a biometrics-based solution for automotive vehicle security. The uniqueness of a person’s voice, they say, “reduces the risk of theft and unauthorised access.”

This growing ubiquity of biometrics, the technology's use cases and the potential for bad actors, along with the future impact of generative AI on identification, were discussed recently on GlobalData's Instant Insights podcast.

FTC to scrutinise unfair biometrics practices

The wave of recent biometric privacy-related class action litigation should, however, be a cause for concern for companies. In the summer, Meta-owned Instagram agreed to pay a $68.5m settlement after a class action lawsuit (Parris v. Meta Platforms, Inc.) alleged that Instagram collected and stored users’ biometric data in violation of Illinois’ Biometric Information Privacy Act (BIPA).


Another class action lawsuit filed in July in Illinois names X Corp., parent of Twitter, for violations of BIPA. It claims that X collected, stored and used biometrics “without providing the requisite written notice, obtaining the requisite informed written consent, or providing the requisite data retention and destruction policies”.

And, last month, BNSF Railway settled its own biometrics privacy case, after having been ordered in its first trial to pay a staggering $228m in damages.

Back in May 2023, the US Federal Trade Commission (FTC) issued a warning to companies that use biometric identifiers in their interactions with customers – in a sign of enforcement actions to come. The now “pervasive” use of biometric technologies, it said, “raise[s] significant concerns with respect to consumer privacy, data security, and the potential for bias and discrimination.”

In the policy statement, the FTC highlighted the risk of biometric databases being used for the creation of “deepfakes” or being the target of malicious attacks. Deepfakes, in particular, have proven highly effective in hacking face biometric algorithms. According to a study by Idiap Research Institute, 95% of facial recognition systems are unable to detect deepfakes.

Biometric technologies may also produce discriminatory outcomes if they perform differently across demographic groups. The FTC pointed to research published by the National Institute of Standards and Technology (NIST), which found that many facial recognition algorithms produce significantly more false positive “matches” for images of West and East African and East Asian faces than for images of Eastern European faces.

In many cases, consumers are not even aware that their biometric information is being harvested.

The FTC advises: “In light of the evolving technologies and risks to consumers, the Commission sets out … examples of practices it will scrutinize in determining whether companies collecting and using biometric information or marketing or using biometric information technologies are complying with Section 5 of the FTC Act [unfair or deceptive acts or practices].”

Under Section 5, a practice is unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or competition. This includes deceptive marketing claims about biometrics’ accuracy, insufficient attention to known or foreseeable risks, and the surreptitious collection of biometric information.

Biometric providers in race with third parties

Speaking to Emma Taylor, a Thematic Analyst at GlobalData, on Instant Insights: Biometrics explained: Applications, bad actors, and AI implications, iProov’s Head of Product Anthony Lam concedes that biometrics is fraught with potential danger.

Arguing that there are ways of improving biometrics’ accuracy, Lam said: “There are technological ways we can make it more accurate. For example, if you’re comparing one-to-one, say, if you’re comparing you with your driving licence, it’s a one-to-one comparison. But if I’m matching you with the database, that’s one to many – that accuracy is a bit lower.”
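Lam’s point about one-to-many matching can be illustrated with a back-of-the-envelope calculation: even a very low false match rate per comparison compounds across a large database, which is why 1:N identification is inherently less accurate than a 1:1 check against a driving licence. A minimal sketch in Python (the false match rate used here is illustrative, not a figure from the podcast or from iProov):

```python
def prob_false_match_1_to_n(fmr: float, gallery_size: int) -> float:
    """Probability of at least one false match when one probe is compared
    against a gallery of `gallery_size` enrolled templates, assuming
    independent comparisons each with false match rate `fmr`."""
    return 1.0 - (1.0 - fmr) ** gallery_size

# Hypothetical per-comparison false match rate of 1 in 100,000
fmr = 1e-5

print(f"1:1 check:           {prob_false_match_1_to_n(fmr, 1):.6f}")
print(f"1:N, gallery of 10k: {prob_false_match_1_to_n(fmr, 10_000):.4f}")
print(f"1:N, gallery of 1M:  {prob_false_match_1_to_n(fmr, 1_000_000):.4f}")
```

Under these assumptions, a check that is wrong one time in 100,000 for a single comparison produces a false match roughly 10% of the time against a 10,000-person gallery, and almost certainly against a million-person one.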

Lam also concedes, however, that the dawn of generative AI has increased security risks for adopters of biometrics: “It’s helped the bad guys generate attacks better and quicker. You’re now able to generate faces and voices that can be similar to the real person. Generative AI has made it easier for third-party actors to break systems.”

“However, as people developing these biometrics systems, it’s now pushed us even further to strengthen the systems, to defend against these new attacks. We’re always trying to stay one step ahead, and if we can’t, we’re trying to close the gap – it’s just an arms race with the third-party actors.”