Following a recent audio deepfake of US President Joe Biden, GlobalData analyst Emma Christy warned that the growing ubiquity of generative AI (GenAI) deepfakes could trigger a wave of voter distrust, even in legitimate news sources. 

The New Hampshire attorney general’s office is investigating a series of calls made to voters in the state encouraging them not to vote in the upcoming US primary election, which precedes November’s presidential election.  

The AI-generated voice in the recordings is designed to sound like President Biden and includes well-known phrases used by the President. 

The attorney general’s office described the calls as attempted voter suppression and confirmed in a statement that the voice was generated using AI. 

“These messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential primary election and to suppress New Hampshire voters. New Hampshire voters should disregard the content of this message entirely,” the statement read. 

Christy warned that, although the fake call was recognised early, the deepfake may still have a wider impact on voter behaviour and trust in the media. 

“Audio deepfakes are troubling, as it is easier and cheaper to replicate voice without the corresponding video, and they are difficult for even technology to detect,” said Christy.

“A significant number of people will be unable to discern deepfake audio from reality with catastrophic implications for countries holding elections this year,” she added.

Research by University College London (UCL), published in August 2023, found that participants could identify whether a voice was AI-generated with only around 70% accuracy, leaving considerable room for deepfake voices to go undetected. 

UCL also noted that, while GenAI historically required huge swathes of data to replicate a voice, the technology has since developed to the point where only a few short clips of a person’s voice are needed to mimic it accurately. For public figures such as the US President, whose speeches are recorded and broadcast publicly, this creates an easy opportunity for their vocal likeness to be imitated. 

Christy warned that, as deepfake technology becomes ubiquitous, voters could begin to distrust even legitimate media and news sources, with the line between real and synthetic content becoming increasingly blurred.

Christy was particularly concerned about the possibility of a ‘liar’s dividend’, whereby genuine content can be dismissed as fake, emerging as voters become hyperaware of AI-generated content.

Whilst the Biden deepfake reached voters via phone calls and voice messages, media outlets that report on deepfakes may also face scrutiny if voters are unable to tell a deepfake video or voice from reality.

“Combining social analytics for voter profiling with automatically generated content could exacerbate the problems experienced in recent elections,” said Christy.

She added: “Tech companies must work with regulators to devise content warnings for AI-manipulated content and push for more stringent regulation of disinformation before deepfake deceptions undermine democracy.”