Google has announced it will require advertisers to disclose election ads with digitally altered content depicting realistic-looking people or events, in an effort to combat election misinformation. 

Google said on Monday (1 July) that advertisers will be required to select a checkbox in the “altered or synthetic content” section of its campaign settings.


Selecting the checkbox will generate in-ad disclosures on election content across mobile phones, computers and TV.

The move comes after the World Economic Forum warned that AI disinformation would pose the most significant threat to the world in 2024.

Widespread adoption of AI and GenAI technology has made it harder to distinguish what is real from what is fake.

Deepfakes are typically video or audio content that has been altered to misrepresent someone. AI has made deepfakes much cheaper and easier to produce at scale.


The technology has become so powerful that some deepfakes are almost indistinguishable from reality, especially when paired with generated audio.

Many deepfakes are lighthearted in tone; without disclaimers, however, it will be hard for voters to tell what is real and what is fake.

With GlobalData forecasting the AI market to reach $900bn by 2030, deepfakes are likely to improve rapidly as the technology becomes more sophisticated.

In 2023, Meta said it would make advertisers disclose if AI or other digital tools were used in election advertisements on its platforms.