Over the last decade political campaigning has shifted from traditional media outlets towards social media platforms.

However, the nature of political debate across those social media platforms is about to see yet another evolution. Some are even calling the 2024 US presidential election "the AI election".

According to Andrea Tesei at Europe’s Centre for Economic Policy Research, the 2008 US presidential election was the first email election; 2016 and 2020 were the social media platform election years; and 2024 will be the first US presidential election to truly harness the persuasive powers of AI.

Tesei’s research on consecutive US presidential elections leads him to conclude that what differentiates the next US election cycle is AI’s potential to engage in the micro-targeting of voters. “If we are really able to know who swing voters are – in the spirit of the Cambridge Analytica case of 2016 – but even more effectively, it would become easier to convince swing voters,” says Tesei. But how effective this will be in manipulating the outcome of an election is still unclear, he adds.

The enduring public narrative that social media helped elect President Trump in 2016 is a view reinforced by Trump’s own campaign team. Trump’s 2016 election digital media director was quoted in Wired that same year saying: “Facebook and Twitter were the reasons we won this thing. Twitter for Mr. Trump. And Facebook for fundraising.”

However, research shows that thirty million Americans were exposed to fake news during the 2016 election campaign, according to John Bates Clark medal-winning economist and Stanford professor Matthew Gentzkow’s Social Media and Fake News in the 2016 Election. Yet that exposure had only a minimal effect in shifting the voting patterns of swing voters, according to Gentzkow’s research.


Social media’s polarising effect

The 2016 US presidential election coincided with the widespread adoption of social media and the ramping up of Big Tech’s attention-based revenue model. Social media platforms’ relentless quest for user attention and Trump’s polarising entry into political life created a perfect storm for political and social disruption.

The polarising effect of ‘media bubbles’ has only increased in the intervening period. And with AI enabled micro-targeting of swing voters, the potential for changing election outcomes may have just reached a tipping point. Generative AI’s spectacular entry into the public arena began with the launch of OpenAI’s ChatGPT in November 2022. Rapid advances in generative AI since then have raised serious concerns about the use of deep fakes in political messaging, which GlobalData defines as visual or audio content manipulated or generated using AI to deceive the audience.

Unlike the studio special effects used in past elections, deep fakes are becoming increasingly hard to distinguish from original content. This constitutes a critical issue, especially for media and news organisations that build their brands and reputations on trust, according to GlobalData’s 2023 Artificial Intelligence in Media thematic research report.

In April 2023, President Trump’s campaign released what was touted as the first ever fully AI generated political ad. The clip portrayed a dystopian view of a US governed by President Biden and Vice President Harris for another term. The AI generated images were easily identified as fake by the discerning viewer, but demonstrated how AI could be used to create campaign messaging.

And in June 2023, reports emerged of campaigners for presidential candidate Governor DeSantis using the AI generated voice of his election rival Donald Trump, demonstrating the potential for AI enabled audio doctoring for campaigning purposes.

Will American democracy survive AI?

Marry AI’s potential for creating deep fake content with its ability to tailor that content to users’ voting profiles, and therein lies a clear and present danger to the democratic process.

GlobalData thematic analyst Josep Bori describes a scenario in which the whole campaigning cycle is automated: segmenting the electorate, then tailoring content and messaging to each specific segment in a continuous delivery cycle. He describes this automated content creation process as political messaging ‘on steroids’.
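To make the cycle Bori describes concrete, the sketch below shows a toy version of one pass of such a segment-then-target loop. Everything here is hypothetical for illustration: the voter fields, the persuadability scores, and the message templates are invented, and the templates merely stand in for content a generative model would produce per segment.

```python
# Toy sketch of an automated segment-and-target cycle.
# All data, fields, and messages are hypothetical illustrations.
from collections import defaultdict

# Hypothetical voter profiles: the issue each voter engages with
# most, plus a crude persuadability score standing in for a
# swing-voter model's output.
voters = [
    {"id": 1, "top_issue": "economy", "persuadable": 0.8},
    {"id": 2, "top_issue": "economy", "persuadable": 0.2},
    {"id": 3, "top_issue": "health",  "persuadable": 0.9},
    {"id": 4, "top_issue": "climate", "persuadable": 0.7},
]

# Hypothetical per-segment templates, standing in for
# AI-generated content tailored to each segment.
templates = {
    "economy": "Our plan cuts costs for families like yours.",
    "health":  "We will protect your local health services.",
    "climate": "We will invest in clean energy jobs near you.",
}

def build_campaign(voters, templates, threshold=0.5):
    """One pass of the cycle: keep only likely swing voters,
    group them by top issue, and attach a tailored message."""
    segments = defaultdict(list)
    for v in voters:
        if v["persuadable"] >= threshold:  # filter to swing voters
            segments[v["top_issue"]].append(v["id"])
    return {issue: {"voters": ids, "message": templates[issue]}
            for issue, ids in segments.items()}

campaign = build_campaign(voters, templates)
for issue, plan in campaign.items():
    print(issue, plan["voters"], "->", plan["message"])
```

In a real pipeline the segmentation would come from a learned model and the messages from a generative model, with the loop re-run continuously as engagement data arrives; the point here is only the shape of the cycle, not its scale.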

Bori says that within 1-3 years generative AI standards will have reached 95% accuracy. “Its use for electorally manipulative purposes will therefore be entirely intentional in the areas of microtargeting, deep fakes and misinformation,” he adds.

Whether AI enabled political messaging shifts voting patterns remains to be seen, as Tesei contends, but what it clearly does is defeat the purpose of democracy, says Bori. “You’re telling everyone what they want to hear rather than what you would do if you were in power. And you are doing it simply to win.”

It may be immoral but it’s not illegal, Bori points out. Though no laws have been broken (yet), the situation represents a major challenge for regulators. “How will a regulator track down thousands of hyper-customised messages to swing voters?” asks Tesei.

Of the various platforms, GlobalData senior thematic intelligence analyst Amelia Connor-Afflick says that while Elon Musk’s X, formerly Twitter, has undergone profound changes since Musk’s takeover, including declining user numbers, it remains well established and one of the most popular platforms for political discussion. Alternative platforms like Threads are not strong contenders ahead of the 2024 elections, adds Connor-Afflick.

Recent changes, including Musk’s decision to “more or less axe X’s content moderation team”, raise the risk of populist ideas and potential disinformation and misinformation spreading, which calls for greater regulation, warns Connor-Afflick.

Facebook played a significant role in the targeting of swing voters in 2016, as the Cambridge Analytica scandal revealed. Since then, Meta has set up an Oversight Board and a human rights team, and partnered with academic researchers in an attempt to understand its impact on the 2020 US election, according to Connor-Afflick.

But GlobalData analyst Laura Petrone says that, unfortunately, social media companies don’t seem to have learned any lessons from previous US presidential elections. “At the end of a decade of incredible growth, the social media industry has not adequately confronted its biggest challenge: how to tackle misinformation and hate speech in an ever-evolving regulatory environment, all while trying to make money,” says Petrone. In addition, Petrone says the current slowdown in these companies’ growth leaves little room for bold investments in content moderation.

There are reasons to believe that Putin’s best hope of winning the war in Ukraine lies in US politics, says Petrone. “Russia will seek to interfere in the 2024 presidential elections, targeting US voters with misinformation. Issues over algorithmic amplification of misinformation and microtargeted political ads will likely resurface,” she says.

Indeed, if the 2020 election is anything to go by, Petrone’s prediction is highly likely. The US National Intelligence Council’s assessment of foreign threats to the 2020 federal election was declassified in March 2021. The NIC asserted that: “Russian President Putin authorised, and a range of Russian government organizations conducted, influence operations aimed at denigrating President Biden’s candidacy and the Democratic Party, supporting former President Trump, undermining public confidence in the electoral process, and exacerbating sociopolitical divisions in the US.”

Governments have been stepping up efforts to regulate online content. Over the last few years, several initiatives from governments worldwide have attempted to stem the flow of online misinformation, especially through laws that apply during elections. In June 2023, the European Parliament approved its position on the AI Act, the world’s first comprehensive regulatory framework for AI. MEPs included AI systems designed to influence voters in political campaigns, and the recommender systems used by social media platforms, in its list of “high-risk” areas.

While the potential for bad actors to disrupt the political process with AI enabled campaigning increases in 2024, Simon Thompson, head of data science at UK IT services company GFT, is optimistic that AI tools may also become available to help voters detect and decode AI generated content. “This will help thwart the efforts of the bad actors trying to dupe voters. It is likely that in future, such tools will become increasingly necessary to help combat the influx of unverified AI content,” says Thompson. Indeed, in late August Google debuted SynthID, an AI image watermarking tool designed to alert users when an image is generated rather than original content.

In the meantime, while such tools are still in development, the volume of misinformation and the inability to discern what is AI generated may even drive candidates to become more focused on in-person events such as rallies and hustings, says Thompson.

Whether social media was the cause or the effect of the bifurcation of US political and social life, it remains one of the primary vehicles for the dissemination of political ideas to today’s electorate. Those on both sides of the political spectrum agree on one thing: misinformation on social media may not shift election results, but it does have the potential to harm the political process and engender mistrust in democracy.