Google said on Tuesday (12 March) that it will restrict its AI chatbot Gemini from answering questions about this year’s many global elections.

The move comes after the World Economic Forum (WEF) identified AI-driven election disinformation as the most significant threat facing the world in 2024.

Concerns about AI-driven misinformation have skyrocketed with the widespread availability of GenAI applications such as Google’s Gemini and OpenAI’s ChatGPT.

“In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses,” a company spokesperson said.

In 2024, elections will be held in India, Mexico, the UK and the US, as well as across Europe.

According to a recent WEF report, AI-driven disinformation will be a more significant threat this year than climate change, conflict, or economic fragility.

Google’s AI products have repeatedly come under fire over misinformation. In 2023, Gemini, then known as Bard, notoriously gave an inaccurate answer during its launch demo.

Google was also forced to pause the chatbot’s image-generation feature last month after users found it was producing images with overt historical inaccuracies.

Google CEO Sundar Pichai labelled the chatbot’s responses “biased” and “completely unacceptable”.

GlobalData forecasts that the overall AI market will be worth $909bn by 2030, registering a compound annual growth rate (CAGR) of 35% between 2022 and 2030.

In the GenAI space, revenues are expected to grow from $1.8bn in 2022 to $33bn in 2027 at a CAGR of 80%.
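
For readers who want to sanity-check those growth figures, they are consistent with the standard compound annual growth rate formula. The short sketch below is not GlobalData’s methodology, just an illustrative calculation using the GenAI numbers quoted above.

```python
# Compound annual growth rate: CAGR = (end_value / start_value) ** (1 / years) - 1
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# GenAI revenues quoted above: $1.8bn in 2022 to $33bn in 2027 (five years)
print(f"GenAI CAGR 2022-2027: {cagr(1.8, 33, 5):.0%}")  # ~79%, in line with the reported 80%
```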