Misinformation is the spread of false information, whether the intent behind it is malicious or it simply results from inaccuracy or honest mistakes.

False or misleading information can shape elections, influence public health decisions, and erode trust in institutions. While scholars debate the best way to counteract it, recent events demonstrate real-world consequences of misinformation and the urgent need for coordinated responses.

Misinformation in politics

Political misinformation is especially influential. Online platforms can potentially improve access to and sharing of credible information, acting as inclusive public spheres where all voices can be heard with equal access to information, which is essential for a functioning democracy. However, these information spaces are being polluted with misinformation. This is particularly problematic in the run-up to elections, as voters need access to accurate and reliable information to make well-informed voting decisions.

A misinformation event with significant implications for democracy was the January 6, 2021, attack on the US Capitol and its aftermath. Misinformation spread that the Democrats had “stolen” the 2020 Presidential election, with then-President Donald Trump promoting the lie on social media alongside right-wing media outlets such as Fox News.

Shortly after, Trump supporters marched on the Capitol building, chanting slogans like “Stop the steal,” while some carried symbols related to QAnon, a conspiracy theory and political movement originating on the far-right of US politics. According to an internal Meta document, users reported approximately 40,000 “false news” posts per hour on the day of the attack.

Globally, similar patterns have emerged. In Brazil, misinformation about the electoral system was used to cast doubt on the 2022 presidential election, contributing to mass protests and unrest. In the United Kingdom, the Brexit referendum was marked by misleading claims, such as the widely circulated promise that leaving the European Union would redirect £350m ($470m) a week to the National Health Service (NHS). This figure was later proven inaccurate, but only after it had become highly effective as a political slogan.


Laws and regulatory responses

Since 2018, several initiatives from governments worldwide have attempted to stem the flow of online misinformation. However, simple regulation is not a viable solution in democratic countries, as laws that curb fake news can threaten freedom of expression. Therefore, most initiatives are guidelines, proposals, or laws that apply to a specific period.

The EU’s Digital Services Act aims to tackle online harm by regulating online platforms and search engines. It does not specifically regulate misinformation but complements the EU’s Code of Practice on Disinformation, a multistakeholder initiative to which some of the largest social media companies and search engines voluntarily signed up to tackle misinformation. In the UK, the Online Safety Act introduced standards for internet companies to protect users from online harm. It established a misinformation committee to advise the designated communication watchdog, Ofcom, which regulates the internet, and included detailed provisions for tackling foreign interference misinformation. Ofcom will require internet companies such as Meta and Google to publish explicit statements about the content and behavior they deem acceptable on their sites.

Technology and the acceleration of misinformation

What makes misinformation particularly potent today is not only its content but its delivery. Social media platforms amplify stories that generate strong emotional reactions, regardless of their accuracy. Unlike traditional media, which operates through editors and fact-checking systems, digital media often prioritise speed and engagement over reliability.

Major technology companies have introduced measures to slow down the spread of misinformation. One prominent example is X (formerly Twitter). Its approach to combating misinformation underwent a major transformation following Elon Musk’s takeover. In 2023, the use of Community Notes to fact-check posts became widespread. This crowd-sourced feature allows users to add context to potentially misleading posts on the platform. Users vote on submitted notes. If a note receives enough positive votes from people with different viewpoints, it is published and displayed to everyone.

Meanwhile, Meta announced in early 2025 that it was removing its US third-party fact-checking program, shifting toward a “Community Notes” system similar to X’s. As of January 2025, Meta also loosened restrictions on “topics that are part of mainstream discourse,” scaling back content policies tied to political, health, and social controversies. There has been pushback from civil society and human rights organizations, which argue that replacing third-party fact checkers with community notes could allow more harmful misinformation to spread.

Beyond the falsehoods

Misinformation is not simply a matter of people believing the wrong facts—it is a social and political force that shapes collective behaviour and influences decision-making on issues as vital as health and governance.

Addressing the problem requires not only technological fixes but also a broader societal commitment to strengthening information systems and rebuilding confidence in reliable sources.