Alex Jones’ $1 billion trial is a historic moment in the fight against misinformation: the judgment sets a clear legal precedent for cases that follow. However, those on the political fringes claim the trial represents an assault on free speech itself. Misinformation has thrived online, social media companies have been slow to implement policies to curb the spread of fake news, and regulators’ patience has worn thin.

Jones was found liable for defaming the families of victims of the 2012 Sandy Hook Elementary School shooting, in which 20 children and seven adults were murdered. Peddling misinformation is a lucrative business model: Jones’ net worth was estimated at between $135 million and $270 million. The billion-dollar ruling reflects the immense harm his misinformation inflicted on the Sandy Hook families, and how much he profited from it.

Misinformation thrives on social media

Estimates suggest that the global cost of misinformation was $78 billion in 2020. However, the damage caused by misinformation spreads far beyond the monetary impact, affecting society’s trust in governments and institutions. The insurrection at the US Capitol in January 2021 demonstrated the threat misinformation poses to democracy, while the COVID-19 pandemic saw people lose faith in the scientific process as a result of fake news online.

Social media has made the impact of misinformation greater than ever, as it can now reach a global audience almost instantly. Governments around the world recognize the threat of misinformation and have begun taking a proactive approach to social media regulation.

Social media companies have traditionally taken an apathetic stance on misinformation

Social media companies have often cited freedom of speech to shield themselves from responsibility. “Facebook shouldn’t be the arbiter of truth,” said Mark Zuckerberg in a May 2020 Fox News interview. But regulators are increasingly holding social media companies accountable for the content posted on their sites and how that content is disseminated. Sophisticated algorithms harness big data and artificial intelligence (AI) to target content at those most receptive to it, often creating echo chambers that amplify misinformation and harmful rhetoric. Regulators want greater transparency on how these algorithms are designed, but social media companies guard those designs closely.

In response, social media companies are self-regulating to avoid further government intervention. They have traditionally relied on human moderators to flag inappropriate content, but are phasing in AI-based automated moderation because of the sheer volume of uploaded content. AI cannot yet match the accuracy of a human moderator, yet human moderators are exposed to graphic content on a near-daily basis, resulting in high rates of mental health issues and burnout. In May 2020, Meta (then Facebook) agreed to pay a $52 million settlement to moderators who developed post-traumatic stress disorder on the job. In March 2022, former TikTok moderators sued the company, alleging that inadequate mental health support had caused them psychological trauma.


Not all social media companies are willing to self-regulate. Indeed, many market themselves as ‘free speech’ platforms where content goes uncensored. In October 2022, Donald Trump’s Truth Social was added to the Google Play Store despite previously being barred over posts that promoted violence. In the same month, Kanye West announced an agreement to buy the right-wing social media platform Parler, shortly after his Twitter and Instagram accounts were locked over antisemitic posts. These fringe platforms offer users a way to escape the self-regulation of major social media companies, highlighting the need for a standardized regulatory body.

Self-regulation will do little to assuage governments

There is disagreement on how regulatory policies should be imposed on social media, as over-zealous regulation can genuinely infringe on people’s freedom of expression. But the growing consensus among the international community is that something must be done.

Social media companies have time and again valued profits above all else; they cannot be trusted to act in the public’s best interest. Given social media’s global impact, governments around the world must cooperate to hinder the spread of misinformation while protecting individuals’ right to freedom of speech.