Facebook content moderators have reported watching live-streamed suicides and beheadings, leading to headlines describing the role as one of the worst jobs in tech.

Unsurprisingly, social media can have damaging consequences for employees faced with disturbing content every day. This raises the question: are social media companies doing enough to support the mental health of their moderators and users?

Mixed messages for Facebook moderators

Social media giants moderate posts containing images, text, audio, and video. Facebook and Google have hired thousands of contract moderators over the last few years, but results have been disappointing: Facebook admits to a content moderation error rate of 10%.

The work has had a psychological impact on many moderators, who are required to view disturbing content for eight hours a day, often working nights and weekends on temporary contracts.

In 2018, former Facebook moderators, several of whom had developed symptoms of post-traumatic stress disorder (PTSD), sued the company for failing to provide a safe workplace. In May 2020, Facebook agreed to pay $52m to current and former moderators and to provide more counselling while they work. However, weeks later, the company updated its contracts and told many moderators to work an extra 48 minutes per day reviewing online child abuse.

Facebook’s content moderator guidelines were leaked in March 2021 following abusive posts towards the Duke and Duchess of Sussex. The guidelines revealed that public figures were considered permissible targets for death threats. The company claimed it wanted to “allow discussion, which often includes critical commentary of people who are featured in the news.” According to Facebook, a public figure is anyone mentioned in the title, subtitle, or preview of five or more news articles in the last two years; the only exception is anyone under 13 years old.

It is clearer than ever that Facebook has overlooked employee well-being and that the safety of its users is no longer among the company’s top priorities.

Social media companies need to work on creating a safer environment

Social media content moderators urgently require improved working conditions and a more stringent recruitment process. Adequate resilience training, mental health screening, and real-time support would create a safer work environment. However, the wider problem is that the social media business model encourages shocking and extreme content.

Most social media platforms are ad-funded, relying on ad sales for the bulk of their revenue (in 2019, nearly 99% of Facebook’s revenue came from ads). The aim is to hold the user’s attention for as long as possible, with little regard for the quality of the information provided or the user’s privacy. The long-term solution would be to force social media companies to adopt self-regulation policies in line with human rights standards, increasing accountability and reducing the need for human moderators. However, this is easier said than done.

Algorithms cannot replace human judgement

Content moderation algorithms are another tool that social media companies have at their disposal. Such algorithms can detect content that breaks a company’s rules and remove it from the platform without human involvement. While these systems are well-equipped to eliminate images of specific symbols, such as a swastika, they cannot replace human judgement on violent, hateful, or misleading content.
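To illustrate the kind of automated rule such systems apply, the sketch below shows a minimal, hypothetical moderation check in Python: it flags posts whose image matches a blocklist of known digests or whose text contains a banned phrase. The Post structure, the blocklists, and the placeholder hash are assumptions for illustration only; real platforms rely on perceptual hashing and trained machine learning classifiers rather than exact matches.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical blocklists: real platforms use perceptual hashing and trained
# classifiers rather than exact digests or keyword lists.
BANNED_IMAGE_HASHES = {"0" * 64}  # placeholder digest, not a real hash database
BANNED_PHRASES = {"example banned phrase"}


@dataclass
class Post:
    text: str
    image_bytes: bytes = b""


def violates_rules(post: Post) -> bool:
    """Return True if the post matches one of the simple automated rules above."""
    if post.image_bytes:
        digest = hashlib.sha256(post.image_bytes).hexdigest()
        if digest in BANNED_IMAGE_HASHES:
            return True
    lowered = post.text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)


# A matching post would be removed automatically; everything else still needs
# human review for context-dependent harms such as hate speech or misinformation.
if __name__ == "__main__":
    post = Post(text="This contains an example banned phrase.")
    print("remove" if violates_rules(post) else "route to human review")
```

The point of the sketch is the limitation it exposes: rules like these catch exact, known material, but deciding whether a borderline post is violent, hateful, or misleading still depends on human judgement.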

These algorithms can also create new problems. Research published by the New America think tank in 2019 found that algorithms trained to identify hate speech were more likely to flag social media content created by African Americans, including posts that discussed personal experiences with racism in the US. This suggests that algorithms can acquire biases that skew their filtering.
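As a rough illustration of what “more likely to flag” means in practice, the hypothetical snippet below compares the rate at which a classifier flags posts from two groups of authors. The sample data and group labels are invented for illustration; they are not the models or datasets used in the New America research.

```python
from collections import defaultdict

# Hypothetical labelled sample: (author_group, post_text, flagged_by_model).
sample = [
    ("group_a", "post 1", True),
    ("group_a", "post 2", False),
    ("group_b", "post 3", False),
    ("group_b", "post 4", False),
]


def flag_rates(rows):
    """Compute the share of posts flagged per author group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, _text, flagged in rows:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {group: flagged / total for group, (flagged, total) in counts.items()}


# A large gap between groups' flag rates is one signal that a model has learned
# a bias from its training data.
print(flag_rates(sample))
```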