Hate groups still run rampant across social media platforms, a new report has warned. The social networks are criticised in particular for over-relying on artificial intelligence and for cutting the number of human moderators, counter to experts’ recommendations.

“Prohibiting individuals promoting hate, fake news and conspiracy theories should be number one on their agenda but the simple fact is that it is not,” Jack Williams, head of new business and marketing at Atomic London, told Verdict.

The annual State of Hate report from the charity HOPE not hate assessed the state of far-right extremism in the UK. The report’s section on online hatred laid out a damning list of social media developments over the past year: neo-Nazi groups using Instagram to recruit teenagers, QAnon conspiracists amassing tens of thousands of followers on platforms like Twitter and YouTube, abuse against minorities on platforms like Snapchat and Mumsnet, and the extensive proliferation of fake news.

A Facebook spokesperson said that the company does “not want hate on our platform”, adding that it had “removed a number of accounts belonging to The British Hand and National Partisan Movement”, two far-right organisations named in the report as recruiting teens on Instagram.

“We’ve banned over 250 white supremacist organisations from Facebook and Instagram, and will continue removing content that praises, supports or represents these groups,” said the spokesperson. “That includes content containing swastikas and other hate symbols. Last year, we removed nearly one million pieces of content tied to hate organisations from Instagram and we’re always investing in technology to find and remove it faster.”

Joe Mulhall, senior researcher at HOPE not hate, told Verdict that these bans do almost nothing to prevent the proliferation of hate online.

“There are no quick fixes,” Mulhall said. “Deplatforming extremists is an important and useful step but it’s not a long term solution.”

The web giants often say they use AI and other automated tech to combat hate speech and misinformation. In November, for instance, Facebook said in a blog post that AI “is a critical tool to help protect people from harmful content”, enabling the Menlo Park-headquartered behemoth to “scale the work of human experts, and proactively take action, before a problematic post or comment has a chance to harm people.”

The networks’ AI tools have typically been combined with human moderators to some extent. These human checkers often review thousands of posts per day, many depicting the very worst that humanity has to offer, with very little mental health support to cushion the psychological toll. But as Covid-19 forced these workers to leave their offices, companies like Twitter, Google and Facebook began relying more heavily on AI-based moderation. As a consequence, more fake news and hate speech slipped through the cracks.

“AI-based fact checkers can be useful but they are never enough and at the moment people are still required to deal with the more nuanced cases,” said Mulhall. “The far-right seek to get around platform rules and employ coded imagery and language. To deal with this platforms need well trained expert moderators and they need more of them.”
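
The hybrid workflow Mulhall describes, automated scoring backed by human review for ambiguous cases, can be sketched in miniature. The snippet below is an illustration only, not any platform’s actual system; the flagged-term list, the thresholds and the toy `score_toxicity` stand-in are all hypothetical.

```python
# A minimal sketch of human-in-the-loop moderation triage.
# Everything here is hypothetical and for illustration only.

FLAGGED_TERMS = {"badword1", "badword2"}  # placeholder tokens, not a real lexicon

AUTO_REMOVE_THRESHOLD = 0.9   # near-certain violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.4  # ambiguous posts are queued for an expert moderator

def score_toxicity(text: str) -> float:
    """Toy stand-in for a trained classifier: fraction of tokens that are flagged.
    A real system would call a machine-learning model here."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(token in FLAGGED_TERMS for token in tokens) / len(tokens)

def triage(text: str) -> str:
    """Route a post based on the classifier's confidence."""
    score = score_toxicity(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # high confidence: act before the post spreads
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # nuanced case: a human makes the final call
    return "keep"              # low score: leave the post up

print(triage("badword1 badword2"))   # remove
print(triage("badword1 maybe"))      # human_review
print(triage("an ordinary post"))    # keep
```

The design point is the middle band: posts the model is unsure about are routed to people, which is why cutting moderator numbers while keeping the same thresholds means either more wrongful removals or more harmful content left up.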

Using AI to fight fake news is a flawed strategy because of the very nature of the content, experts argue.

“Hate speech detection is a subjective problem and the definitions keep evolving,” Dhruv Ghulati, founder and CEO at AI-powered fact-checking platform Factmata, told Verdict. “Language and the types of insults and conspiracies we see keep adapting, and no one conspiracy is really similar to the other. Algorithms by definition cannot be 100% accurate, else they would be a set of conditional rules. So I believe we will always have a sliver of content that slips through the gaps.”
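
Ghulati’s distinction between algorithms and “a set of conditional rules” can be seen in miniature below: an exact-match blocklist is exactly such a rule set, and a single substituted character defeats it. The blocklist and coded spelling are placeholders, not real examples from the report.

```python
# Why rule-based filters miss coded language: a toy example.
# The blocklist and the coded spelling are placeholders, for illustration only.

BLOCKED_TERMS = {"hateterm"}  # stand-in for a real blocklist

def rule_based_filter(text: str) -> bool:
    """A pure conditional rule: flag only exact matches against the blocklist."""
    return any(token in BLOCKED_TERMS for token in text.lower().split())

print(rule_based_filter("hateterm aimed at a minority"))   # True: exact match caught
print(rule_based_filter("h4teterm aimed at a minority"))   # False: one swapped character evades the rule
```

A statistical classifier can generalise beyond exact matches, but only probabilistically: wherever its decision threshold is set, some borderline content scores just below it, which is the sliver that “slips through the gaps” Ghulati describes.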

Others argue that the proliferation of harmful content can only be prevented if social media platforms do more to collaborate with other businesses, researchers and governments.

Verdict reached out to Google, Twitter and Snapchat, but none responded with comment before the publication of this story.

The latest blow

The report is the latest critique of Silicon Valley tech giants’ failure to fight fake news, conspiracies and racism. Over the course of 2020, experts repeatedly warned about how misinformation campaigns about Covid-19, politics and minorities thrived online. However, despite repeated calls to action, only limited prevention efforts were initiated, such as Facebook freezing political ads in the run-up to the November US vote.

Members of the US Senate challenged the CEOs of both Facebook and Twitter in late 2020 over whether their steps to prevent the spread of voter fraud claims had been far-reaching enough.

This criticism culminated in January with the storming of the US Capitol, which had been fuelled by a tsunami of misinformation washing over the web. Donald Trump fanned the flames, suggesting that the election had been stolen, and was kicked off both Twitter and Facebook in the aftermath.

Since then, Facebook and Twitter have both publicly launched initiatives to handle misinformation.

Facebook announced a partnership with fact-checking charity Full Fact to provide people with “additional resources to scrutinise” online content. While Full Fact is described by both organisations as providing “third party” fact checks, it receives the majority of its funding from Facebook.

Twitter introduced a new system in January enabling users to flag tweets containing false information, and in March added a three-strike system for accounts spreading conspiracy theories about coronavirus vaccines.


Read more: Will US Capitol riots spark further change in tackling misinformation?