Comment
September 6, 2021

Twitch needs to flip the switch on hate raids

By GlobalData Thematic Research

Twitch must act swiftly to prevent hate raids or lose streamers and viewers to competitors. Hate raids involve flooding a streamer’s chat with abusive messages from bot accounts. Current measures, including human moderators, are insufficient to tackle the growing abuse broadcasters face on the popular streaming site, which is used predominantly by gamers.

On 1 September 2021, streamers on Amazon-owned Twitch boycotted the platform to protest its inaction over hate raids. Marginalised creators from ethnic minority or LGBTQIA backgrounds are the prime targets of these hate raids. In August 2021, Black Twitch streamer RekltRaven set up the #TwitchDoBetter hashtag on Twitter to call out the giant streaming platform for failing to protect its content creators.

In May 2021, Twitch launched 350 new tags related to gender, sexual orientation, race, and disabilities. These tags aimed to make it easier for users to find creators who they feel represent them. These tags, however, have also placed a target on the backs of marginalised streamers who are then subject to racist, sexist, and homophobic abuse online.

Bots can help prevent hate raids

Twitch’s chat moderation tool AutoMod uses a mix of artificial intelligence (AI) techniques, such as machine learning (ML) and natural language processing (NLP), to root out inappropriate content. AutoMod can automatically block messages containing certain inappropriate words and ban abusive users. It can also identify attempts to circumvent filters using character substitutions and emojis.
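AutoMod itself is proprietary, but the filter-circumvention problem it tackles can be illustrated with a minimal sketch: normalise common look-alike character substitutions before matching against a blocklist. The substitution table and blocklisted terms below are hypothetical, purely for illustration.

```python
# Minimal sketch of a chat filter that catches simple attempts to dodge
# a word blocklist via character substitutions (e.g. "sp4mw0rd").
# Hypothetical illustration only; Twitch's AutoMod is proprietary and
# far more sophisticated.

# Map common look-alike characters back to their base letters.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

BLOCKLIST = {"spamword"}  # hypothetical banned term

def normalize(message: str) -> str:
    """Lowercase the message and undo common character substitutions."""
    return message.lower().translate(SUBSTITUTIONS)

def is_flagged(message: str) -> bool:
    """Flag a message if any blocklisted term survives normalization."""
    normalized = normalize(message)
    return any(term in normalized for term in BLOCKLIST)
```

For example, `is_flagged("SP4MW0RD!!!")` returns `True` because normalization reduces the message to "spamword!!!", while an innocuous message passes through unflagged. Real systems must also handle spacing tricks, emojis, and context, which is where ML and NLP come in.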

However, AI tools are not mature enough to accurately identify and tackle all hate messages, nor can they prevent spamming on chats before it occurs during a livestream. As a result, spam and hate messages often slip through the net. Therefore, streamers have called for these tools to be coupled with further measures, such as requiring proper identification before a user can set up an account.

Twitch streamers employ human moderators (or ‘mods’), who ensure that behaviour and content standards are met by removing offensive posts and spam in the chat. Streamers also take matters into their own hands – disabling chat altogether or only allowing subscribed followers to participate after a hate raid occurs.

A Twitch streamer exodus is likely

Twitch is the most popular streaming platform. According to TwitchTracker, there are currently over 9 million creators on the platform, up 31% from 2020, with an estimated average of 2.8 million concurrent viewers. Its popularity means streamers are heavily reliant on the platform for exposure and revenue.

Twitch reportedly takes a 50% cut of a streamer’s revenue, which has prompted many to promote their alternative accounts on competitor platforms that offer better deals. YouTube Gaming, for example, takes a 30% revenue cut, and Facebook Gaming has a no-commission policy until 2023. Streamers are encouraging their followers to subscribe to their channels on other platforms with the #SubOffTwitch hashtag in conjunction with the 24-hour boycott.
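The impact of those commission rates on a streamer's take-home pay can be made concrete with a small sketch. The $1,000 gross figure is a hypothetical example; the rates are those cited above.

```python
# Hypothetical illustration: a streamer's share of the same $1,000 in
# subscription revenue under each platform's reported commission rate
# (Twitch ~50%, YouTube Gaming 30%, Facebook Gaming 0% until 2023).

def streamer_payout(gross: float, commission_rate: float) -> float:
    """Return the streamer's share after the platform takes its cut."""
    return round(gross * (1 - commission_rate), 2)

gross = 1000.0
print(streamer_payout(gross, 0.50))  # Twitch: 500.0
print(streamer_payout(gross, 0.30))  # YouTube Gaming: 700.0
print(streamer_payout(gross, 0.00))  # Facebook Gaming: 1000.0
```

On these assumed numbers, the same audience is worth $200 more per $1,000 on YouTube Gaming and $500 more on Facebook Gaming than on Twitch, which helps explain the #SubOffTwitch push.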

In addition to lower commission rates, YouTube and Facebook have struck deals with established streamers to broadcast exclusively on their platforms, drawing traffic away from Twitch. Popular streamers with millions of followers, such as Dr Lupo and Valkyrae, have recently signed exclusive deals with YouTube. If Twitch fails to deal with hate raids and harassment effectively, a streamer exodus could be imminent.

Twitch must implement more sophisticated AI and user authentication tools to minimize online abuse. Streamers and moderators alone cannot deal with the hate raids and harassment taking place on the platform. Failure to act, coupled with Twitch’s higher commission rate, may drive more content creators and viewers into the arms of the competition. Ultimately, Twitch’s revenue and reputation are at risk from hate raids.