A wave of racial abuse, posted by a significant minority of users, flooded major social media platforms following England’s defeat in the UEFA European Football Championship (the ‘Euros’). This led many users to question social media platforms’ ability to monitor online harassment.

In the same week, Voice over Internet Protocol (VoIP) social media platform Discord announced that it is buying Sentropy, an AI platform that identifies abusive text posted by users in online communities. Discord has a poor track record on online abuse and is investing to tackle harassment and trolling on its platform.

Big Tech social media platforms must increase investment in moderating online content and tackling harassment. With 1.69 billion Facebook users and over 1 billion Instagram users, these platforms have a duty of care to protect people online. Favouring profits over user care will not sit well with Gen Z consumers, who will look beyond Facebook, Instagram, and Twitter.

Facebook needs to adapt its moderation strategy

Facebook already uses AI and human moderators to fight online harm. Despite this, Facebook’s community guidelines indicate that the onus often falls on the user: “in certain instances, we require self-reporting because it helps us understand that the person targeted feels bullied or harassed.”

Facebook also applies different community guidelines to public figures and private individuals. The guidelines suggest Facebook allows “critical commentary” of public figures but will remove severe attacks. However, in the aftermath of the Euros, black players received a torrent of racial abuse online. The Center for Countering Digital Hate, an NGO, found that Instagram failed to remove 94% of accounts targeting footballers with racism. Users also questioned why Facebook and Instagram were able to add Covid-19 fact checks to posts but could not flag potentially racist ones.

Facebook and Instagram have previously invested in AI to monitor posts. In 2016, Facebook introduced DeepText to monitor the textual content of posts, though it was mainly used to eliminate spam. Instagram has more recently started using DeepText to combat online trolls and harassment, with a moderating team working alongside the AI. However, the barrage of racism after the football final suggests further improvement and investment are needed to protect Facebook’s vast online communities.

Contextually aware AI can help detect racial abuse

In the same week as the Euros final, VoIP and instant messaging platform Discord bought Sentropy, an AI software company tackling abusive text online. Discord has had a poor reputation in the past, with far-right groups having used the platform. The acquisition is part of its “multilevel” approach to moderation.

The Sentropy Detect API scores text strings against specific classes of abuse, allowing platforms to monitor, categorize, investigate, and moderate user-generated content. The ‘IDENTITY_ATTACK’ feature identifies attacks based on a shared identity, such as ethnicity, nationality, race, religion, gender, age, or sexual orientation. An attack is defined as anything from a violent threat to a seemingly ambiguous phrase with undertones of derogatory slurs.
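As a rough illustration of how a platform might consume such per-class scores, here is a minimal Python sketch. It is not the actual Sentropy API: the endpoint URL, request and response fields, and the 0.8 threshold are all assumptions for illustration; only the IDENTITY_ATTACK class name comes from the description above.

```python
import requests

# Hypothetical endpoint and payload shape, for illustration only;
# the real Sentropy Detect API may differ.
DETECT_URL = "https://api.example.com/v1/detect"  # placeholder, not the real URL

def score_message(text: str, api_key: str) -> dict:
    """Submit a text string for abuse classification and return per-class scores."""
    response = requests.post(
        DETECT_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # assumed shape: {"IDENTITY_ATTACK": 0.97, ...}

# Example usage (requires a real endpoint and key):
# scores = score_message("example user comment", api_key="YOUR_KEY")
# if scores.get("IDENTITY_ATTACK", 0.0) > 0.8:  # assumed threshold
#     print("Flag for priority human review")
```

Scoring each class of abuse separately, rather than returning a single ‘abusive’ verdict, is what lets platforms route different kinds of harm to different moderation workflows.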

AI trained to be contextually aware of online abuse helps content moderators direct their efforts where they are most needed.

Moderators should work in tandem with improved AI

Facebook already has 15,000 content moderators in its arsenal, making up 24% of its workforce. The company needs to invest more in automated systems for content moderation, both to help moderators prioritize harmful posts and to stop hate crimes in online communities.
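One way to combine automated scoring with human review is score-based triage: high-confidence detections are actioned automatically, while borderline cases are queued for moderators in priority order. The sketch below assumes posts have already been scored by an abuse-detection model; the thresholds are invented for illustration and do not describe Facebook’s actual pipeline.

```python
import heapq

AUTO_REMOVE = 0.95   # assumed: high-confidence abuse is removed automatically
HUMAN_REVIEW = 0.60  # assumed: ambiguous cases are routed to moderators

def triage(scored_posts):
    """Split scored posts into automatic removals and a moderator queue
    ordered so the most likely harmful content is reviewed first."""
    removed, review_queue = [], []
    for post_id, score in scored_posts:
        if score >= AUTO_REMOVE:
            removed.append(post_id)
        elif score >= HUMAN_REVIEW:
            heapq.heappush(review_queue, (-score, post_id))  # max-heap by score
    return removed, review_queue

removed, queue = triage([("a", 0.97), ("b", 0.72), ("c", 0.10), ("d", 0.85)])
while queue:
    _, post_id = heapq.heappop(queue)
    print("review next:", post_id)
# "a" is removed outright; moderators see "d" before "b"; "c" is left alone.
```

The point of such a queue is exactly what Sentropy’s CEO describes below: tooling that surfaces the worst content first, so a finite moderation team spends its time where it matters most.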

Sentropy CEO John Redgrave said, “We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts.”

Big Tech platforms should invest in content moderation tools and update their guidelines on racial abuse, or risk losing a significant proportion of their target users.