After the Buffalo shooting, New York state's top prosecutor opened an investigation into the role social media companies played in the attack. The US Department of Justice is also investigating the mass shooting as a hate crime. Once again, the platforms face criticism for spreading violent content and, some would argue, for their largely inadequate moderation efforts. But how do tech companies tackle extremist content online, and are those efforts sufficient?
Looking for a tech solution to a tech problem
The shooting has shed light on the lack of cooperation among social media platforms, which undermines moderation efforts. Twitch, the gaming platform the gunman used to livestream the massacre, said it took the video down within minutes of the violence starting, but edited versions still proliferated elsewhere, including on Facebook and Twitter. Facebook's parent company Meta did not remove a link to the copied video for over 10 hours, by which time it had been shared more than 46,000 times. Additionally, TikTok users shared search terms that would take viewers to the full video on Twitter.
Tech experts have pointed out that the technology used to identify live-streamed violent content needs to be shared across the industry to stop the redistribution of edited videos. In an interview with The Guardian, Hany Farid, a professor of computer science at UC Berkeley, said the technology to stop redistribution across multiple platforms already exists. Known as hashing, it creates a digital fingerprint of a video that lets platforms detect it, and copies of it, as soon as they are uploaded. The failure to contain violent material highlights tech companies' lack of motivation to fix the problem and to invest in adequate technology.
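To make the idea concrete, here is a minimal sketch of perceptual hashing in Python. Everything in it is illustrative: the 8x8 grayscale frames, the average-hash scheme, and the matching threshold are simplifying assumptions, and production systems (including the hash-sharing databases the industry already operates for known terrorist content) use far more robust video fingerprints.

```python
# A minimal sketch of perceptual hashing for frame matching, using only the
# standard library. The frame format and threshold are illustrative
# assumptions, not any platform's actual system.

from statistics import mean

def average_hash(pixels: list[list[int]]) -> int:
    """Compute a 64-bit average hash from an 8x8 grayscale frame (0-255)."""
    flat = [p for row in pixels for p in row]
    avg = mean(flat)
    bits = 0
    for p in flat:
        # Each pixel contributes one bit: brighter than average or not.
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_match(a: int, b: int, threshold: int = 10) -> bool:
    """Treat hashes differing in only a few bits as near-duplicates.

    The threshold is an assumption; in practice it would be tuned so that
    re-encoded or lightly edited copies still match."""
    return hamming_distance(a, b) <= threshold

# Usage: a slightly brightened copy of a frame still matches the original.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
edited = [[min(p + 5, 255) for p in row] for row in original]
print(is_match(average_hash(original), average_hash(edited)))  # True
```

The design point is that near-duplicates, such as a re-encoded or lightly edited copy, produce nearly identical fingerprints, which is why experts argue the fingerprints themselves should be shared across platforms rather than kept in-house.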
Recommendation systems are key to platforms’ monetization model
Social media platforms are also increasingly scrutinized for failing to deal with algorithms that steer users towards harmful content. The Buffalo shooting showed that people with extremist views are exposed to increasingly radical content and encouraged to watch videos similar to those they have already watched, a result of recommendation engines that can feed addictive behavior. Recommendation systems across social media platforms play a crucial role in keeping users engaged, ultimately allowing the platforms to collect and monetize their data. These so-called microtargeting practices use algorithms designed to improve the customer experience and, more importantly, generate profits. According to Hootsuite, 70% of what people watch on YouTube is determined by its recommendation algorithm.
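The feedback loop described above can be sketched in a few lines of Python. The topic tags, similarity measure, and ranking below are illustrative assumptions, not any platform's actual algorithm; they simply show how ranking candidates by similarity to past viewing keeps surfacing more of the same.

```python
# An illustrative sketch of similarity-driven recommendation. Videos are
# tagged with topics; candidates that overlap most with the user's watch
# history are ranked first, which is the loop that can steer users toward
# ever-similar content.

def topic_overlap(history_topics: set[str], video_topics: set[str]) -> float:
    """Jaccard similarity between a user's history and a candidate video."""
    union = history_topics | video_topics
    return len(history_topics & video_topics) / len(union) if union else 0.0

def recommend(history: list[set[str]],
              candidates: dict[str, set[str]], k: int = 3) -> list[str]:
    """Return the k candidates most similar to what was already watched."""
    seen_topics = set().union(*history)
    ranked = sorted(candidates,
                    key=lambda v: topic_overlap(seen_topics, candidates[v]),
                    reverse=True)
    return ranked[:k]

# Usage: a history heavy in one theme pulls similar candidates to the top.
history = [{"politics", "conspiracy"}, {"conspiracy", "firearms"}]
candidates = {
    "video_a": {"cooking", "travel"},
    "video_b": {"conspiracy", "politics"},
    "video_c": {"firearms", "conspiracy", "extremism"},
}
print(recommend(history, candidates, k=2))  # ['video_b', 'video_c']
```

Nothing in this loop distinguishes harmful topics from benign ones: the ranking simply rewards similarity, which is why engagement-optimized systems can deepen exposure to extremist material without anyone designing them to do so.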
Regulators and policy-makers are putting new legislation in place aimed at addressing the root causes of online harm. While this scrutiny will not result in companies being prosecuted over their algorithms, it will put them under immense pressure.
Regulation is catching up with social media
The Buffalo shooting also shows that the self-regulation of tech companies is not enough and that some form of legislation is needed to tackle online harm.
Three years after the Christchurch mosque shooting in New Zealand (which was initially live-streamed on Facebook), social media platforms' pledges to do more to limit violent content appear meaningless. However, things are slowly starting to change. The Online Safety Bill in the UK and the Digital Services Act in the EU are pieces of legislation targeting criminal activity online.
Although neither law removes the platforms' liability exemption, so they still cannot be held directly responsible for user-generated content, the new rules will force them to be more transparent and accountable in their attempts to tackle online hate and misinformation.