The first two weeks of 2021 saw significant activity in the battle against misinformation, following the pro-Trump riots at the US Capitol. Trump’s incitement of violence from supporters protesting Biden’s US election win prompted Twitter, Facebook, and YouTube to ban his accounts from their sites.
Parler, the social network popular with Trump supporters and reportedly used to organise the Capitol protests, was subsequently removed from Google's and Apple's app stores and dropped by its hosting provider, Amazon Web Services (AWS). These moves were all aimed at curbing the spread of fake news and the offline violence it can fuel. Trump was also impeached for a second time after encouraging his supporters to act on his claims of election fraud.
Social media companies under pressure
After years of broad immunity from liability for user-generated content under Section 230 of the US Communications Decency Act, social media companies are under increasing pressure to regulate what appears on their platforms. The recent US elections and the Covid-19 pandemic have fuelled the spread of fake news and conspiracy theories on social media. Self-regulation efforts currently include flagging fake news, collaboration with human fact-checkers, content-moderation algorithms, and bans on political advertising during election campaigns.
Yet these companies remain largely unaccountable, and such measures are not applied uniformly. With no regulatory framework requiring them to explain their actions, Big Tech companies are free to act as they wish. Restricting content and self-regulating misinformation raise free speech concerns, whatever a platform's terms of service may say. Illegal content aside, giving a handful of companies sole control over what users can or cannot post could ultimately threaten one of the central tenets of democracy.
The power of Big Tech
Big Tech companies already wield significant power over consumers. They store, sell, and use personal data to target ads and content via advanced algorithms. These systems may be partly responsible for spreading false information to vulnerable and impressionable users. Little public information exists on how these algorithms work, which only deepens the lack of accountability. Transparency here would shed light on the mechanisms through which content spreads.
More importantly, self-regulation and penalties only bring a short-term solution to the issue of misinformation, with new accounts and platforms likely to pop up when old ones disappear. After Parler’s shutdown, Gab was one of several platforms that saw a huge rise in user numbers.
Regulation on misinformation is tricky
Ultimately, misinformation has significant social implications, and governmental action will ensure a uniform and longer-term strategy to tackle the issue. Clear regulatory frameworks will provide much-needed guidelines under which online platforms can act while protecting user rights and limiting backlash to self-regulation measures.
Regulating misinformation does raise free speech concerns, and previous laws have pushed companies to censor content rather than incur fines. For example, Germany's Network Enforcement Act (NetzDG), which requires large platforms to remove manifestly illegal content within tight deadlines or face heavy fines, has been criticised for encouraging over-removal of lawful speech. Russia's misinformation law falls dangerously close to outright censorship, punishing anyone found guilty of spreading false information with fines and prison time.
The EU takes a different approach with its Digital Services Act. Although still a long way from taking effect, the proposed regulation would establish transparency, accountability, and compliance mechanisms for social media platforms.