The UK government’s long-awaited Online Safety Bill walks a fine line between safety and censorship

By Elles Houweling

As touted in the Queen’s Speech on Wednesday, the UK government has now presented its draft Online Safety Bill to regulate harmful content on the internet. The 146-page document, published on the GOV.UK website, imposes a duty of care on digital service providers to moderate user-generated content in a way that prevents users, especially the young and vulnerable, from being exposed to illegal or harmful information online.

The UK government introduced the draft bill by saying that it “marks a milestone in the government’s fight to make the internet safe.” Critics, however, cite issues of freedom of expression, arguing that the regulation encourages platforms to over-censor content.

The draft bill identifies the safeguarding of young and vulnerable people as its primary purpose, while also seeking to uphold democratic debate online. The proposals cover three overarching categories: content that is harmful to children, content that is harmful to adults, and illegal content. This includes posts that encourage self-harm, racist content, hate crimes and misinformation.

Beyond that, special regulations will be put in place to tackle user-generated fraud online. The bill indicates that companies will now have to take responsibility for tackling fraudulent content, such as romance scams or fake investment opportunities. Fraud carried out via advertising, emails or cloned websites will not be in scope, because the bill focuses on harm committed through user-generated content.

As first proposed in a government white paper in 2019, the bill imposes a new “legal duty of care” on websites, with Ofcom empowered to force them to remove content deemed “harmful”. Ofcom is the UK regulator for communications services, overseeing TV and radio content as well as broadband, home phone and mobile services.

Per the regulator’s website, it will not be responsible for regulating or moderating individual pieces of online content. Instead, platforms must implement appropriate systems and processes to moderate content, and Ofcom will only take action against them if they fall short of this responsibility.

Freedom of Expression

The bill explicitly includes a safeguard that protects “content of democratic importance”. This covers content promoting or opposing government policy or a political party ahead of a vote in Parliament, an election or a referendum, as well as campaigning on a live political issue.

It also stipulates that companies will be forbidden from discriminating against particular political viewpoints and will need to apply protections equally across the range of political opinions, regardless of affiliation.

Content published by editorial organisations does not fall within the scope of the bill. Equally, anything dubbed “journalistic content” will receive special protection under the new regulation. However, the bill recognises that this distinction may, in practice, be difficult to make when it concerns user-generated content. It therefore stipulates that citizen journalists will have the same protection as professional journalists.

The draft Online Safety Bill is still subject to scrutiny by a joint committee of MPs before a final version is formally introduced to Parliament.

Need for transparent algorithms

Tasking user-to-user media providers with content moderation has become the subject of heated debate in various countries. Critics argue that algorithms are not capable of detecting harmful information adequately.

Other governments have also become increasingly aware of the dangers that harmful content on social media may pose. Notably in the US, following the insurrection at the Capitol in January this year, there has been a push to rein in the power held by social media platforms.

Testimony given to the US House of Representatives in March by the CEOs of Facebook, Twitter and Alphabet highlighted significant flaws in their self-regulation of content removal: namely, that algorithms push misinformation towards susceptible parties.

Moreover, with the rise of new types of social media platforms that are mainly audio-based, such as Clubhouse, algorithms will be even less able to detect harmful content or misinformation.

With rising concerns about misinformation and core democratic values at stake, regulators on both sides of the Atlantic are running out of patience. GlobalData’s thematic analysis shows that numerous countries across the globe have started taking action against online misinformation.

GlobalData’s senior analyst, Laura Petrone, says that the long-awaited Online Safety Bill is a step forward in holding social media companies responsible for what they publish. “The riots on Capitol Hill and the Covid-19 pandemic have given this issue of harmful content, even if legal, more urgency and pushed online platforms to self-regulate in a way that has never been seen before.”

However, Petrone also noted that, although the draft legislation has just been released, it has already come under fire. “Everyone agrees that action to mitigate online harmful speech must be balanced with the right to freedom of expression, but this is not an easy task. This is especially true with the threat of massive fines against the platforms, which can incentivise them to censor content rather than risk a fine. Ofcom will be able to issue fines of up to £18m or 10% of global turnover, whichever is higher, if companies fail to comply with the new rules.”

She added: “The bill also introduces a duty of care for social media platforms to take action against harmful content deemed ‘democratically important’. This is quite an unprecedented step for a government and shows that increasingly the proliferation of harmful content and false narratives is viewed as a threat to a democracy’s wellbeing.”