Digital platforms might soon be required to proactively remove ‘legal but harmful’ content following radical government-backed amendments to the UK Online Safety Bill. This would mark the first attempt by a democratic country to revise the existing liability exemption regime, which ensures that online platforms cannot be held liable for content posted by their users.

From a general duty of care to more explicit standards

The bill is the result of three years of discussion around the Online Harms white paper, published by the Department for Digital, Culture, Media and Sport (DCMS) in 2019 with the aim of creating “a system of accountability and oversight for tech companies beyond self-regulation.” A first draft of the bill, published in May 2021, put a ‘duty of care’ on large social websites to remove harmful or illegal content and to protect children. But enforcement was largely left to the tech giants themselves, with oversight from media regulator Ofcom.

The revised version of the bill will give Ofcom the power to force internet platforms like Meta and Google to use technology to proactively seek out and remove both illegal content—like terrorism propaganda and child sexual exploitation—and legal content that the government decides is harmful to children.

UK faces backlash from the tech industry

These radical amendments, which are backed by the UK government, have been met with a storm of criticism from the tech industry. Coadec, a trade body for tech start-ups, claimed that the changes would make the UK a “global outlier in how liability is policed and enforced online” and added that the UK would be a “significantly less attractive place to start, grow, and maintain a tech business”. TechUK, a trade group representing Apple, Google, Microsoft, and Meta, argued that there was no consultation with the industry on the new proposals and that the plan goes against international legal norms.

Regulation across the world

Over the last few years, several government initiatives worldwide have attempted to stem the flow of online misinformation. In 2017, Germany passed the Network Enforcement Act, under which platforms can be fined for hosting unlawful content, including defamation and incitement to hatred, and for failing to remove such posts within 24 hours. Amid allegations of Russian meddling in its 2017 presidential election, France approved legislation that gives authorities the power to remove fake content spread via social media, and even to block the sites that publish it, in the three months before an election.

More recently, the EU’s proposed Digital Services Act has the potential to introduce important innovations, such as establishing a minimum level of accountability for digital companies, forcing them to publish transparency reports on content moderation, and requiring them to conduct annual risk assessments. However, none of these laws goes as far as removing the liability exemption regime, as envisaged by the Online Safety Bill.

Getting the balance right

Critics warn that increasing tech companies’ liability for content that is deemed harmful but not illegal would mark a departure by the UK from the US and European models of internet regulation. Attempts to mitigate online harm must be reconciled with the right to freedom of expression, one of the cornerstones of a democracy. The main risk is that, by requiring internet platforms to remove content swiftly, laws that tackle online harm may incentivize them to censor speech rather than risk a fine.