Growing awareness of the online harms children face has led to increasing pressure to ban social media for under-16s. Australia’s ban came into effect in December 2025, and several European countries, including France, Spain, Italy, Greece, and Germany, are considering or attempting to pass similar restrictions, while the UK is currently consulting on a potential ban. However, policymakers, experts, and campaigners are divided on whether blanket bans truly protect children from harm online, with many seeing them as punishing children for Big Tech’s failure to make online spaces safer.
Blanket bans could have unintended consequences
Social media is already deeply embedded in young people’s lives. It can provide community and connection, peer support, exposure to different cultures, and is often used as a source of news. A ban could strip away these connections, particularly for marginalised or isolated teens who may feel more comfortable online than in school or at home.
Many children’s charities oppose a ban. Peter Wanless from the National Society for the Prevention of Cruelty to Children (NSPCC) has argued that it would “penalise children for the failure of tech companies”. Alongside 41 other children’s and online safety organisations, the NSPCC signed a letter stating that while children under 13 should not have any access to social media, the focus for those over 13 should be on robust regulations rather than outright bans.
Enforcing blanket bans could have several unintended consequences. Firstly, a ban would be difficult to enforce and could push young people into even more dangerous, unregulated online spaces. Secondly, a sudden “switch-on” at 16 could make teenagers more vulnerable, exposing them to the full range of online risks without the gradual development of digital and media literacy. This is especially concerning in the UK, where 16-year-olds can now vote. Exposure to misinformation, deepfakes, and online political extremism without prior preparation could shape how—and why—they vote.
Online spaces should be safer by design
While outright bans may not be the answer, that does not mean social media should remain unregulated. According to the children’s charity Internet Matters, 67% of children in the UK experience harm online, a statistic that underscores the urgent need to make these spaces safer for young people. Implementing stringent regulations to make platforms safer by design could help to reduce online harm. This would include limiting design features that make social media addictive, such as endless scrolling, hyper-personalised algorithms, notification overload, and “streaks” that encourage daily engagement. Additionally, the Online Safety Act (OSA) should be strictly enforced, and platforms that fail to swiftly remove harmful or illegal content should be held to account.
Furthermore, children should be equipped with the skills and tools needed to safely navigate the digital world. Children should be taught digital literacy skills in school, along with coping strategies for when they encounter distressing content.
Support should also be available for parents, caregivers, and teachers, enabling them to encourage safer social media use rather than relying on legal bans.
The success of social media bans
At this stage, it is too early to say whether Australia’s ban has been successful. However, one in three teenage Australians told the mental health organisation Headspace that they would look for ways to get around it, and press reports suggest that teens are already bypassing age assurance checks by changing the birth date on their account or tricking facial age-estimation tools. This should raise significant concerns for policymakers who want to follow Australia’s lead, and it lends weight to the view that regulation should prioritise making platforms safer rather than banning their use altogether.
An online regulation strategy focused on education, content moderation, and support is more likely to reduce online harm than a blanket under-16s ban, while still allowing teenagers to benefit from social media. As Peter Wanless from the NSPCC has said: “Children deserve to have age-appropriate experiences online rather than being cut off from it altogether.” Our world is increasingly digital, and it seems both unrealistic and unfair to completely restrict children’s access. Instead, the internet should be a safe place for everyone.

