Should the UK ban social media for children?

Earlier this month, Australia implemented a world-first ban on social media accounts for children under 16. In this article, IF Senior Researcher Conor Nakkan takes a closer look at the policy and considers whether the UK should follow Australia’s lead.

No more TikTok?

From 10 December 2025, children under 16 in Australia will no longer be able to hold social media accounts on platforms like Instagram, Snapchat, Facebook, YouTube, or TikTok. In practice, this means that tech companies must deactivate all existing accounts held by under-16s. They will also be required to take all “reasonable steps” to ensure that children are unable to create new accounts.

Importantly, responsibility for compliance with the new rules sits with the platforms themselves, not with parents or children. Serious and systemic breaches could result in fines of up to £24.5 million.

But should the UK implement a similar policy? Is this an effective and proportionate approach to protecting children from online harms? Or does it go too far, unfairly restricting children’s rights and autonomy?

Protecting children?

The Australian Prime Minister, Anthony Albanese, has argued that “social media companies have a social responsibility. That responsibility starts with the protection of Australian children.” As Albanese has made clear, the primary justification for the policy is that social media is harming children, and that the government should intervene.

These claims are supported by a growing body of research. The social psychologist Jonathan Haidt has documented the sharp deterioration in young people’s mental health in the United States since the late 2000s. In The Anxious Generation, he argues that this decline is linked to the widespread adoption of smartphones and social media. He describes this as the shift towards phone-based childhoods.

Similar trends can be observed in Australia. Between the late 2000s and the early 2020s, the share of young Australians reporting a mental disorder rose by around 40 per cent for males and 60 per cent for females. Hospitalisations for self-harm and suicide deaths also increased substantially over the same period.

Beyond these broad trends, there are also more acute and specific harms. Surveys point to high rates of cyberbullying, online sexual exploitation, and exposure to violent or pornographic content. Many children encounter such material for the first time around the age of 13, often unintentionally.

Taken together, this evidence raises a basic question for policymakers. Should children be allowed to enter into contractual relationships with tech firms whose business models are explicitly designed to maximise engagement and extract their attention?

Will it lead to more harm than good?

Some critics of the ban have argued that it will only push children onto other platforms, which may be less regulated and less safe. Unsurprisingly, this line of argument has been taken up most enthusiastically by some of the largest tech companies. Snapchat, for example, has claimed that “disconnecting teens from their friends and family doesn’t make them safer — it may push them to less safe, less private messaging apps.”

But this objection is not as compelling as its proponents suggest. The Australian government has been clear that the scope of the policy is not fixed. If children begin to migrate to new social media platforms, those platforms can be brought within scope of the ban.

Others worry that the ban could make some children feel more isolated, particularly those in rural areas or from neurodivergent or LGBTQ+ communities, who may rely on online spaces to find like-minded peers.

There may well be a difficult adjustment period for some of these children. But this does not, by itself, demonstrate that the policy is misguided. The aim is not to deny the value of online connection, but to weigh it against the growing evidence of harm during a particularly important stage of development. The fact that some children benefit from social media does not mean that unrestricted access is the right default for everyone. And, of course, these children can still communicate with their friends via WhatsApp, text, and other messaging services.

Will it even work?

Another common objection is that the ban will not work in practice. Stories have already surfaced of children finding ways to circumvent age restrictions, whether by using VPNs or borrowing adult credentials. Critics take these examples to demonstrate that the ban, however well intentioned, is not practicable.

But we might reasonably ask: what policy has ever been implemented with perfect compliance? Underage drinking, illegal drug use, and copyright infringement all persist despite legal restrictions. That fact alone is not a decisive argument against regulation.

More importantly, a policy does not need to be perfectly enforced to be effective. Even partial compliance can shift norms, reduce exposure, and relieve social pressure. One of the core problems parents face today is that keeping children off social media feels like a losing battle when “everyone else is on it.” A clear legal minimum age helps change that dynamic. It gives parents firmer ground to stand on and places responsibility where it belongs, with the companies that profit from children’s attention.

Is it too paternalistic?

Others have objected to the ban on the grounds that it undermines children’s autonomy and rights. Two 15-year-olds, supported by an advocacy group called the Digital Freedom Project, intend to challenge the new laws in the High Court. They argue that the ban violates their implied right to freedom of political communication and expression.

What should we make of these kinds of arguments? Well, there is no denying that the policy is paternalistic. The Australian government has decided that it knows what is in children’s best interests, regardless of children’s own views on the matter.

But it is worth stepping back and asking whether this kind of paternalism is really so objectionable. Governments already prevent children from doing many things they might otherwise want to do, from buying alcohol to smoking cigarettes or gambling. These restrictions are widely accepted. And this is not because children’s preferences do not matter. Rather, it is because children are still developing the cognitive capacities to make fully informed decisions about their own lives.

The same logic applies here. While some teenagers may be perfectly capable of understanding how social media algorithms work and weighing the costs and benefits for themselves, many are not. Given what we know about adolescent brain development, it does not seem unreasonable for governments to err on the side of caution.

Is it intergenerationally unfair?

A final line of objection might claim that the ban unfairly takes something away from one generation that previous generations had access to and benefitted from: in other words, that it imposes restrictions on today’s children that were never applied to their predecessors. If that were the case, the policy could be considered intergenerationally unfair.

But this kind of argument rests on a false premise. The social media environment facing today’s children is radically different from anything previous generations experienced. The proliferation of algorithm-driven platforms designed to maximise engagement has occurred largely within the last decade or so.

As someone in my late twenties, I (thankfully) did not have a smartphone or social media accounts until I was around 16. That was not too long ago. The claim that banning under-16s from social media is depriving them of some long-standing generational entitlement simply does not stand up.

Should the UK ban social media for children?

As we have seen, the arguments against the social media ban are not particularly compelling. So, should the UK follow Australia’s lead?

We think the answer is yes. For a start, we know that young people in the UK, like their peers in Australia, are facing a mental health crisis. Rates of anxiety, depression, and self-harm among young people have risen sharply over the past decade. This has already affected their overall wellbeing, as well as their educational and labour market outcomes.

The choice of 16 as the cut-off point also makes sense in a UK context. Sixteen-year-olds will be able to vote in the next general election. This reflects a judgement that, by this age, young people generally have the capacity to make informed decisions about matters of public importance.

Seen in that light, the policy seems to draw a reasonable line. Below 16, children are still developing the skills needed to navigate highly manipulative and extractive online environments. Above 16, we increasingly treat young people as capable of weighing risks and exercising judgement for themselves.

Finally, it is important to emphasise what the policy does not do. It does not cut under-16s off from the internet entirely. They can still message friends, access information, and watch content online. It only restricts their ability to hold social media accounts on platforms that monetise their attention while they are in a formative period of cognitive development.

Help us to be able to do more 

Now that you’ve reached the end of the article, we want to thank you for being interested in IF’s work. We’re really proud of what we’ve achieved so far. And with your help we can do much more, so please consider helping to make IF more sustainable. You can do so by following this link: Donate.