Banning young people from social media sounds like a silver bullet. Global evidence suggests otherwise

Around 98% of Australian 15-year-olds use social media. Platforms such as TikTok, Snapchat and Instagram are where young people connect with friends and online communities, explore and express their identities, seek information, and find support for mental health struggles.

However, the federal government, seeking to address concerns about young people’s mental health, has committed to ban under-16s from these platforms from later this year.

There is no doubt social media presents risks to young people. These include cyberbullying, posts related to disordered eating or self-harm, hate speech, and the basic risk of spending long hours scrolling or “doomscrolling”.

But is banning young people really the answer? We reviewed 70 reports from experts in Australia, the United Kingdom, the United States and Canada to understand what they recommend – and found broad agreement that a ban may not address the real problems.

Humans preventing harm

The overall verdict is that we need a much more thoughtful response than just a ban: only a coordinated approach between governments, regulators, tech companies and young people themselves will address youth mental health and online safety.

We should be asking what we can do to make online spaces safer for young people, not jumping straight to removing them entirely.

Content moderation is one area in need of urgent attention. Young people regularly report being exposed to harmful and age-inappropriate content on social media, while platforms replace moderation staff with cheaper AI systems.

Automated processes have their place, but many recommendations in our review emphasised that human moderators remain essential to keep pace with harmful content.

Data and endless advertising

A second issue concerns the collection and use of user data. Tech platforms have built their business model around user engagement and ad revenue.

To keep users scrolling (and watching ads), companies collect large amounts of user data to deliver highly personalised feeds.

Many experts advocate against the widespread collection and use of young people’s data, particularly for delivering advertising materials that promote dieting, unregulated supplements and cosmetic procedures. Posts like these often appear in an endless stream, interspersed between non-harmful and entertaining content.

Starting with safety

Alongside greater regulation of advertising material, many experts emphasised the need to consider “safety by design”.

In other words, social media should be designed from the outset to prevent harm to users. That may mean the end of “addictive” features such as infinite scrolling, frequent push notifications, and auto-play videos.

Regulators also need the tools and power to hold platforms to account.

That includes financial penalties, more transparent reporting from big tech companies, and taking proactive steps to keep harmful material off these platforms – not just taking down content after the fact.

Age-checking tech troubles

Our review did find a small number of reports that recommend barring young people from social media. However, experts questioned the feasibility of age verification technology and raised privacy concerns.

The federal government has passed the buck to social media companies for actually implementing age verification of users.

Platforms must take “reasonable steps” to restrict access by under-16s. It is unclear what these steps will be, but the prospect of facial recognition or digital ID checks raises serious privacy concerns.

Others argue that banning under-16s from social media will drive them to less regulated online spaces, including online forums such as the notorious 4chan, where some pages have an explicit “no rules” policy.

It is also important to acknowledge that many young people find important support and communities on social media. Taking away social media may present risks to mental health in these circumstances.

Listening to young people

An age ban sounds decisive but comes with its own set of questions.

In the absence of social media, where do young people questioning their sexual or gender identity go to find information and support? What would a ban mean for young people who engage with news on social media?

There is little evidence about what impact a ban will have on young people, particularly those from diverse backgrounds.

What’s more, young people have had minimal input into the policy. Yet they are well placed to offer practical, real-world insights into what works and what does not.

A blanket ban does nothing to make social media platforms safer for users. It might just delay problems and expose young people to an avalanche of harm when they log on at the age of 16.

A ban brings its own risks

The push to ban social media for under-16s is driven by genuine concerns. But unless it is part of a broader, more thoughtful approach to online safety, it risks doing more harm than good.

If we want a healthier digital environment, we can’t just lock out young people and hope for the best.

The post “Banning young people from social media sounds like a silver bullet. Global evidence suggests otherwise” by Jasleen Chhabra, Research Fellow, Centre for Youth Mental Health, The University of Melbourne was published on 05/15/2025 by theconversation.com