In its most recent battle with authorities in Australia, X (formerly Twitter) has launched legal action in the Federal Court, seeking an exemption from a new safety standard aimed at preventing the spread of harmful material online.
The standard in question is known as the Relevant Electronic Services Standard. It came into effect in December 2024, but won’t start being enforced by Australia’s online regulator, eSafety, until June this year.
Compared with the social media ban for under-16s, this standard has received far less attention in the broader debate about online safety. So what exactly is it? And will it be effective at preventing the spread of harmful material online?
What is the standard?
The Relevant Electronic Services Standard contains criteria to help address the pervasiveness of harmful and illegal material distributed online. It is particularly focused on child sexual exploitation content, depictions of extreme violence, illegal drug material, and pro-terror content.
Relevant electronic services (RES) are digital services that enable user-to-user content. This includes instant messaging, email and chat platforms. The legal definition also includes some online gaming services.
Under Australia’s Online Safety Act 2021, the communications minister may exempt some services or platforms from being defined as an RES. The minister can also set conditions on the service for exemption, such as having a robust moderation service, or being a messaging service for internal employees of a company.
Some social media platforms, such as Facebook and X, may be defined as RES. That’s because they also offer user-to-user messaging services. It is sensible, then, for the Federal Court to determine whether they fall under social media codes or RES standards, or both.
The standard requires RES providers to implement systems, processes and technologies to detect and remove child sexual abuse and pro-terror material from their services, and to actively deter end-users from distributing this material.
There are consequences for services that fail to comply. The eSafety commissioner, Julie Inman Grant, can issue a formal warning or infringement notice, or have the courts apply a civil penalty.
What does the standard do?
The Online Safety Act 2021 imposes obligations on RES providers, particularly regarding the handling of harmful material. This material is categorised into several classes, including Class 1A and Class 1B content.
Class 1A material typically means child exploitation and pro-terror content. Class 1B material refers to extreme violence, promotion of crime, and illegal drug-related content.
The class of content is determined by referring to the National Classification Scheme. This scheme sets standards for the ratings of films.
Class 1A and 1B material covers content, including text and images, that would be “refused classification” under the scheme. That is, material that normally cannot be distributed at all. Class 2 material is what we usually consider X-rated or 18+ material.
At the moment, the eSafety commissioner can require an RES to remove Class 1 or Class 2 content, with penalties for services that fail to comply. However, the next step has been to work with industry to develop codes that require service providers to be more proactive in preventing Class 1 content from being shared between their users.
Will the standard be effective?
X wants its platform to be treated as exempt, and governed by the similar but less stringent Social Media Code instead. Whatever the Federal Court decides, however, there are other issues to consider.
Part of the difficulty with the scheme is that it relies on harmful content coming to the attention of the eSafety commissioner. This usually happens when an end-user makes a complaint.
But our recent research, which surveyed 2,520 representative Australians and will be published later this year, found that only about 10% of users who were the target of digital harms reported them to the eSafety commissioner. Among those who had witnessed harmful content or behaviour, only 6% made a report. About 40% of Australians don’t believe reporting will make any difference.
Another issue, raised by digital rights activists, is that the standard may require services to inspect user messages even when they are end-to-end encrypted. That could have serious privacy implications.
A global treaty could help
This ties into broader problems with the online safety framework.
Much of the focus has been on regulating platforms and requiring them to police users and content, an approach intended to avoid penalising individuals and overwhelming the courts.
However, service provider policing often fails to meet the norms of due process, such as transparency and the right to appeal decisions.
It also makes platforms and messaging providers the “arbiters” of free speech and censorship, instead of governments, courts and communities.
While setting standards for platforms is one part of the solution, we need to continue developing remedies that protect users. This may include global agreements and multilateral treaties, similar to the International Covenant on Civil and Political Rights, so that countries can share responsibility for digital harms that cross jurisdictions while ensuring due process and the protection of privacy.

The post “What’s the obscure Australian online safety standard Elon Musk’s X is trying to dodge in court? An expert explains” by Rob Cover, Professor of Digital Communication and Director of the RMIT Digital Ethnography Research Centre, RMIT University was published on 05/21/2025 by theconversation.com