The Australian eSafety Commissioner is using the Online Safety Act to force online platforms to take down certain videos, or else.
The Anglosphere is in the midst of yet another drawn-out battle questioning the limits of freedom of speech, online safety, and the ability for online users to share information on their social media networks.
As my colleagues James Czerniawski and Mike Salem have covered extensively, the rollout of the Online Safety Act in the United Kingdom has so far centered on the first-order impact of users having to verify their identity when logging on to specific websites. The data and privacy risks inherent in that are clear, and given that the law has only been in effect for a few months, there will be even more second-order impacts to report on that will outrage users.
Now, we look to Australia, where their version of an Online Safety Act (strangely also called the Online Safety Act) is garnering criticism of its own as we come to understand exactly how these laws change our societies once implemented.
Australia’s law, passed in 2021, goes above and beyond in how it aims to “protect” Australian users from what the government deems “illegal content” online.
The act creates legal mechanisms to force “takedowns” of online content containing serious cyber abuse, including bullying, harassment, and non-consensual sharing of intimate images, as well as any other material deemed harmful. It forces online platforms to follow certain legal guidelines or face civil penalties that can reach millions of dollars.
The position of eSafety Commissioner actually dates back to 2015, when the government created it to “educate Australians about online safety risks and help to remove harmful content”. The 2021 act gave that position additional teeth, allowing the Commissioner to force online providers to block certain content or face massive financial sanctions.
As we’ve covered here at the Consumer Choice Center before, the Commonwealth of Australia has embarked on a number of troubling regulatory journeys that directly impact online consumers for the worse, but the latest example of the “safety requests” to (mostly American) tech firms shows just how far their legislation has gone:
Australia’s online safety watchdog has ordered social media platforms to take down posts showing the brutal stabbing murder of Ukrainian refugee Iryna Zarutska on a train in the United States, the assassination of Charlie Kirk and the beheading of a Dallas motel owner.
In a statement to news.com.au on Monday, the eSafety Commissioner said it had received multiple complaints last month about the three videos, which were then reviewed by the Classification Board and assessed as Refused Classification (RC).
Removal notices for multiple posts sharing the videos were sent to Elon Musk’s X and Facebook and Instagram owner Meta, with threats of fines of $825,000 per day for each offending post.
“RC content cannot legally be hosted, shared, distributed, sold, or accessed in Australia. Content that is classified RC is content that exceeds what can be included in the R 18+ and X 18+ ratings,” an eSafety spokesperson said.
“In practice, this means material that has been classified RC cannot legally be shared in Australia and is subject to removal notices by the eSafety Commissioner.”
What this means, effectively, is that even if online content originates abroad and is posted by foreign nationals, Australian authorities believe it is within their mandate to block it not just for Australian citizens, but for everyone else too.
Added to that, what the eSafety Commissioner considers “illegal content” is also highly subjective, ranging from surveillance videos of horrible crimes to AI-generated images that may include child sexual abuse material (CSAM).
While everyone can plausibly understand the rationale for blocking the latter, the fact that videos that may be disturbing but are otherwise newsworthy are also grouped into “illegal content”, and thus subject to sanctions and fines, is beyond the pale.
eSafety Commissioner Julie Inman-Grant says she believes in free speech and defends using her powers to remove the light-rail stabbing video of Ukrainian refugee Iryna Zarutska. She adds that she respects Australians’ right to political communication and says she wouldn’t be… — Australians vs. The Agenda (@ausvstheagenda), October 8, 2025
In a meeting of the Australian Senate Environment and Communications Committee this week, eSafety Commissioner Julie Inman Grant explained why her office was threatening to fine the platform X for hosting the videos in question, leaving more than a few senators perplexed by her reasoning and wondering aloud whether it constituted overreach.
Inman Grant is an interesting political figure for several reasons. For one, as one can tell from her accent, she is an American in a high position in an Australian regulatory agency. Second, she is a former tech executive herself, having worked at Microsoft, Adobe, and even at X back when it was still known as Twitter.
The merits of her actions aside, one can easily see how such a highly placed censorship czar in a liberal democracy like Australia can concern online users around the world.
While debates continue to rage in many countries about the government’s role in limiting or moderating content online (as I’ve written about concerning the European Union’s various regulations), the Australian example proves once more that zealous government action will effectively always lead to greater censorship, not just of domestic voices, but also of users far beyond the regulatory remit of these authorities.
No one disagrees that we need effective guardrails for what kids can see and access online. The better question, however, is what authority our own domestic policymakers and regulators should have over what we can see, as opposed to regulators from abroad.
If we don’t answer that question effectively in our own democratic settings, our online experience may grow increasingly edited and moderated by forces far beyond our own control.
Yaël Ossowski is deputy director of the Consumer Choice Center.
Published at the Consumer Choice Center.
