Social media firms must better enforce Australia's under-16 ban, watchdog says

Nearly 5 million accounts have been blocked or deleted in the month since Australia introduced its pioneering ban on social media use by people under 16. Despite that scale, the national regulator, eSafety, warns that tech giants such as Meta, TikTok, and YouTube are still not doing enough to effectively enforce the law, which has been in effect since last December. The regulator's report identifies a series of shortcomings: platforms allow minors multiple attempts to bypass age verification and give children who previously admitted to being under the age limit a second chance to "prove" they are older. For users and parents worldwide, the situation serves as a key test of age assurance effectiveness. Although Snap alone has blocked 450,000 accounts, experience shows that young people are bypassing the safeguards en masse, while reporting systems for guardians remain inefficient. The global stakes of this dispute are fundamental: if Australia proves that platforms possess the technical capability to fully enforce such restrictions, similar rules could rapidly become the standard in other regions. Responsibility for age verification is shifting from parents to the algorithms and registration processes of major corporations, which must now prove they have taken "reasonable steps" to protect the youngest users from addictive content.
The global regulatory experiment that is the Australian ban on social media use by individuals under the age of 16 is entering a phase of critical verification. Although the regulations came into force at the end of last year, the national internet regulator eSafety is warning that technology giants are not properly fulfilling their obligations. A report published by the office points to systemic loopholes that allow minors to bypass security measures, calling into question the effectiveness of the restrictions intended to protect children from harmful algorithms and toxic content.
The situation in Australia is being closely monitored by governments worldwide, including the United Kingdom, which is considering similar measures. The regulator's first report since the block took effect in December last year leaves no illusions: Facebook, Instagram, Snapchat, TikTok, and YouTube exhibit a range of "bad practices" in age verification. eSafety expressed "significant concerns" about how these platforms approach enforcement, suggesting that declarations about protecting young users are not always matched by technological reality.
Leaky verification systems and loopholes for clever teenagers
The analysis conducted by eSafety revealed specific mechanisms that make the ban largely ineffective in practice. One of the most serious findings is that children who declared an age under 16 before the ban took effect can simply change that information without rigorous checks. Platforms also allow multiple attempts at the same age assurance process, which in practice becomes a method of trial and error for determined teenagers. Parents, meanwhile, lack effective tools to report accounts belonging to underage users.
The scale of the problem is visible in operational data. Although in January eSafety reported the removal or restriction of 4.7 million accounts in the first month of the act (from December 10), the actual presence of children online remains high. Journalists' visits to Australian schools support the conclusion that the ban is leaky: most students who used social media before the ban still have access to it. Some say they were never asked for proof of age, while others openly admit to bypassing the security systems.
- Meta (owner of Facebook and Instagram) claims that effective age determination is a challenge for the entire industry and suggests shifting responsibility to app stores.
- Snap declared it had blocked 450,000 accounts and promises to continue these actions every day.
- The regulations cover a total of 10 platforms, including X, Reddit, Threads, and streaming services Kick and Twitch.
- Online games have been excluded from the ban, which draws criticism due to their equally addictive nature.
The end of the grace period and transition to enforcement
The Australian eSafety Commissioner, Julie Inman Grant, announced the end of the monitoring stage and a transition to active collection of evidence of violations. The regulator does not intend merely to show that children still have accounts – the goal is to prove that platforms have not implemented "appropriate systems and processes." This is a subtle but crucial legal difference: the responsibility lies with the tech companies, which must demonstrate they have taken "reasonable steps" to prevent people under 16 from registering.

Inman Grant compares the current fight with Big Tech to historical clashes with the tobacco or automotive industries. In her view, platforms have the technical capabilities to adapt to the law almost immediately, but their financial interests and revenue potential stand in contradiction to the new regulations. This reform is seen as an attempt to reverse 20 years of ingrained digital practices, which requires time, but also persistence from oversight bodies.
"The evidence must show that the platform has not implemented appropriate systems and processes. This is more than just showing that some children still have accounts," emphasizes Julie Inman Grant.
Cultural reset vs. digital exclusion
Despite technical difficulties, the ban has gained broad support among parents. For many of them, the new law has become a negotiating tool in disputes with children who pressure them to have a social media profile. The government ban removes the burden of being "the bad guys" from guardians, shifting responsibility to the state's legal framework. This is part of a broader "cultural reset" aimed at changing the perception of children's presence in the digital space.
On the other hand, critics point to real dangers of totally isolating young people from the network. Child welfare experts and technology specialists argue that instead of bans, the state should invest in education about online threats. Others warn that the restrictions disproportionately affect minority groups, such as youth in rural areas, teenagers with disabilities, and young LGBTQ+ people, for whom the internet is often the only safe place to build relationships and find support.
In my assessment, the example of Australia shows that technology will always be a step ahead of legislation if governments rely solely on bans instead of forcing software producers to design solutions that are safe by default (safety by design). As long as the business model of social media is based on maximizing time spent in front of the screen by users of any age, platforms will implement only the minimum necessary safeguards to avoid penalties, and not to realistically protect children. The real test for eSafety will not be the number of deleted accounts, but the ability to impose financial penalties so severe as to force the giants of Silicon Valley to rebuild their technological foundations.