Australia’s online watchdog has criticised the world’s largest social media companies for failing to adequately implement the country’s prohibition on under-16s accessing their platforms, despite the laws taking effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and insufficient measures to prevent new underage accounts from being created. In its first compliance assessment since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Exposed in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance among the world’s biggest social media platforms in her inaugural review since the ban took effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to stop minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification, highlighting that some platforms have allowed children who originally declared themselves to be under 16 to later claim they were older, effectively circumventing the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has stressed that the continued presence of some under-16 accounts is not, by itself, the decisive test; platforms must instead furnish substantive proof that they have established robust systems and processes designed to stop under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. Among the poor practices identified were:
- Enabling previously banned users to re-verify their age and restore account access
- Permitting repeated attempts at the same age assurance method without consequence
- Inadequate safeguards to prevent new under-16 accounts from being created
- Insufficient notification systems for parents and the general public
- Lack of publicly available information about compliance measures and account removals
The Magnitude of the Problem
The sheer scale of social media use amongst young Australians highlights the compliance challenge confronting both the government and the platforms themselves. With numerous accounts already removed or restricted since the ban’s implementation, the figures point to extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proved considerably more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false ones. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are up to the task.
Beyond the operational challenges lies a broader concern about the willingness of platforms to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing data protection concerns and the genuine difficulty of verifying age online. However, the regulator’s report suggests that some platforms may not be demonstrating adequate commitment to implementing the legally mandated systems. The shift towards active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance systems, or they stand to incur significant penalties that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban came into effect, Australian officials stated that 4.7 million accounts had been suspended or deleted. Whilst this figure initially appeared to demonstrate regulatory success, closer review reveals a more nuanced picture. The substantial number of account removals indicates that many under-16s had successfully created accounts in the first place, demonstrating that preventive controls were lacking. Furthermore, the data raises questions about whether removed accounts represent genuine enforcement or simply users deleting their profiles voluntarily in response to the new restrictions.
The limited transparency around these figures has frustrated independent observers attempting to evaluate the ban’s genuine effectiveness. Platforms have disclosed minimal information about their compliance procedures, success rates, or the characteristics of removed accounts. This opacity makes it difficult for regulators and the public to judge whether the ban is working as intended or whether young people are simply finding alternative routes to social media. The Commissioner’s push for thorough documentation of systematic compliance measures reflects growing frustration with platforms’ reluctance to disclose full details.
Sector Reaction and Pushback
The major tech platforms have responded to the regulator’s enforcement action with a mix of compliance assurances and doubts about the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination remains a major challenge across the industry. The company has called for an alternative strategy, proposing that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects wider industry concerns that the current regulatory framework places an impractical burden on individual platforms.
Snap, the creator of Snapchat, has taken a more proactive public stance, stating that it had locked 450,000 accounts since the ban took effect and asserting that it continues to suspend additional accounts each day. However, industry analysts question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which historically relied on maximising user engagement and growth, and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have consistently opposed rigorous age verification methods, citing privacy concerns and technical limitations, creating an impasse between authorities and platforms over who bears responsibility for implementation.
- Meta argues age verification should occur at app store level rather than on individual platforms
- Snap claims to have locked 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy issues and technical obstacles as impediments to effective age verification
- Platforms contend they are making their best efforts whilst questioning the ban’s overall effectiveness
Wider Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its implementation stage, fundamental questions persist about whether the legislation will achieve its stated objectives or merely push young users towards unregulated platforms. The regulator’s first compliance report shows that despite months of implementation, substantial gaps remain: children continue to find ways around age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely leave mainstream platforms or simply shift towards alternative services, secure messaging apps, or VPNs used to conceal their location.
The ban’s global implications add to the complexity of assessing its impact. Countries including the United Kingdom, Canada, and several European nations are watching Australia’s approach closely as they consider similar legislation for their own populations. If the ban fails to reduce children’s digital engagement or does not protect them from harmful content, it could damage the case for similar measures elsewhere. Conversely, if enforcement becomes rigorous enough to genuinely restrict underage participation, it may encourage other nations to pursue similar approaches. The outcome will likely shape global regulatory trends for the foreseeable future, ensuring Australia’s regulatory efforts are examined far beyond its borders.
Who Gains and Who Loses
Mental health advocates and child safety organisations have backed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities built around shared interests. The regulatory approach assumes that the harm outweighs the benefit, a calculation some young people and their families question.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently favours large technology companies with the resources to build age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the straightforward goal of child protection.
What Comes Next for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a notable shift from hands-off observation to active enforcement, marking a pivotal moment in the rollout of the youth access prohibition. The regulator will now gather evidence to establish whether services have failed to take “reasonable steps” to block minors from using them, a legal standard that goes further than simply noting that young people remain on these systems. This approach requires demonstrable proof that platforms have implemented suitable mechanisms and protocols designed to exclude minors. The Commissioner’s office has indicated it will pursue investigations methodically, building cases that could trigger significant fines for breaches of the requirements. This move from observation to intervention reflects growing dissatisfaction with the services’ existing measures and signals that voluntary compliance alone is insufficient.
The rollout phase raises important questions about the adequacy of the penalties and the operational systems for ensuring platform accountability. Australia’s statutory provisions offer regulatory tools, but their effectiveness depends on the eSafety Commissioner’s willingness to initiate enforcement action and the platforms’ capacity to respond substantively. International observers, notably regulators in the United Kingdom and European Union, will watch Australia’s implementation tactics and results closely. A successful enforcement campaign could provide a template for other jurisdictions considering equivalent prohibitions, whilst inadequate results might undermine the entire regulatory framework. The next phase will determine whether Australia’s pioneering regulatory approach produces genuine protection for young people or remains largely symbolic in its impact.
