Australia Bans Social Media for Under-16s: New Online Safety Law Takes Effect Nationwide
Australia has implemented one of the world’s most ambitious online safety laws by effectively banning social media accounts for users under the age of 16. The law came into effect on December 10, 2025, after being passed as an amendment to the Online Safety Act.
It requires designated major platforms - including Instagram, TikTok, YouTube, Facebook, Snapchat, Reddit, X (formerly Twitter), Twitch, Threads, and Kick - to take reasonable steps to prevent Australians under 16 from holding accounts. Platforms that fail to comply may face penalties of up to A$49.5 million.
The Australian government has described this policy as a critical step to protect young people from cyberbullying, predatory behaviour, harmful content, addictive recommendation systems, and long-term mental health risks. Officials have framed the approach as a “developmental pause” that gives children time to grow without excessive digital pressure.
Enforcement began at midnight on December 10, 2025, and the impact was felt almost immediately. Within hours, thousands of accounts associated with under-16 users were flagged or deactivated across the designated platforms. Many young users woke up to find that their profiles, posts, contacts, and creative portfolios had already begun to be removed.
The abrupt disconnection triggered strong emotional reactions from teens who had spent years building online identities. Some used shared or older accounts to say goodbye to their communities, while others expressed anxiety about feeling isolated from friends, mentors, and social groups. Young people from remote or marginalised communities reported heightened distress, as social media often serves as a primary means of connection and support.
The ban also affected teenage influencers, young musicians, and digital artists who suddenly lost access to their audiences, disrupting creative growth and collaborations overnight.
Although the government remains confident in the policy, the first phase of implementation has highlighted significant enforcement challenges. The law does not prescribe a single universal method of age verification; instead, platforms must rely on a combination of tools, such as identity checks, AI-based age estimation, and other verification technologies. These methods have proven inconsistent at identifying age accurately, and some minors have retained access by entering incorrect birth years or passing age-inference tests.
Digital safety experts warn that these gaps may drive teenagers to unregulated apps or fringe platforms that lack robust safety monitoring. Many teens are already discussing workarounds - including fake birthdays, virtual private networks (VPNs), or using older siblings’ accounts - which analysts caution could expose them to riskier online environments than those the law aims to regulate.
Technology companies have criticised the new rules as rushed and technically unrealistic. Industry representatives argue that mass account removal requires substantial operational resources and may force platforms to adopt intrusive verification systems that raise privacy concerns among both parents and digital rights advocates.
Digital rights organisations have also raised alarms, arguing that the ban could restrict freedom of expression and may inadvertently silence vulnerable groups, such as LGBTQ+ youth and neurodivergent teens, who rely on online communities for emotional support and identity exploration.
Several civil society groups have stated they will legally challenge the decision, arguing it may conflict with Australia’s constitutional protections related to communication and public participation.
Australia’s eSafety Commissioner has adopted a strict monitoring role. Platforms are instructed to submit detailed compliance reports monthly for the first six months, documenting the number of accounts removed, verification improvements, and ongoing enforcement outcomes.
The government has also suggested this law is part of a broader overhaul of digital safety norms. Additional measures under discussion include tighter privacy rules, restrictions on algorithmic targeting of minors, and controls on features such as autoplay and infinite scrolling.
The global reaction to Australia’s decision has been immediate and intense. Several countries - including Denmark, Norway, Malaysia, and certain European Union members - are evaluating whether similar rules could be adopted within their own jurisdictions.
Child safety advocates and some public figures have expressed support for Australia’s approach, framing it as a significant step toward addressing the mental health crisis facing young people. At the same time, international digital rights groups warn that the ban could encourage governments worldwide to impose broad controls over online platforms, potentially shifting the balance between safety and free expression.
Australia’s under-16 social media restrictions are now considered a major test case in global digital governance. Their outcomes are expected to play a defining role in how countries around the world address the growing tension between child safety, mental health, online freedoms, and the influence of large social platforms.