Family Screen Habits
Australia Bans Social Media for Kids: What Parents Need to Know
As technology evolves faster every day, Australia has passed world-first legislation banning children under 16 from accessing social media platforms. The law, which passed parliament in November 2024, takes effect on December 10, 2025, making Australia the first country to implement such a comprehensive age restriction on social media.
If you're a parent wondering what this means for your family, how it will work, and whether similar laws might come to other countries, this guide breaks down everything you need to know about Australia's groundbreaking social media ban.



What I am going to cover
What Is the Social Media Ban for Kids in Australia?
Why Did Australia Propose This Ban?
Which Age Groups Are Affected?
What Platforms Are Included in the Ban?
How Will the Ban Be Enforced?
What Happens Next?
What to remember
The law passed on November 29, 2024, and takes effect December 10, 2025, making Australia the first country to implement such comprehensive age restrictions.
Children under 16 are banned from all major platforms including Facebook, Instagram, TikTok, Snapchat, X, YouTube, Reddit, Twitch, Threads, and Kick.
Platforms face fines up to $49.5 million for non-compliance, with no penalties for children or parents who attempt access.
No parental override exists: even with parental consent, under-16s cannot have accounts on restricted platforms.
Age verification technologies are being tested through a $6.5 million government trial examining document verification, facial age estimation, and third-party services.
The law has bipartisan support and 77% public approval despite criticism from academics, human rights advocates, and tech companies.
Implementation faces challenges including privacy concerns, technical feasibility questions, and a constitutional legal challenge filed in November 2025.
What Is the Social Media Ban for Kids in Australia?
The Online Safety Amendment (Social Media Minimum Age) Act 2024 introduces a mandatory minimum age of 16 for accounts on certain social media platforms. This isn't just a recommendation or guideline; it's enforceable law with significant penalties for non-compliance.
The key elements of the law:
The amendment requires platforms to take reasonable steps to enforce the ban, with fines of up to $49.5 million for non-compliance. This places the burden squarely on social media companies, not on parents or children.
Parents cannot give their consent to let under-16s use these platforms. Unlike many current systems where parental permission allows younger users, this ban has no parental override option.
There are no penalties for children or parents who attempt to access these platforms. The law targets the companies, not the families.
From December 10, 2025, age-restricted social media platforms will have to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. This applies to both new accounts and existing accounts that children currently have.
The law represents a fundamental shift in how online safety is approached. Rather than putting responsibility on children to navigate safe spaces or parents to constantly monitor usage, the government is requiring platforms themselves to exclude young users entirely.

Why Did Australia Propose This Ban?
The decision didn't happen in a vacuum. Several factors influenced the government's decision, including a Joint Parliamentary Select Committee launched in May 2024 to investigate the effect of social media on Australians, and media campaigns by News Corp and the 36months movement advocating raising the minimum social media age to 16 years.
Rising Mental Health Concerns
The campaigns linked rising teenage mental health issues to social media, garnering support from parents, politicians, educators, clinicians, and 127,000 petition signatories. Australian parents, like parents worldwide, have watched anxiety, depression, and mental health crises among young people escalate alongside increased social media use.
The correlation between heavy social media use and poor mental health outcomes in adolescents has become impossible to ignore. Issues like cyberbullying, social comparison, FOMO (fear of missing out), and exposure to harmful content have contributed to what many describe as a youth mental health crisis.
Political Momentum
The ban received bipartisan support, and a poll in November 2024 suggested that 77% of Australians supported the ban. This rare political consensus across parties, combined with strong public support, created the momentum for rapid legislative action.
Prime Minister Anthony Albanese described social media as a "scourge" and said he wants people to spend more time on the footy field or the netball court than on their phones. The government framed the ban as giving children back their childhoods and parents peace of mind.

Controversy and Opposition
Not everyone supported the approach. Opposition to the ban included an open letter signed by 140 Australian and international academics arguing the ban was too simplistic and that systemic regulation is needed, while human rights advocates claimed it infringed on young people's rights, including access to information and privacy.
Critics argued that better platform regulation, improved digital literacy education, and systemic safety improvements would be more effective than an outright age ban. Some mental health organizations expressed concern that the ban could cut off young people from online support communities and resources.
Despite these objections, the Prime Minister backed the ban, which sped through Parliament: it was introduced on November 21 and passed on November 29, 2024, with little opportunity for public consultation.
Which Age Groups Are Affected?
The ban applies to anyone under 16 years old. This is notably higher than most existing minimum age requirements on social media platforms.

Why age 16?
Most social media platforms currently require users to be at least 13, based largely on the U.S. Children's Online Privacy Protection Act (COPPA). However, research on adolescent brain development and vulnerability to social media harms suggested that 13 might be too young.
The choice of 16 reflects growing understanding that:
Adolescent brains are still developing, particularly the prefrontal cortex responsible for impulse control, decision-making, and understanding long-term consequences. This makes teenagers particularly vulnerable to the addictive design features of social media.
Social comparison and peer pressure intensify during teenage years, making the curated perfection of social media feeds especially damaging to self-esteem and mental health.
Cyberbullying has peak impact during middle adolescence when social belonging feels most critical and rejection most devastating.
By age 16, teens have more developed critical thinking skills, emotional regulation, and identity formation that may help them navigate social media more safely.
Existing users under 16:
The law doesn't just apply to new accounts. The first task platforms have been given is identifying which of their users are under 16. Current users under 16 will lose access to their accounts when the law takes effect.
Research in September 2024 found that 84% of 8- to 12-year-olds are already on social media; parents were aware in 80% of cases and helped set up the account in 90% of cases. This means many Australian families will experience significant disruption when the ban takes effect.
What Platforms Are Included in the Ban?
As of November 21, 2025, it is eSafety's view that Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick and Reddit are age-restricted platforms. This list covers the vast majority of social media platforms popular with young people.
The Included Platforms
Meta's platforms: Facebook, Instagram, and Threads all fall under the ban. These are among the most widely used social platforms globally.
ByteDance's TikTok: The short-form video app immensely popular with teenagers is included.
Snapchat: Known for its disappearing messages and filters, Snapchat is banned for under-16s.
X (formerly Twitter): The social media platform previously known as Twitter is included.
YouTube: The government announced it would extend its social media age limit to include YouTube following advice from eSafety Commissioner Julie Inman Grant. This inclusion was controversial given YouTube's educational content, but the platform was deemed to function as social media through its comments, live streaming, and community features.
Reddit: The discussion forum platform is included in the ban.
Twitch and Kick: Live-streaming platforms where users can broadcast and interact are also covered.
Exempted Platforms
Not all online services are included. Services used primarily for messaging, health care, and education, such as Messenger Kids, WhatsApp, Kids Helpline and Google Classroom, are expected to be exempt.
Gaming platform Roblox will not be banned, despite having social features. The distinction seems to be that platforms primarily focused on gaming, messaging, or education with incidental social features may not meet the definition of "age-restricted social media platforms."
Pinterest would not be included as part of the ban, presumably because its focus on visual discovery and inspiration rather than social interaction places it outside the targeted category.
The eSafety Commissioner has published assessment guidance to help platforms determine whether they fall under the ban's definition of age-restricted social media.
How Will the Ban Be Enforced?
This is perhaps the most complex and contentious aspect of the new law. Enforcing an age ban online is technically challenging, raises privacy concerns, and will require significant technological solutions.
Age Verification Technologies
The Australian government has invested $6.5 million in an Age Assurance Technology Trial to evaluate the effectiveness, feasibility, and privacy implications of tools to verify or estimate age online, with results due in mid-2025.

The trial is examining various approaches including:
Identity document verification: Users upload government-issued ID like driver's licenses or passports. This is highly accurate but raises significant privacy concerns about platforms collecting and storing sensitive identification documents.
Facial age estimation: Artificial intelligence analyzes users' faces to estimate age. This is less invasive than document verification but raises concerns about accuracy, bias, and consent, particularly regarding who could access sensitive information collected for age verification and children's capacity to consent.
Third-party age verification services: Independent services verify age without directly sharing identification documents with social media platforms, creating a privacy buffer.
Behavioral inference: Systems analyze user behavior patterns to estimate age, though this is the least accurate method.
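To make the third-party approach above concrete, here is a minimal sketch of the idea: an independent verifier checks a user's ID privately and hands the platform only a signed yes/no claim, so the platform never sees the identity document. Every name here is illustrative, and a real system would use stronger asymmetric signatures (such as signed JWTs) rather than a shared secret; this is not the actual design being trialled.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret between the verification service and the
# platform; a real deployment would use asymmetric keys instead.
VERIFIER_KEY = b"demo-secret-key"

def issue_age_token(user_id: str, is_over_16: bool) -> str:
    """The third-party verifier inspects the ID document privately and
    returns only a signed age claim -- never the document itself."""
    claim = json.dumps({"user": user_id, "over_16": is_over_16})
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(claim.encode()).decode() + "." + sig

def platform_accepts(token: str) -> bool:
    """The platform checks the signature and reads the claim; it learns
    the user's age band, not their name or document details."""
    payload, sig = token.rsplit(".", 1)
    claim = base64.b64decode(payload)
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(claim)["over_16"]

token = issue_age_token("user-123", True)
print(platform_accepts(token))  # True: valid signature, user is over 16
```

The point of the buffer is visible in the code: the sensitive document stays with the verifier, and the platform only ever handles the short signed claim.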
Meta announced that from December 4 their platforms (Instagram, Facebook and Threads) would be removing users under the age of 16 ahead of the December 10 deadline, with users able to scan their faces or provide an identity document to prove their age.
Platform Responsibilities
Social media companies bear the full responsibility for compliance. The government is making a definitive statement saying: "We need to put the burden back on you, companies".
Platforms must take "reasonable steps" to prevent underage access. What constitutes "reasonable steps" is still being defined through consultations between the government and industry, guided by the eSafety Commissioner.
The eSafety Commissioner is already working with key platforms where Australian children are present in large numbers to ensure they are getting ready for the social media age restrictions.
Penalties for Non-Compliance
The eSafety Commissioner will be able to take legal action against social media companies that have not pursued reasonable steps to bar users under the age of 16, with fines of up to $49.5 million.
These substantial penalties are designed to ensure platforms take the law seriously and invest in effective age verification systems rather than treating compliance as optional.

No Penalties for Users or Parents
Critically, enforcement will be through assessing fines on social media companies for failing to take such steps, with no consequences for parents or children who violate the restrictions.
This approach recognizes that children will likely attempt to circumvent the ban, and parents may struggle to completely prevent access. The law aims to make it genuinely difficult for underage users to access platforms rather than punishing families for trying.
Privacy and Ethical Concerns
Key ethical concerns include who could access sensitive information collected for age verification, such as identity documents or face identification, children's capacity to consent, and potential disadvantages for specific groups (for example, those with face coverings).
Collecting biometric data or identification documents from potentially millions of users creates massive privacy risks. Data breaches could expose sensitive personal information. Facial recognition technology has known accuracy issues, particularly with certain ethnic groups and individuals with disabilities.
The challenge is creating systems that effectively verify age while respecting privacy rights and avoiding discriminatory impacts.

What Happens Next?
The law takes effect December 10, 2025, but significant work remains before then.
Implementation Timeline
The ban takes effect December 10, 2025, giving platforms approximately one year from the law's passage to develop and implement compliant systems.
That said, December 10 won't be a switch flipped off, with every user under 16 automatically losing their apps overnight. Implementation will be gradual as platforms identify underage users and roll out age verification systems.
Ongoing Consultations
In May 2025, eSafety called for members of the Australian community, experts and online service providers to express their interest in being consulted on implementation of the age restrictions, including the guidelines that age-restricted social media platforms will have to follow.
These consultations are defining what "reasonable steps" means in practice, how privacy will be protected, and how enforcement will work.
International Attention
Policymakers in other nations say they're watching Australia's age ban closely and planning moves of their own to protect young users. Countries worldwide are monitoring Australia's experience to inform their own approaches to protecting children online.
Prime Minister Albanese presented the model at the UN General Assembly in September 2025, potentially influencing global standards for children's online safety.
Legal Challenges
The Sydney-based Digital Freedom Project filed a constitutional challenge in the High Court on November 26, 2025, shortly before the law's December 10 start date, arguing that it violates the implied freedom of political communication in the Constitution.
Communications Minister Anika Wells referred to the challenge when she told Parliament her government remained committed to the ban taking effect on schedule, stating: "We will not be intimidated by legal challenges. We will not be intimidated by Big Tech".
The outcome of this legal challenge could affect implementation, though the government appears determined to proceed as planned.
What Parents Should Do
For Australian parents, here are practical steps to prepare:
Talk to your children now about the upcoming changes. Explain why the law exists and how it will affect them. Open communication reduces anxiety and rebellion.
Explore alternative ways to stay connected with friends and community that don't involve banned platforms. Encourage phone calls, video chats through exempt services, in-person meetups, and participation in offline activities.
Consider how your family uses technology and whether this change might actually be beneficial. Many parents report feeling relieved at having external support for limiting social media access.
Stay informed about which platforms are exempt so your children can still access educational resources, health support, and appropriate online communities.
Prepare for the adjustment period. Your child may experience genuine grief at losing access to online communities. Validate these feelings while maintaining boundaries.
Model healthy technology use yourself. If you spend hours on the same platforms you're telling your children they can't access, the message lacks credibility.
The outcome of this legal challenge could affect implementation, though the government appears determined to proceed as planned.
What Parents Should Do
For Australian parents, here are practical steps to prepare:
Talk to your children now about the upcoming changes. Explain why the law exists and how it will affect them. Open communication reduces anxiety and rebellion.
Explore alternative ways to stay connected with friends and community that don't involve banned platforms. Encourage phone calls, video chats through exempt services, in-person meetups, and participation in offline activities.
Consider how your family uses technology and whether this change might actually be beneficial. Many parents report feeling relieved at having external support for limiting social media access.
Stay informed about which platforms are exempt so your children can still access educational resources, health support, and appropriate online communities.
Prepare for the adjustment period. Your child may experience genuine grief at losing access to online communities. Validate these feelings while maintaining boundaries.
Model healthy technology use yourself. If you spend hours on the same platforms you're telling your children they can't access, the message lacks credibility.
What Is the Social Media Ban for Kids in Australia?
The Online Safety Amendment (Social Media Minimum Age) Act 2024 introduces a mandatory minimum age of 16 for accounts on certain social media platforms. This isn't just a recommendation or guideline; it's an enforceable law with significant penalties for non-compliance.
The key elements of the law:
The amendment requires platforms to take reasonable steps to enforce the ban, with fines of up to $49.5 million for non-compliance. This places the burden squarely on social media companies, not on parents or children.
Parents cannot give their consent to let under-16s use these platforms. Unlike many current systems where parental permission allows younger users, this ban has no parental override option.
There are no penalties for children or parents who attempt to access these platforms. The law targets the companies, not the families.
From December 10, 2025, age-restricted social media platforms will have to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account. This applies to both new accounts and existing accounts that children currently have.
The law represents a fundamental shift in how online safety is approached. Rather than putting responsibility on children to navigate safe spaces or parents to constantly monitor usage, the government is requiring platforms themselves to exclude young users entirely.

Why Did Australia Propose This Ban?
The decision didn't happen in a vacuum. Several factors influenced the government's decision, including a Joint Parliamentary Select Committee launched in May 2024 to investigate the effect of social media on Australians, and media campaigns by News Corp and the 36months movement advocating raising the minimum social media age to 16 years.
Rising Mental Health Concerns
The campaigns linked rising teenage mental health issues to social media, garnering support from parents, politicians, educators, clinicians, and 127,000 petition signatories. Australian parents, like parents worldwide, have watched anxiety, depression, and mental health crises among young people escalate alongside increased social media use.
The correlation between heavy social media use and poor mental health outcomes in adolescents has become impossible to ignore. Issues like cyberbullying, social comparison, FOMO (fear of missing out), and exposure to harmful content have contributed to what many describe as a youth mental health crisis.
Political Momentum
The ban received bipartisan support, and a poll in November 2024 suggested that 77% of Australians supported the ban. This rare political consensus across parties, combined with strong public support, created the momentum for rapid legislative action.
Prime Minister Anthony Albanese described social media as a "scourge" and said he wants people to spend more time on the footy field or the netball court than on their phones. The government framed the ban as giving children back their childhoods and parents peace of mind.

Controversy and Opposition
Not everyone supported the approach. Opposition to the ban included an open letter signed by 140 Australian and international academics arguing the ban was too simplistic and that systemic regulation is needed, while human rights advocates claimed it infringed on young people's rights, including access to information and privacy.
Critics argued that better platform regulation, improved digital literacy education, and systemic safety improvements would be more effective than an outright age ban. Some mental health organizations expressed concern that the ban could cut off young people from online support communities and resources.
Despite these objections, the Prime Minister backed the ban, which sped through Parliament: it was introduced on November 21, 2024, and passed on November 29, leaving little opportunity for public consultation.
Which Age Groups Are Affected?
The ban applies to anyone under 16 years old. This is notably higher than most existing minimum age requirements on social media platforms.

Why age 16?
Most social media platforms currently require users to be at least 13, based largely on the U.S. Children's Online Privacy Protection Act (COPPA). However, research on adolescent brain development and vulnerability to social media harms suggested that 13 might be too young.
The choice of 16 reflects growing understanding that:
Adolescent brains are still developing, particularly the prefrontal cortex responsible for impulse control, decision-making, and understanding long-term consequences. This makes teenagers particularly vulnerable to the addictive design features of social media.
Social comparison and peer pressure intensify during teenage years, making the curated perfection of social media feeds especially damaging to self-esteem and mental health.
Cyberbullying has peak impact during middle adolescence when social belonging feels most critical and rejection most devastating.
By age 16, teens have more developed critical thinking skills, emotional regulation, and identity formation that may help them navigate social media more safely.
Existing users under 16:
The law doesn't just apply to new accounts. The first task platforms have been given is identifying which of their existing users are under 16. Current users under 16 will lose access to their accounts when the law takes effect.
Research in September 2024 found that 84% of 8- to 12-year-olds are already on social media; in 80% of cases parents were aware, and in 90% of cases parents had helped set up the accounts. This means many Australian families will experience significant disruption when the ban takes effect.
What Platforms Are Included in the Ban?
As of November 21, 2025, it is eSafety's view that Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick and Reddit are age-restricted platforms. This list covers the vast majority of social media platforms popular with young people.
The Included Platforms
Meta's platforms: Facebook, Instagram, and Threads all fall under the ban. These are among the most widely used social platforms globally.
ByteDance's TikTok: The short-form video app, immensely popular with teenagers, is included.
Snapchat: Known for its disappearing messages and filters, Snapchat is banned for under-16s.
X (formerly Twitter): The social media platform previously known as Twitter is included.
YouTube: The government announced it would extend its social media age limit to include YouTube following advice from eSafety Commissioner Julie Inman Grant. This inclusion was controversial given YouTube's educational content, but the platform was deemed to function as social media through its comments, live streaming, and community features.
Reddit: The discussion forum platform is included in the ban.
Twitch and Kick: Live-streaming platforms where users can broadcast and interact are also covered.
Exempted Platforms
Not all online services are included. Services used for health care and education such as Messenger Kids, WhatsApp, Kids Helpline and Google Classroom are expected to be exempt.
Gaming platform Roblox will not be banned, despite having social features. The distinction seems to be that platforms primarily focused on gaming, messaging, or education with incidental social features may not meet the definition of "age-restricted social media platforms."
Pinterest would not be included as part of the ban, presumably because its focus on visual discovery and inspiration rather than social interaction places it outside the targeted category.
The eSafety Commissioner has published assessment guidance to help platforms determine whether they fall under the ban's definition of age-restricted social media.
How Will the Ban Be Enforced?
This is perhaps the most complex and contentious aspect of the new law. Enforcing an age ban online is technically challenging, raises privacy concerns, and will require significant technological solutions.
Age Verification Technologies
The Australian government has invested $6.5 million in an Age Assurance Technology Trial to evaluate the effectiveness, feasibility, and privacy implications of tools to verify or estimate age online, with results due in mid-2025.

The trial is examining various approaches including:
Identity document verification: Users upload government-issued ID like driver's licenses or passports. This is highly accurate but raises significant privacy concerns about platforms collecting and storing sensitive identification documents.
Facial age estimation: Artificial intelligence analyzes users' faces to estimate age. This is less invasive than document verification but raises concerns about accuracy, bias, and children's capacity to consent to having their faces scanned.
Third-party age verification services: Independent services verify age without directly sharing identification documents with social media platforms, creating a privacy buffer.
Behavioral inference: Systems analyze user behavior patterns to estimate age, though this is the least accurate method.
Meta announced that from December 4 its platforms (Instagram, Facebook and Threads) would begin removing users under the age of 16 ahead of the December 10 deadline, with users able to scan their faces or provide an identity document to prove their age.
Platform Responsibilities
Social media companies bear the full responsibility for compliance. The government's message to platforms has been blunt: "We need to put the burden back on you, companies".
Platforms must take "reasonable steps" to prevent underage access. What constitutes "reasonable steps" is still being defined through consultations between the government and industry, guided by the eSafety Commissioner.
The eSafety Commissioner is already working with key platforms where Australian children are present in large numbers to ensure they are getting ready for the social media age restrictions.
Penalties for Non-Compliance
The eSafety Commissioner will be able to take legal action against social media companies that fail to take reasonable steps to bar users under the age of 16, with fines of up to $49.5 million.
These substantial penalties are designed to ensure platforms take the law seriously and invest in effective age verification systems rather than treating compliance as optional.

No Penalties for Users or Parents
Critically, enforcement will operate through fines on social media companies that fail to take reasonable steps, with no consequences for parents or children who circumvent the restrictions.
This approach recognizes that children will likely attempt to circumvent the ban, and parents may struggle to completely prevent access. The law aims to make it genuinely difficult for underage users to access platforms rather than punishing families for trying.
Privacy and Ethical Concerns
Key ethical concerns include who could access sensitive information collected for age verification, such as identity documents or face identification, children's capacity to consent, and potential disadvantages for specific groups (for example, those with face coverings).
Collecting biometric data or identification documents from potentially millions of users creates massive privacy risks. Data breaches could expose sensitive personal information. Facial recognition technology has known accuracy issues, particularly with certain ethnic groups and individuals with disabilities.
The challenge is creating systems that effectively verify age while respecting privacy rights and avoiding discriminatory impacts.

What Happens Next?
The law takes effect December 10, 2025, but significant work remains before then.
Implementation Timeline
This gave platforms approximately one year from the law's passage in November 2024 to develop and implement compliant systems.
On December 10, 2025, there will be no switch flipped that makes every under-16 user's apps instantly disappear. Implementation will be gradual as platforms identify underage users and roll out age verification systems.
Ongoing Consultations
In May 2025, eSafety called for members of the Australian community, experts and online service providers to express their interest in being consulted on implementation of the age restrictions, including the guidelines that age-restricted social media platforms will have to follow.
These consultations are defining what "reasonable steps" means in practice, how privacy will be protected, and how enforcement will work.
International Attention
Policymakers in other nations say they are watching Australia's age ban closely, with some planning measures of their own to protect young users. Countries worldwide are monitoring Australia's experience to inform their own approaches to children's online safety.
Prime Minister Albanese presented the model at the UN General Assembly in September 2025, potentially influencing global standards for children's online safety.
Legal Challenges
The Sydney-based Digital Freedom Project filed a constitutional challenge to the law in the High Court on November 26, 2025, two weeks before it was due to take effect, arguing that it violates the implied freedom of political communication in the Constitution.
Communications Minister Anika Wells referred to the challenge when she told Parliament her government remained committed to the ban taking effect on schedule, stating: "We will not be intimidated by legal challenges. We will not be intimidated by Big Tech".
The outcome of this legal challenge could affect implementation, though the government appears determined to proceed as planned.
What Parents Should Do
For Australian parents, here are practical steps to prepare:
Talk to your children now about the upcoming changes. Explain why the law exists and how it will affect them. Open communication reduces anxiety and rebellion.
Explore alternative ways to stay connected with friends and community that don't involve banned platforms. Encourage phone calls, video chats through exempt services, in-person meetups, and participation in offline activities.
Consider how your family uses technology and whether this change might actually be beneficial. Many parents report feeling relieved at having external support for limiting social media access.
Stay informed about which platforms are exempt so your children can still access educational resources, health support, and appropriate online communities.
Prepare for the adjustment period. Your child may experience genuine grief at losing access to online communities. Validate these feelings while maintaining boundaries.
Model healthy technology use yourself. If you spend hours on the same platforms you're telling your children they can't access, the message lacks credibility.
You are not the only one asking this
Is social media currently banned for kids in Australia?
Not yet. The ban takes effect December 10, 2025. Currently, most social media platforms have minimum age requirements of 13, but these are rarely enforced. Australian children under 16 can still legally access social media platforms until the December 2025 implementation date. However, platforms like Meta have begun removing underage users ahead of the deadline, with Meta announcing removal of users under 16 starting December 4, 2025. After December 10, 2025, the ban will be fully enforceable with significant penalties for non-compliant platforms.
When will the new law come into effect?
The law takes effect on December 10, 2025. The legislation passed parliament on November 29, 2024, and companies were given one year to implement age verification systems and comply with the new requirements. This implementation date is firm despite legal challenges, with the government committing to enforcing the ban as scheduled. Between now and December, platforms must finalize their age verification technologies in coordination with the eSafety Commissioner, who is leading industry consultations and technical guidelines.
What age will the ban apply to?
The ban applies to anyone under 16 years old. This is higher than most current platform minimums of 13. The choice of 16 reflects research on adolescent development, vulnerability to social media harms, and the age at which young people may have more developed critical thinking and emotional regulation skills. There are no exceptions or provisions for parental consent. Even with parental permission, children under 16 cannot legally have accounts on banned platforms after December 10, 2025.
Which social media platforms are affected?
As of November 2025, the eSafety Commissioner has identified Facebook, Instagram, Snapchat, Threads, TikTok, Twitch, X, YouTube, Kick, and Reddit as age-restricted platforms that fall under the ban. This covers the vast majority of popular social media used by young people. However, messaging services like WhatsApp and Messenger Kids, educational platforms like Google Classroom, health support services like Kids Helpline, and gaming platforms like Roblox are exempt. Pinterest is also not included in the ban. The list continues to be refined and updated as the eSafety Commissioner evaluates which platforms meet the legal definition of age-restricted social media.
How will age verification be enforced?
Enforcement is the responsibility of social media companies, who must take "reasonable steps" to prevent underage access or face fines up to $49.5 million. The government is conducting an Age Assurance Technology Trial to evaluate various verification methods including identity document upload, facial age estimation using AI, third-party verification services, and behavioral inference. Results from this trial, due in mid-2025, will guide implementation standards. Meta has announced users will be able to scan their faces or provide identity documents to prove their age. The exact verification methods will be determined through ongoing consultations between government and industry, balancing effectiveness with privacy protection.
Can parents override the ban?
No. Unlike many current systems where parental permission allows younger users, this ban has absolutely no parental override provision. Even if parents believe their child is mature enough for social media or want to allow access for specific reasons, the law prohibits children under 16 from having accounts on restricted platforms regardless of parental consent. The government designed the law this way to put responsibility on platforms rather than on individual family decisions. There are no penalties for parents or children who attempt to circumvent the ban, but platforms must prevent access regardless of parental wishes.
Australia's social media ban has caught the attention of governments around the world. Countries like Norway, France, and the United Kingdom are watching the implementation closely and considering similar restrictions for their own young people. In the United States, several states have already introduced their own versions of age-based social media laws, though these vary widely in approach and scope. The European Union is also monitoring Australia's experience as it continues developing its own online safety standards for children.
As the December 2025 deadline approaches, Australia is essentially running a large-scale experiment that will provide valuable insights for parents, policymakers, and platforms worldwide about protecting children in digital spaces.