There were a noisy few who said the move would never work, including one of the few organisations here charged with keeping Kiwi kids safe online: Netsafe.
Netsafe CEO Brent Carey said the move would push harmful behaviour underground. He argued that if we allow 14-year-olds to be left home alone, then surely they should also be allowed to use Snapchat, implying that we are underestimating their ability to navigate online spaces.
This analogy is deeply flawed. If we trust young teens to stay home alone, does that mean we should also allow them to gamble, drink alcohol or drive cars? These comparisons fall short when we account for the unique dangers posed by social media – platforms deliberately designed to be addictive and filled with harmful content that significantly impacts developing minds.
So, just as we delay driving and alcohol consumption until a person’s frontal lobe is more fully developed, we should do the same with social media. This isn’t about banning social media outright, but about responsibly delaying its introduction until young minds are better equipped to handle its risks.
Platforms like Snapchat prioritise engagement, using algorithms that push harmful content and exacerbate mental health issues. Dr Samantha Marsh, a senior research fellow at the University of Auckland, puts it this way:
“According to Carey, we should make children responsible for managing their own health and wellbeing by teaching them to use these addictive platforms responsibly. But he suggests doing this at precisely the age when the human brain is most vulnerable to addiction. This approach ignores the overwhelming evidence that children and adolescents, whose brains are still developing, are ill-equipped to self-regulate in the face of such powerful, addictive technologies.”
Additionally, it’s essential to consider who funds the organisations involved in shaping these debates. When the media seek comment from “social media experts” on the payroll of companies such as Meta, it raises questions about bias in their advocacy for lighter regulation and makes it harder to trust that their recommendations serve children’s best interests.
Instead, we need to heed the warnings from leading health authorities. The US Surgeon General has issued an urgent advisory on the harmful effects of social media on children and adolescents. He emphasised that social media platforms, designed to capture attention, expose young users to content that can deeply harm their mental health, calling for immediate action to mitigate these risks.
The reality is that one in four Kiwi teens will experience mental illness before turning 18, and while mental health medication use is at an all-time high, we’re failing to address the root causes or implement effective early interventions. Anxiety is rising across all age groups, with the most dramatic increase – 328.9% – among 15- to 24-year-olds between 2011 and 2020.
Among 15- to 19-year-old girls, public hospital discharges for intentional self-harm have risen by 138% since 2010. The situation is even more alarming for younger girls (aged 10-14), who saw a 242% spike in self-harm-related discharges in 2020 alone. Overall, hospital discharges for self-harm among 10- to 19-year-olds surged by 207%, from 833 cases in 2009 to 2558 in 2019.
This issue also disproportionately affects the most vulnerable families. Tech executives who publicly advocate the benefits of digital technology for children, especially vulnerable children, often take extreme measures to keep it away from their own. Many draw up “nanny contracts” to limit screen time and send their children to expensive, screen-free Waldorf schools near Meta’s and Google’s headquarters. Without stronger oversight, the digital divide will keep widening, leaving vulnerable children without adequate protection from the harms of social media.
The recent circulation of deepfake nudes in Canterbury schools underscores the urgency of updated legislation and more robust interventions. Students are taking photos of classmates in school uniform, “undressing” them with an app, and sharing the images around school.
Holly Brooker from Makes Sense says: “We hear stories of kids, often boys, sending horrific, gruesome content to each other on social media sites and WhatsApp. Some examples include porn and sexual abuse, videos of executions, extreme violence, videos of animal torture, to name a few. Often the recipients don’t want to receive or see this content. Grooming on social media and gaming chat is also an increasing problem for young people in New Zealand.”
While Australia has enacted stronger protections, New Zealand’s regulations lag behind.
If left unaddressed, the economic and healthcare consequences will be severe, with mental health issues eroding productivity and further straining an already fragile healthcare system.
Kiwis take pride in New Zealand being a great place to raise children, but our failure to act on the significant harm kids are facing online is undermining this ideal. It’s time to uphold our legacy of bold action and do better for our kids. One of the first steps is to recognise how some organisations, meant to advocate for children, are instead partnered with those who perpetuate the harm. This conflict of interest must be addressed if we are serious about safeguarding our children’s futures.