Moves to safeguard children from social media harm are ramping up, but with technology racing ahead, worried parents say New Zealand needs to tighten access.
By the time Sophie (not her real name) was 14, she and her circle of friends were all using social media. In the beginning, their TikTok, Snapchat and Instagram feeds were full of “fun” content: dance routines and shared memes that made them laugh. The Auckland teen often spent hours a day browsing the platforms, but what she was scrolling through changed over months from lighthearted to something much darker. Her circle began sharing videos on self-harming, eating disorders and depression – content they had created themselves and material from others.
When the friends “liked” each other’s posts, algorithms driving the feeds channelled similar content to them. It became more and more disturbing.
Sophie recalls a sleepless night after a friend posted photos on Snapchat of pills she said she was going to take to end her life. But photos and messages disappear on Snapchat after a few seconds, so even if she had wanted to alert adults, there would have been no evidence of the threat. (The girl did not take the pills.)
“The self-harm videos in my TikTok feed became competitive,” Sophie, now 17, says. “Girls and boys I didn’t know would be showing their scars, and you’d look in the comments, and it would be like, ‘Well, I self-harmed five times,’ and someone would respond, ‘Well, I did 10 times.’
“I’d be watching this, and I’d feel bad because I’d think, ‘I only did twice.’ It just fed this crazy mindset.”
Concerned about her rising levels of anxiety and suspicious of what may have been driving it, Sophie’s parents imposed strict phone restrictions. “It was like dealing with an addict,” says her mother Emily, a tech executive. “She’d be in total meltdown.”
The platforms have since banned the search phrase “self-harm”. If you type that into TikTok, Instagram or Snapchat you’ll be directed to an 0800 number. But kids know there are ways around it, and that concerns Sophie, now in Year 13. She is one of the students joining the ranks of concerned parents calling for a ban on social media access for under-16s. “I’d love it if social media didn’t exist now,” she says. “Even now. I find the platforms feed my anxiety.”
A new parent-led lobby group, B416, has been lobbying the government for restrictions on social media access for younger Kiwis. Co-founded and chaired by entrepreneur Cecilia Robinson (My Food Bag, Tend healthcare), it has other prominent names on board – Outward Bound CEO Malindi MacLean, Kiwibank director and former Xero executive Anna Curzon, Zuru toys co-founder and entrepreneur Anna Mowbray and stockbroker Blair Knight. Other high-profile parents with big social media followings involved include Gemma McCaw and Matilda Green. All are parents of teens or younger children.
“Parents feel absolutely helpless,” says Emily, also part of B416. “The only way to help our kids is to legislate and educate.”

Political support
Inspired by Australia’s move to restrict social media access for under-16s from December, the group is delighted by news of a member’s bill to restrict under-16s’ access to social media, which it hopes will become a government bill.
Backed by Prime Minister Christopher Luxon, National MP Catherine Wedd this week announced the bill, which aims to follow Australia’s lead in denying access to most social media platforms for under-16s (Australia’s law exempts YouTube). The MP for Tukituki, in Hawke’s Bay, says the Social Media Age-Appropriate Users Bill would put the onus on social media companies to verify someone is over 16 before they access platforms.
It’s not a government bill because coalition partner Act opposes it, so Wedd’s proposal must wait to be drawn by parliamentary ballot. Act leader David Seymour dismissed the solution as “simple, neat and wrong”.
“Just slapping on a ban hastily drafted won’t solve the real problem,” he told the Herald. “The real problem has to involve parents.”
His Act colleague, Internal Affairs Minister Brooke van Velden, says the new bill hasn’t changed her position. “As the minister responsible I am not looking to implement a minimum age for using social media.”
She notes social media companies have signed up to the Code of Practice for Online Safety and Harms – a voluntary code – under which they are required to minimise harmful material online such as violent content and bullying.
She maintains it is up to whānau to decide when and how children should access social media. On Australia’s ban, van Velden says, “It will be interesting to see how this plays out in operation.”

Luxon, however, hopes Wedd’s bill will attract opposition party support. A spokesperson told the Listener National was open to pursuing other ways to get the bill on the order paper.
Robinson is pleased. “We want to see a bipartisan approach. It’s fantastic to get the support from the Prime Minister but we want this to be made a priority.”

International momentum
B416 is part of a growing global movement of parents demanding controls on not only social media but also screens in small hands. Smartphone Free Childhood UK has 200,000 members and offshoots in 30 countries, including here.
And B416 is not the only local initiative calling for action. Tauranga man Rory Birkbeck founded tech company Safe Surfer, which markets child-proof filtering systems on devices.
Birkbeck paints a dark picture of the online environment and says New Zealand, with its voluntary safety codes, has one of the most unprotected and unregulated regimes in the world.
Parents are right to be worried, he says. “For a lot of these platforms, it’s like a tech arms race; they’re trying to capture our attention on the phone.
“It’s now not just about capturing our time and attention, it’s intimacy. It’s proven that Big Tech can get more engagement through friendship and intimacy, so they’ve brought out platforms such as character.ai, which are trying to develop relationships with our children.”
And though he thinks awareness of risk is growing among parents, Big Tech is smarter. “There are about 10 million kids on character.ai. This opens up a whole new sexual abuse category. We need a national filtering system and to get app store accountability in place.”

Hands-off approach
Moves to improve regulation of online standards, via the Safer Online Services and Media Platforms review, were scrapped last year after a process stretching back to 2021. A discussion paper released in June 2023 proposed codes of practice setting out safety obligations for major and high-risk platforms. Regulatory efforts would focus on areas of highest risk, such as harm to children or content promoting terrorism. But in April last year, van Velden shelved the plans, saying illegal content was already being policed and the concepts of “harm” and “emotional wellbeing” were open to interpretation.
That leaves New Zealand with multiple agencies with various responsibilities for dealing with harmful content – and confusion about who does what and where to take complaints.
Our most prominent online policing agency, Netsafe, is an independent charity (though it attracts significant funding from government agencies). Its data shows four in 10 teens are using five or more social media platforms regularly. Our young people are among the heaviest screen users in the world, spending 42 hours a week on a screen compared with the OECD average of 35 hours. The average age for getting a smartphone here is 11.
Netsafe chief executive Brent Carey last year dismissed Australia’s social media ban for under-16s as “wrongheaded”, difficult to enforce, and an election-year distraction. But Carey has a shopping list of things he would like to see: greater platform accountability, more controls, additional default privacy and security settings for teens and a review of the decade-old Harmful Digital Communications Act. He believes illegal content should be removed more quickly, and wants better content filtering and blocking, improvements to ad preferences, and more individual control over platform notifications.
Media and Communications Minister Paul Goldsmith told the Listener there are no immediate plans to make changes to the act.

Rise of cyber harm
A raft of international studies shows the more time spent on screens, the greater the risk to mental health. Credible research yields disturbing statistics: one in six teens has been cyberbullied in the past month, says the World Health Organisation; reports of sexual grooming on Snapchat have risen 550% since 2021; and a well-regarded US study reported heavy social media users are twice as likely to be depressed as non-users.
Cecilia Robinson calls it “our seatbelt moment”. “Social media is absolutely a mental health issue,” she says. “We need to act now before the next generation becomes collateral damage.”
B416 wants recognition that parents can’t fight this alone. “The tech companies are savvier than the general population,” says Anna Curzon. “It’s not fair to say to parents, ‘Hey, you decide, and you monitor your children’s phones.’”
National’s proposed bill is modelled on Australia’s Online Safety Amendment (Social Media Minimum Age) Act, which comes into force in December. The world-first legislation followed a campaign, 36 Months, led by Sydney radio host Michael Wipfli and film-maker Robert Galluzzo, which was aimed at raising the age of access to social media to 16. A petition gathered 125,000 signatures and Prime Minister Anthony Albanese listened.
It requires the operators of TikTok, Instagram, Snapchat and Facebook to take “reasonable steps” to restrict access for under-16s. Non-compliance can result in multimillion-dollar fines for “systemic breaches”.
The moves have, unsurprisingly, not met with universal applause from free speech advocates or tech companies. The Australian Human Rights Commission has said excluding under-16s from Instagram, Facebook, Snapchat and TikTok is censorship.
In this country, internet law expert and Listener columnist David Harvey says National’s proposal could fall foul of freedom of expression provisions in the Bill of Rights. “The internet is primarily a system for communication,” the retired judge says, “so any attempt to regulate the internet has implications for freedom of expression.”
But already there is change as major platforms try to head off mandated restrictions. Meta’s Instagram has introduced new teen accounts (54 million users worldwide are automatically opted in), while TikTok has introduced family pairing, allowing parents to monitor content. It’s also blocked direct messaging for under-16s and put a timer on, blocking 13- to 17-year-olds from using it for more than 60 minutes a day, and only until 10pm.
In the UK, new children’s codes will be ushered in through online safety legislation from July. The big social media platforms must use effective age checks to identify under-18 users and they face fines if algorithms fail to filter out harmful content.
Texas has just advanced a bill banning under-18s from creating social media accounts, and platforms will have to verify the ages of users.
New York State passed laws last year requiring parental consent for under-18s to sign up to social media platforms; Utah has introduced age-verification rules putting the onus on Apple and Google, rather than individual apps such as Instagram and X, to check ages.
In Florida, a mother is suing an AI chatbot company after her 14-year-old son died by suicide following a romantic relationship with an AI character.

Developing brains
B416’s Anna Curzon agrees parents cannot keep up with what the platforms are doing, but their digital native kids don’t take long to find ways to skirt restrictions. “Kids are so much savvier than their parents. They can download an app and put it behind a calculator icon on their phone.
“New Zealand is an outlier with social media regulations for our children. We have nothing here to protect them. And we haven’t had a public debate.”
Although parent lobbyists and many school principals (see “Social exclusion”, below) agree platforms need to be forced to put more safeguards in place for children and youth, how we go about it – and how far we go – is strongly debated. University of Auckland public health researcher Dr Samantha Marsh, who focuses on child and youth health, argues for an outright ban for under-16s. She says the response to reports of social media damage to developing brains is reminiscent of “when Big Tobacco lobbyists said that cigarettes weren’t harmful to human health in the 1950s and 60s”.
Marsh’s research has explored links between social media use and youth mental health and cyberbullying, and even between screen use and obesity. With social media, she says, girls are particularly vulnerable from 11 to 13, while for boys it’s 14 to 15. “Adolescence is a time when kids are really driven to want to connect socially with people. They’re at the peak of their risk-taking behaviours and they’re reward-seeking and novelty-seeking,” she says.
“A ban isn’t the golden bullet. It’s just one piece of a very complex puzzle. The platforms might say they’re tinkering to make them safer but these are design-led platforms, and there are so many aspects that you can’t control. You can’t stop somebody from looking at a picture of a pretty girl online and comparing themselves to her and [it] affecting mental health. That’s not going to be filtered out.”
A member of B416, Marsh sends her own two children to a Rudolf Steiner school because she doesn’t want them learning on devices before age 13.
Makes Sense is another Kiwi parent-led lobby group and it, too, has a “hold the phone” policy, asking parents to hold off buying smartphones for kids until they’re 16. Until then, an older-style phone is just fine, and so much safer. Run by Auckland therapist Jo Robertson and communications executive Holly Brooker, Makes Sense focuses on sexual harm online. Robertson calls social media “the shopfront for the porn industry” and has a horror list of examples of content pre-teens and teens are exposed to, from grooming to snuff videos.

Targeted ads
It’s not just porn that sells its wares to customers via a phone screen. In recent testimony to a US Congress inquiry, former Meta employee Sarah Wynn-Williams said its Facebook subsidiary shared information with advertisers based on an individual’s posts. In Careless People, her bestselling insider view of Meta, the Kiwi tells how Facebook would track when girls deleted selfies so a beauty company could target them with product ads.
She wryly noted her colleagues didn’t allow their teens mobile phones, and Silicon Valley was “awash in wooden Montessori toys and shrouded in total screen bans”.
Associate Professor Sarah Hetrick, a specialist in psychological medicine at the University of Auckland, supports harm-reduction strategies rather than a social media ban for children. She believes Big Tech should be regulated to make platforms much safer and held to account for harmful content, rather than “letting them off the hook” with an age restriction.
Hetrick, who is principal adviser for the Suicide Prevention Office, is working on a youth suicide prevention tool, #chatsafe, which is close to being launched on social media platforms. It is modelled on one run in Australia, but will be sensitive to Māori needs.
The mother of two teens says her 13-year-old son has seen disturbing online content she wishes he hadn’t been exposed to. But she’s also concerned age restrictions would lead to exclusion for marginalised youth, especially rangatahi Māori, who find support, make friendships and enjoy a sense of community on Snapchat, TikTok and Instagram.
She says purpose-built apps and websites aren’t enough to support them – an argument some mental health groups made in Australia in submissions on its minimum-age bill. One, headspace, quoted research showing 73% of young people sought mental health support on social media.
Hetrick says social media is not inherently good or bad. Young people can establish and grow social connections online, and strengthen their friendships offline, too.
But Marsh, a colleague of Hetrick’s in the university’s faculty of medical and health sciences, doesn’t buy the “improving mental health” argument. “Young people need online spaces which are completely regulated, where they can get the right support that’s got evidence behind it. Not just another 15-year-old talking about their journey through self-harm.
“We also don’t want our kids reaching out constantly, 24/7, whenever anything goes wrong, to disclose information about themselves online, because currently they’re rewarded for that behaviour online.”
Cecilia Robinson acknowledges not everyone will agree with B416’s stance, but says, “This is increasingly an issue of equity. It’s unrealistic to expect parents to manage this alone, and without proper safeguards, it’s our most vulnerable children who will be most exposed and left even further behind.”

Social exclusion
It was her 12-year-old son asking for a smartphone that started Cecilia Robinson on the path to co-founding B416. Why did he need one? Because his friends had them, was the answer, and without one, he felt excluded. The founder of Tend medical clinics says the mental health burden in primary care horrifies her, and her research showed how social media plays a part in this.
A year later, her son stands out in his Auckland classroom: he can text or call from his Kid-Safe smartphone but he’s the only one who can’t access social media, so he’s socially excluded from platforms like Snapchat – and at times, from his friends, which is worrying, too.
“As a country, we need to provide new social norms – it shouldn’t be normal that all children are on Snapchat,” says Robinson. “Meta is benefiting off our kids.
“We’ve had a massive societal shift in terms of how our children are using technology, and I don’t think we as a society understand enough about what our kids are seeing, and is this content child-appropriate?
“We should start thinking about it the way we say that children shouldn’t have access to tobacco or alcohol or cars until they reach a certain age. We also need a minimum legal age for social media.”

B416 expects to win the support of many schools. Kate Gainsford is principal of Aotea College in Porirua and chair of the Secondary Principals Council. She says the government-mandated ban on phones in schools a year ago has been a huge success: students are talking more with one another and class is not being interrupted as much. But social media remains a widespread problem, and principals would like social media platforms to be more responsible at managing and moderating content.
“Schools on their own can’t have the level of safety we need for young people in this space. We need an informed movement supported and enforced by the entire community, not just schools.”
At Whangaparāoa College, on Auckland’s Hibiscus Coast, principal Steve McCracken and his team deal with social media fallout on a daily basis. He says young people feel too much pressure to be tuned into their devices, responding to notifications, or trying to keep up or seek attention.
McCracken argues that critics of a ban who focus on harm-reduction tools are missing the point. “Social media companies are using their technology to entice our young people and I just think we are fighting a losing battle.
“Until we as a country take a stand on this, it’s too tough. We’ve lost a generation and if we don’t do something now we’re going to lose more kids.”
A parent with young sons, McCracken plans to deny smartphone access to his boys as Robinson has done.