A principal, parents and a safety educator speak about the risks to children amid obsessive social media consumption. Video / Mike Scott / Carson Bluck
National MP Catherine Wedd has put a member’s bill into the ballot to ban social media for under-16s, based on the ban recently passed in Australia.
Act does not support the bill, so it is not a Government bill. It will need to be drawn from the ballot to come before the House.
It might eventually pass, if drawn, as other parties have expressed some support for the bill – but not without reservations.
There is a lot to unpack on the vexed issue of social media platforms, the harms they can cause to young people in particular, and the Government’s role in minimising those harms.
Social media platforms such as TikTok and Instagram have been associated with a dramatic increase in mental health problems among New Zealand teens.
While some researchers consider the evidence inconclusive, others point to a strong correlation between social media use and rising teenage anxiety, depression and loneliness.
Politicians and communities have long been calling on social media platforms to be more responsible. Despite content moderators, teen accounts (on Instagram), and easier ways for parents to see what their kids are doom-scrolling (TikTok’s Family Pairing), tech platforms have generally been reluctant to do anything that might hurt their bottom lines, which are built on maximising engagement.
This perceived inadequacy has led to governments around the world increasingly responding with regulation.
Following Australia’s lead, National MP Catherine Wedd has put forward a member’s bill that would ban social media for under-16s. Platforms such as X, Instagram and TikTok would have to take “all reasonable steps” to verify users were at least 16 before allowing them to create an account. Failure to do so would lead to fines of up to $2 million.
Tukituki MP Catherine Wedd with her bill banning social media for under-16s. Prime Minister Christopher Luxon, who supports the bill, is holding the biscuit tin from which members’ bills are drawn.
The bill will only come before the House if drawn from the members’ bill ballot, however, because it’s not a Government bill; Act doesn’t support it, saying the real solution should involve parents and calling instead for a select committee inquiry.
It sounds simple and is popular; a 1 News-Verian poll in December last year found 68% support for a ban for under-16s, while a Horizon poll found 74% support for an age limit, with 16 being the most popular age threshold.
This makes it politically appealing for Prime Minister Christopher Luxon, who supports it, while Labour leader Chris Hipkins is more circumspect but broadly supportive.
So why do some key stakeholders say it might be ineffective, unworkable, unfair, and even worse than the status quo?
The legal minimum
There’s a reason why the ban in Australia, passed last year, isn’t coming into effect until December: there are issues that need ironing out to ensure it’s workable.
One is how to verify users’ age, and how to do this without intruding on privacy (Wedd’s bill has a vague reference to privacy as a relevant factor). What data will be used, how will it be protected and securely stored, and for how long?
“I presume they’ll all have to prove their age,” Netsafe chief executive Brent Carey told the Herald.
“The whole verification is a minefield, with trials in Australia around whether it should be your photo or your document or facial recognition/biometrics. There’s a fine line between something that works, and something that’s a kind of Big Brother surveillance intrusion of privacy.”
There’s also a question of whether a ban could be easily circumvented with a VPN, which makes the device being used appear to be in a different country. This is how, for example, social media users in China avoid that country’s internet censorship.
Instagram is considered one of the worst social media platforms for youngsters' mental health. Photo / 123rf
Then there are questions over jurisdiction. Would a ban in New Zealand apply to a social media platform with servers in another country? If so, what enforcement action would be available if the platform simply didn’t comply?
The answers are yet to be fully tested. Regulatory frameworks in individual countries – such as Britain’s duty of care obligations for tech platforms – are only just getting started, and could yet be challenged.
“We’re really not going to see how effective it is for a little bit of time yet,” said University of Canterbury senior law lecturer Cassandra Mudgway, who researches online abuse and human rights.
There has been one interesting case of note, she said: the legal impasse after Australia’s eSafety Commissioner Julie Inman-Grant issued takedown notices to Meta and X over footage on their platforms of a stabbing in Sydney.
Inman-Grant went to a federal court in an attempt to enforce a global removal, but eventually dropped the case after the court initially ruled in X’s favour. X is now challenging the validity of the takedown order.
Mudgway said the whole episode showed tech companies’ general unwillingness to do anything more than the legal minimum.
“That is the risk of having only one or two countries enforcing certain standards: the tech companies are only going to do the minimum in each country.” This could be mitigated if several countries held the same standards.
‘Half-baked’
Even assuming a ban is workable and enforceable, there’s a question of fairness. It could amount to an unreasonable restriction on the right to freedom of expression – the reason the Australian Human Rights Commission opposed the ban in Australia.
A blanket ban would also unfairly deprive young users of the benefits of social media, including making it easier for the marginalised or vulnerable to connect with their peers.
“Take young people living with autism, for whom part of their main support system is talking with each other [on social media], or young kids dealing with anxiety or depression, and no longer being able to do that,” said InternetNZ chief executive Vivien Maidaborn.
A ban could see them turn to a dark corner of the internet “even less visible to adults, that we can’t even see let alone engage with”.
InternetNZ chief executive Vivien Maidaborn says New Zealanders are right to be concerned about social media harming young people.
“Nothing like this proposal should happen without a really robust process with those affected communities, because simple solutions very rarely have the outcome you want,” she said.
“The unintended consequences can do as much harm as the current harm.”
“But this as a solution feels half-baked, a simplistic approach that risks being symbolic rather than effective.
“It needs refining in terms of penalties – $2m is negligible for global platforms [the Australian law enables fines 27 times greater] – robust enforcement, privacy protections, and a holistic approach to online safety, and how these big media platforms operate.”
She and Carey both called for more education.
“Education from a young age about online safety isn’t there at the moment,” Carey said, noting the refreshed technology curriculum was yet to land.
He said the proposed ban was a “great conversation starter”, but he wanted a whole-of-government review of online safety.
That included a review of the Harmful Digital Communications Act 2015 (HDCA), which enables Netsafe to handle complaints about online material. Last year there were 28,468 online harm reports, including 6272 harmful digital communications complaints.
“We were one of the first jurisdictions in the world to bring in alternative dispute resolution for a social media dispute. We sit between the alleged producer of the content, the victim, and the platform, and try to resolve a matter before it can go to a court,” Carey said.
“But I’m a bit worried because demand to Netsafe has been growing 20% every year. We’re not in crisis mode, but we’re sounding the alarm.
“We’re working with really old legislation. It’s too important an issue to be run out of the policy biscuit tin.” Members’ bills are randomly drawn from a biscuit tin.
Netsafe chief executive Brent Carey is calling for more education from a young age about online safety. Photo / Dean Purcell
No silver bullet
Carey said other Western jurisdictions have surpassed New Zealand in building better ecosystems for online safety.
Among the features of those systems that are lacking in New Zealand: duty of care obligations for online platforms, an online safety regulator, and the power to issue codes of conduct to online hosts, rather than leaving it to the platforms themselves.
The new duty of care rules in Britain, for example, make online platforms more liable for the content they host. That includes a requirement to prevent children from accessing harmful material such as porn, or content that encourages eating disorders or suicide.
Online platforms must also shield children from “abusive or hateful” content, and manage their algorithms in a way that prevents users’ exposure to illegal or age-inappropriate content.
It’s too early to see how effective this will be in terms of limiting harmful content, or resolving the tension between freedom of expression rights and what might be considered harmful. Surely an Instagram feed full of models shouldn’t be against the law, but what if it could be seen as encouraging an eating disorder among young girls?
One of the advantages of a blanket ban for under-16s is that it avoids such questions, to which there are valid arguments on both sides.
Under the HDCA, online platforms in New Zealand aren’t liable for the content they host.
There’s been only one meaningful change to digital safety laws in New Zealand in the past decade. Made after the 2019 terror attack in Christchurch, it enables online platforms to be fined up to $200,000 if they fail to comply with a takedown notice for illegal content on their site as soon as “reasonably practicable”.
This was about creating a more practical response to illegal content – such as child pornography, or material inciting violence – rather than broadening the scope of what should or shouldn’t be online.
The latter inevitably raises free speech issues – territory the Government is reluctant to tread into. It has shelved the work on hate speech laws recommended by the Royal Commission of Inquiry into the terror attack.
The previous Government also looked into an independent online regulator, similar to what exists in Australia, but the current Government scrapped it after lobbying from free speech advocates. The Government also has no current plans to review the HDCA.
Carey is supportive of a select committee inquiry in the absence of any Government review. Just because there’s no obvious solution doesn’t mean nothing should be done, he said.
“It’s such a complex and nuanced space. We need the right tools – parents, platforms, regulation, education – they all have to be factored in.
“No country’s cracked it, but we all want to do more for safety.”
Derek Cheng is a senior journalist who started at the Herald in 2004. He has worked several stints in the Press Gallery team and is a former deputy political editor.