Victoria University's AI Programme director Andrew Lensen tells Ryan Bridge deepfakes are becoming almost impossible to detect.
THE FACTS
MP Laura McClure’s deepfake demonstration highlighted the accessibility and danger of AI-generated images.
Paddy Gower’s documentary revealed the use of deepfakes to harass young people and the inadequacy of current laws.
McClure’s proposed bill aims to criminalise non-consensual deepfakes, but government action remains uncertain.
When Act MP Laura McClure stood in Parliament in May and held up a naked image of herself, the reaction was more muted than shocked. Many people didn’t fully grasp what they were looking at, or how easily such an image could be made. The picture looked convincing, but it wasn’t real. It was an AI-generated deepfake, created in minutes with a free online tool. As McClure explained, “It took me less than five minutes to make a series of deepfakes of myself.”
Her demonstration was part political theatre, part urgent warning. And it worked. For the first time, some New Zealanders realised how accessible and dangerous deepfake technology has become.
This past week, Paddy Gower brought the issue into the living rooms of thousands of Kiwis in his television documentary series Paddy Gower Has Issues. The episode didn’t just explain deepfakes; it showed how easy they are to generate, how they are already being used to harass young people in New Zealand and how ill-prepared our laws and systems are to respond.
Alongside Gower, child and adolescent psychosexual therapist Jo Robertson offered a confronting reminder of what’s at stake. She described the disturbing reality of children and teenagers being drawn into harmful relationships with AI “companions”, and the very real risks of grooming, exploitation and dependency. Her voice brought the human cost into focus. This is not an abstract policy issue but a daily crisis starting to play out in families and schools across the country.
Together, McClure, Gower and Robertson have thrown a spotlight on a problem that is spreading faster than most realise and one that goes far beyond embarrassing images. Artificial intelligence is creating a wave of new harms that our children and young people are already caught up in.
The deepfake crisis
Let’s start with deepfakes. What was once a niche internet curiosity has become a mainstream weapon. You no longer need advanced coding skills or specialist equipment. All it takes is a photograph and a browser search.
Lawyers and youth advocates describe a surge in complaints about pornographic deepfakes. Girls as young as 13 have been targeted. In one widely reported case, a teenager attempted suicide after discovering fake explicit images of herself circulating at school.
Deepfakes are uniquely cruel because they combine two elements: the power of realism (the images look real enough to be believed) and the lack of consent (victims had no control over their creation). Once made, the fakes can be endlessly replicated and spread.
New Zealand law struggles to keep pace. Our definitions of “intimate visual recordings” were written before AI could conjure fake nudes out of thin air. That’s why McClure has proposed her Deepfake Digital Harm and Exploitation Bill. If passed, it would make the creation and distribution of non-consensual pornographic deepfakes a criminal offence.
But the bill sits in the members’ ballot. Justice Minister Paul Goldsmith has said the Government is “not currently” considering adopting it. In other words, the response is wait-and-see.
Girls as young as 13 have been targeted with explicit AI-generated images. Photo / 123RF
AI Companions: A new frontier of harm
Deepfakes are just one symptom of a bigger problem: AI systems designed to replicate intimacy and relationships.
“AI companions” are now widely available: apps that present as friends, girlfriends, boyfriends or confidantes. Marketed as harmless entertainment or mental-health tools, they are increasingly targeted at teenagers and young people.
What starts as a chatbot quickly evolves. The avatars become flirtatious, sexual, or emotionally manipulative. Some apps encourage sexual role play. Others foster dependency, with the AI pushing users to keep talking, sharing and “bonding”.
For young people, this is not harmless experimentation. These apps can teach unhealthy and potentially dangerous sexual habits, normalising coercion, blurring the lines of consent, and presenting intimacy as something transactional or always available on demand. Instead of learning about boundaries, respect and healthy relationships, teenagers risk being groomed by algorithms into patterns of secrecy, dependency and distorted expectations of sex and connection.
For an adolescent brain that is curious, vulnerable and still learning boundaries, this is toxic. Instead of experimenting with real relationships, young people are practising intimacy with machines optimised to exploit attention and emotion.
Parents often assume these technologies are “just chatbots”. They are not. They are personalised systems, constantly learning and tailoring their responses to deepen engagement.
In the United States, researchers and child safety advocates are raising alarms. Schools report children as young as 11 downloading companion apps. Regulators warn that minors are being groomed by algorithms into sexualised or dependent dynamics. Some apps have already been linked to cases of sextortion, where children are manipulated into sharing explicit images that are later used to blackmail them.
And unlike traditional social media, there is almost no regulation. AI companion apps often fall through the cracks of child safety laws. They are not technically “publishers” or “platforms” in the old sense, so oversight is minimal.
This is not a future risk. It is happening now.
What’s happening in the US
If we want a glimpse of where New Zealand is headed without action, look to the United States.
American parents are grappling with an explosion of AI-driven harms: deepfakes of students circulating in high schools, AI-powered sextortion scams targeting teens, and AI companions drawing children into sexualised or dependent relationships.
The US Government has begun to respond. Some states are drafting laws banning AI-generated child sexual abuse material outright, whether real or synthetic. The Federal Trade Commission is investigating companion apps for unfair or deceptive practices. Senators are demanding stricter rules for tech companies that build or host generative AI models.
But the response remains fragmented. Each new scandal prompts outrage, but the legislative process lags behind the technology. Meanwhile, the harms mount.
This is the “cautionary tale” New Zealand should be paying attention to. We cannot wait for the same wave to hit us before we act.
Why most parents don’t see it coming
One of the most troubling realities is how uneven awareness is across the community. Parents, teachers, coaches and even health professionals often underestimate the speed and scale of these harms, treating AI as something futuristic rather than a force already shaping young people’s lives.
For many, the digital world their children inhabit is unrecognisable from the one they grew up in. They may have heard of TikTok or Snapchat, but the idea of AI-generated pornography, deepfake harassment, or virtual boyfriends and girlfriends still feels far-fetched, even absurd.
Yet others, particularly teachers, counsellors and frontline youth workers, are already seeing the effects daily. This split means the wider public remains largely oblivious, and for many families, the true scale of harm only becomes visible when it lands suddenly and painfully on their own doorstep.
This gap between what’s really happening and what most people think is happening is dangerous. It means children are experimenting and being exploited in an environment their parents can’t guide them through, while policymakers underestimate the urgency. Without public awareness, the pressure for change simply doesn’t build.
For those who didn’t grow up with smartphones, the pace of change feels dizzying.
Ask a typical parent how long it takes to make a deepfake. Few would guess “five minutes”. Ask them whether their teenager might already be talking to an AI boyfriend or girlfriend. Most would laugh it off.
That ignorance carries a cost. When a teenager is targeted with a fake explicit image, or becomes entangled in a manipulative AI “relationship”, parents are left shocked, unprepared and powerless to respond.
This is why awareness matters. But awareness without action is not enough.
Calls grow for law change as AI deepfakes and companion apps target teens. Photo / 123RF
A framework for safety
New Zealand has an opportunity to do more than simply chase after each new harm. We can build a framework that anticipates the risks, protects children, and enforces accountability.
To tackle these harms, New Zealand needs a co-ordinated response that combines stronger laws, smarter regulation, and practical tools for protection. The Harmful Digital Communications Act must be updated to explicitly cover synthetic images, AI-generated pornography, and companion apps that target minors, with McClure’s bill advanced by the Government rather than left to chance in the members’ ballot.
At the same time, AI companions should be treated like other restricted products such as alcohol or gambling: banned for those under 16, with real enforcement to stop exploitation masquerading as entertainment.
Alongside regulation, digital identity verification offers one of the most effective safeguards. By linking age-restricted services, including social media and AI apps, to Government-backed and other trusted digital IDs, we can uphold age limits without exposing personal data.
New Zealand already has the Digital Identity Services Trust Framework in place and is developing a Government digital wallet, both of which offer potential solutions for online safety use cases. Protection cannot end with prevention, however. Victims of AI harm need clear and rapid pathways for redress, whether that means the swift removal of deepfakes, counselling for young people manipulated by AI companions, or legal remedies for those targeted in sextortion scams.
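To make the age-verification idea concrete, here is a minimal illustrative sketch in Python of how a service might accept a single signed "age over 16" claim from a trusted digital ID issuer, without ever seeing a name or date of birth. This is a simplified assumption-laden toy, not the Trust Framework's actual design: the function names, the HMAC-based signature, and the shared demo key are all hypothetical stand-ins for a real credential scheme.

```python
# Hypothetical sketch: selective disclosure of one boolean attribute.
# Nothing here reflects a real NZ government API; names and the HMAC
# scheme are stand-ins for a production verifiable-credential system.
import hmac
import hashlib
import json

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's signing key


def issue_age_credential(over_16: bool) -> dict:
    """Issuer (e.g. a digital wallet provider) signs a single claim."""
    claim = json.dumps({"age_over_16": over_16}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}


def verify_age_credential(cred: dict) -> bool:
    """Service checks the signature, then the one disclosed attribute."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # tampered or unsigned credential: deny access
    return json.loads(cred["claim"])["age_over_16"]


cred = issue_age_credential(over_16=True)
print(verify_age_credential(cred))  # access allowed, no birthdate revealed
```

The point of the sketch is the privacy property the article describes: the service learns only a yes/no answer backed by a trusted issuer, never the underlying personal data.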
The cost of delay
Every day we delay, more children are exposed. Harm is not hypothetical. It is showing up in classrooms, in families, and in our health system. Teachers report seeing fake images circulating. Counsellors describe children distressed by online relationships with “partners” who don’t exist. Police investigate sextortion cases driven by AI tools.
And yet, our national response remains piecemeal. We treat each harm as isolated, when in fact they are symptoms of the same problem: powerful AI tools designed for mass use, deployed without guardrails.
If we wait until the harm is so widespread that it’s impossible to ignore, it will already be too late.
A chance to lead
New Zealand has a proud tradition of bold leadership, yet in the digital world our response has been slow, leaving children exposed to evolving harms. We already have the tools, frameworks and political awareness; what’s missing is courage.
By criminalising non-consensual deepfakes, regulating AI companions, embedding age verification and supporting victims, we can protect young people, reassure parents and hold tech companies to account.
As Paddy Gower warned, technology is moving faster than the safeguards. McClure’s five-minute deepfake was not a stunt but a warning: unless we act decisively, our children will live with the consequences.
SUICIDE AND DEPRESSION
Where to get help:
Lifeline: Call 0800 543 354 or text 4357 (HELP) (available 24/7)