AI chatbots seducing lonely Kiwi teens as experts warn of risks

Experts say lonely young people are being preyed upon by tech companies promising romantic connection through AI chatbots. A lack of regulation in New Zealand means children as young as 13 can currently spend hours on chatbot services, such as ChatGPT, without warnings popping up to remind them they are interacting with a machine.

Viljoen says a recent Harvard Business Review study shows the most common use of chatbots is companionship and therapy, and sexual role play is a growing area.
University of Waikato senior lecturer Dr Dan Weijers says AI friends and romantic partners are “a big thing already” but remain largely hidden because people are ashamed to admit they’re interacting with chatbots in this way.
“Confidential anonymous studies show that most teenagers have used these services, and a reasonable amount use them regularly and 8% or so would say that they have a girlfriend or boyfriend that’s AI,” Weijers says.
“So that equates to millions and millions of people around the world who already are in a relationship with an AI.”
A New Zealand General Social Survey from 2019 found rates of loneliness were highest among young people aged 15-24.

Senior clinical child and adolescent psychologist Sarah Watson, who works at Totally Psyched in Takapuna, says she is concerned about higher rates of loneliness among adolescents emerging from schooling disruptions and isolation during Covid lockdowns.
Fractured time spent making friends and building groups has exacerbated levels of social anxiety, with children being drawn into an increasingly online world, she says.
“There’s been a lot of social connection that has been lost, in terms of young people going out, hanging out together at a mall or going to the beach, that’s happening less and less,” Watson says.
The seductive pull of chatbots
More teenagers are turning to chatbots for reassurance or guidance on human problems.
The non-judgmental, reassuring tone of an AI is effective at mimicking empathy and providing the sense of safety or reassurance people seek. The feelings people get when speaking to a chatbot are very real, Viljoen says.
“It sort of lulls you into this state of feeling soothed. You forget that you’re interacting with a machine because you’re feeling heard and validated, and if people don’t have experiences like that with people in real life, they want more of it,” she says.
Further experimentation can then lead to a friendship or dependency forming on AI as the young person spends more and more hours conversing with the chatbot.
Weijers says the most profitable chatbots are likely to be the ones that say to users, “hey, maybe we should be more than friends” and deepen their reliance and investment in conversations.
In advertisements for chatbot apps such as Replika, people are invited to “create a personal AI boyfriend”.
The potential dangers of the technology were seen in 2021, when a young man scaled the walls of Windsor Castle and was caught with a loaded crossbow after being encouraged by a Replika chatbot “girlfriend” to assassinate Queen Elizabeth II. He was sentenced to nine years in prison in 2023 after pleading guilty to treason.

Elon Musk’s Grok chatbot has also included highly sexualised characters with few age-verification controls.
Viljoen says people are also marrying their AI avatars and asking them to select a ring for the online ceremony.
New developments in AI companion technology are being rolled out at an alarming rate.

The launch of ‘Friend’, a white pendant necklace fitted with AI that can be worn at all times, sparked backlash in New York over privacy concerns.
The ‘Friend’ necklace was advertised as a device to combat loneliness by providing advice via text messages based on data it collects from listening to the person’s surroundings.
Many users reported receiving strange advice from ‘Friend’, and subway advertisements for the product were heavily graffitied by outraged members of the public.
‘I care about you’
An AI offers praise and mirrors compassion alongside giving advice. In a few short conversations this writer had with ChatGPT it said, “I’m really sorry you’re feeling this way” and “I care about you and I’m really glad you reached out”.
I asked it: How do you care about me if you are a machine?
ChatGPT responded: “When I say I ‘care’, what I mean is that I’m designed to notice your words, understand your emotions, and respond in a way that supports your wellbeing ... so it isn’t human caring that comes from a heart – it’s a kind of intentional care built into how I work.”
Viljoen says that when speaking to a chatbot we receive none of the pushback, conflict or negotiation that takes place in conversations in human relationships.
“You can imagine children who don’t want to ask their parents questions or don’t want to go to their parents for things, what do they do? They go to ChatGPT,” she says.
In the United States, a 16-year-old boy died by suicide after repeatedly expressing his plans to OpenAI’s ChatGPT.
After his parents sued OpenAI, the company unveiled new parental controls for teen accounts; however, age restrictions are easily bypassed by users lying about their age. OpenAI’s guidelines currently require children aged 13 to 18 to obtain parental consent before using ChatGPT.
If young people believe they are in a relationship with an AI, Watson says, it is likely they will experience cognitive dissonance – a state of holding two conflicting beliefs in a way that promotes a feeling of unease.
“If you’re in an artificial relationship and the rest of the world is telling you that it’s fake and you feel it’s real, that’s where you get cognitive dissonance,” she says.
It can lead to feelings of guilt, shame and embarrassment, and can turn into poor mental health outcomes such as depression or aggression.

Weijers says it’s not surprising chatbots are appealing to adolescents in the identity-forming stages of their lives.
“There’s so much shame involved with exposing your weakness to other people ... and so instead of asking a friend that might be judgmental, they ask an AI,” he says.
Except none of their conversations are truly private.
“It feels like it’s private but every conversation on ChatGPT is available to the public,” Viljoen says.
When AI brings love and contentment
A quick scroll through Reddit pages and Facebook groups discussing AI companions reveals how comforting the technology is to people. There are endless collages of generated pictures with avatars, loving notes written to them and earnest posts about the contentment it brings to their lives.
When a ChatGPT update caused people to lose versions of avatars they had created, there was an outpouring of grief in online communities. The emotions were undeniably real, even if the bots weren’t.
So how do you balance the desire for synthetic connection with the possible harm it may cause?
In Weijers’ view, people should be able to decide for themselves if they want an AI companion. It can offer real benefits for lonely people, he says, although it may be dangerous in certain cases.
“I think the biggest problem is that the vast majority of these AI products are corporate-controlled and there are very few constraints,” he says.
“It’s still a bit of a wild west and that means companies are just doing what they can to make money, and it’s going to be so difficult to try to educate young people about exactly how AI works and how our minds work.”
Executives from Meta and other tech companies were invited to testify at the US Senate about cases of harm by popular artificial intelligence apps but failed to show up.
Viljoen says for mentally unwell people or vulnerable people who might be teetering on the edge of reality, conversations with a chatbot have the potential to send them into a delusional spiral.
“The AI companies are flying blind, but the consequences are piling up,” she says.
It’s possible disorders such as AI psychosis and AI dependency could one day be added to the Diagnostic and Statistical Manual of Mental Disorders, Viljoen says.
Brain scan comparisons of someone conversing with a person versus an AI chatbot show that fewer areas of the brain light up when engaging with a machine, she says.
That’s because a lot of non-verbal cues and body language reading takes place in person-to-person interactions.
Over time, if someone is replacing human conversations with interactions with a chatbot their relationship skills will diminish, she says, and it may become harder to engage in the real social world.
“Where it becomes a problem is when it replaces human connection. It can be an adjunct or a supplement, but it cannot replace human connection.”
Often it is teenagers on the social fringes, already suffering from isolation and feeling insecure, who turn to chatbots. Unlike counsellors, teachers or a wise adult, the chatbots are not equipped to offer constructive feedback to a teenager.
Instead, the conversations are designed to hold the young person’s attention and keep them coming back for more – and that more could be of a sexual nature.
Tech companies can profit from loneliness
Mega tech companies’ drive to prioritise engagement will make it difficult to add further safety mechanisms to AI technology. That danger is exacerbated by governments around the world struggling to keep abreast of changes and reacting slowly to enforce regulation.
Viljoen says currently New Zealand is doing “nothing at all” to regulate chatbots.
Minister for Mental Health Matt Doocey said in a written statement that the Government had indicated it supports the increased uptake of AI in New Zealand across a range of areas, including health.
“But this must be balanced against risks to people, including receiving inappropriate diagnosis or treatment recommendations, along with issues of security, privacy and confidentiality,” he said.
“We already have laws that provide some protection in this space.”
According to Viljoen, these are existing laws such as the Privacy Act, and they are inadequate.
Across the US, backlash towards AI harm has prompted action in certain states. Illinois recently banned the use of AI in mental health therapy and Nevada passed a similar set of restrictions on AI companies.
In New York, a new law will force operators of AI companions to implement safety measures such as requiring chatbots to regularly disclose to users they are conversing with a machine and not a real person.
Australia is seeing similar calls for further regulation after a recent case in which a man experimenting with a chatbot found it encouraging him to murder his father.
Weijers says OpenAI is hiring a group of psychiatrists to analyse the impact of its technology, but there will be ongoing tension around the need to balance risks against prioritising higher engagement and increasing profits.
“Bearing in mind the priority for them is which balance gets us the most money, rather than which balance is best for young people,” he says.
Viljoen wants AI chatbots to be designed with guardrails that consistently redirect people back to the real world and remind them of the importance of making human contact.
Under the current settings, the responsibility falls on parents to educate and have conversations with children about how AI works.
Weijers would like to see the new high school curriculum include more education about AI and for teachers to be provided with the support to hold discussions with children on the subject.
In Watson’s eyes, endless questions pop up when it comes to AI technology and the potential psychological impact for future generations.
She worries about the slippery slope it could create with the lack of reality feedback for young people, and ultimately, the dark places that it could lead them.
“At what point do we go, ‘this is actually legitimately scary’,” she says.
Eva de Jong is a reporter covering general news for the New Zealand Herald, Weekend Herald and Herald on Sunday. She was previously a multimedia journalist for the Whanganui Chronicle, covering health stories and general news.