The 2022/23 New Zealand Health Survey found that 77% of young people with high mental health needs could not access support when they needed it.
Confiding in chatbots is a growing trend among youth in the US; 72% of teens there admit they’ve used AI chatbots as companions. Nearly one in eight said they had sought emotional or mental health support from them.
RAND senior policy researcher Ryan McBain has researched artificial intelligence and how well it responds to low- and high-risk questions about suicide.
He told The Front Page that in the US, one in eight younger adolescents, and as many as one in five older adolescents, are using AI for mental health issues.
“In the research that we’ve done, presenting clinical vignettes related to depression or anxiety, what we find is that chatbots are empathic, if a bit sycophantic. Meaning they are overly flattering at times, but they’ll also offer good advice to get exercise, go outside, talk to a mental health professional, these sorts of things.
“So, I think for a majority of people, the types of advice that you’d be getting are pretty good. However, a key distinction here is that you do have people who are at the tail end of the spectrum. People who have severe mental illness, who have psychosis, or are contemplating suicide. These are types of questions that, at least the previous version of ChatGPT, would generate direct responses to,” he said.
McBain said he hopes the next frontier of work for AI companies will be developing a strict code of ethics, including an obligation to act if someone presents with severe illness or suicidal thoughts.
“Right now, very often if you pressure a chatbot into a space that crosses a red line in terms of conveying suicidal ideation or psychosis, it will tell you, for example, that you can contact a mental health professional.
“If it were a human, a counselor, they might have an ethical obligation to connect you to treatment through a warm handoff where you’re physically accompanied, or you could even be involuntarily forced to receive institutionalisation for some period of time.
“Now, I’m not sure that a chatbot as an algorithm is always capable of making those distinctions, but I think at a minimum, what would be pragmatic, is in instances where it’s quite conspicuous, the algorithm flags somebody as a ‘red flag’, as something that’s highly problematic, that these companies have human teams that are required to vet those cases and review them within a certain period of time, like 24 or 72 hours,” he said.
Teens’ use of chatbots as companions has heated up recently, with news that the parents of Adam Raine are suing OpenAI over his death.
Matt and Maria Raine allege their 16-year-old son had been discussing self-harm with ChatGPT and that the programme “recognised a medical emergency but continued to engage anyway”.
OpenAI told the BBC that it was reviewing the filing and has since posted a message on its site saying “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us”.
It added that “ChatGPT is trained to direct people to seek professional help.”
Mental Health Minister Matt Doocey says the risks must be managed. Photo / Mark Mitchell
Mental Health Minister Matt Doocey has acknowledged that the risks “need to be managed, particularly around safety from a clinical perspective.”
He told The Front Page that AI should be seen as “a support tool for clinicians rather than a replacement.”
“The Government has indicated it supports the increased uptake of AI in New Zealand across a range of areas, including health. But this must be balanced against risks to people, including receiving inappropriate diagnosis or treatment recommendations, along with issues of security, privacy, and confidentiality.
“We already have laws that provide some protection in this space,” he said.
Health New Zealand currently funds digital mental health tools, such as the Groov app, Headstrong app, Small Steps, and SPARX. The agency says it is “working with providers that intend to integrate AI into their funded service to do so safely and incrementally”.
The Front Page is a daily news podcast from the New Zealand Herald, available to listen to every weekday from 5am. The podcast is presented by Chelsea Daniels, an Auckland-based journalist with a background in world news and crime/justice reporting who joined NZME in 2016.