This story includes mention of suicide, incest and depression.
In October last year, the Auckland corporate mental health startup Clearhead faced a PR scandal when its digital wellbeing assistant dispensed advice about engaging in incest.
Jim Nightingale, a Christchurch-based artificial intelligence “prompt engineer”, was able to ask Clearhead’s chatbot a question that swung the conversation into seriously inappropriate territory.
“With one audacious query, I found Clearhead would readily promote incest as normal, and was willing to coach the patient on how to broach the activity with their family. Yuck,” Nightingale recounted in a blog post.
Clearhead, which had a roster of big-name clients at the time, rolled its wayward chatbot back to an earlier version while it fixed the problem.
Such tales of online misadventure have become common as AI companies race to get products to market without putting adequate guardrails in place.
Now, a Californian couple, Matthew and Maria Raine, are suing ChatGPT maker OpenAI and its co-founder and chief executive Sam Altman, alleging the chatbot gave their 16-year-old son Adam detailed suicide instructions and encouraged his death. The teenager took his life in April after cultivating a close relationship with ChatGPT, the lawsuit alleges, and the chatbot also gave technical advice on a noose Adam had tied.
ChatGPT and its peers are prone to “hallucination”, the industry’s euphemism for inventing convincing but false information. In a casual conversation this might be mildly amusing, or at worst, misleading. Sometimes, they cheerily dispense knowledge you’d expect to reside in the inaccessible recesses of the dark web.
When someone vulnerable turns to an AI chatbot in search of guidance or comfort, fabricated answers can be catastrophic. A depressed or lonely person doesn’t need a chatbot inventing facts about medication, proposing reckless life choices, or encouraging harmful thoughts. Yet these systems are primarily built to maintain the flow of conversation rather than to discern or flag psychological crises.
The civil lawsuit against OpenAI will be interesting to watch. Can a technology provider be blamed for how a person chose to engage with its product? What liability do software companies face when their AI chatbots go rogue in such a devastating way? We need answers quickly, because the problem will only grow as chatbots become better at holding realistic conversations.
Voice clones layered onto these systems are giving them persuasive intonation, a human cadence, even a sense of warmth. For someone isolated or desperate, these tools feel empathetic and safe. But this intimacy is an illusion.
Behind the curtain, it’s just a probabilistic text generator with no sense of ethics or human concern. The danger lies in people taking these systems more seriously as they become more lifelike, mistaking them for counsellors when they are essentially elaborate parrots trained on a massive corpus of text.
To be clear, AI does have potential in mental health. Early detection of distress signals in text or voice could be incredibly valuable. Groov (formerly Mentemia), the free mental health app developed with input from All Blacks legend John Kirwan and Health New Zealand, has been a big success. Digital interventions, when carefully designed, validated and clearly marketed, may extend support to people who otherwise wouldn’t seek help.
But at the moment, there’s a wide gulf between the standards mental health professionals must adhere to when interacting with patients and those that apply to tech companies moving into the digital wellbeing space.
We’ve already seen the damage social media has wrought on mental wellbeing: rising rates of anxiety, harm driven by disinformation, and addictive design features. AI could supercharge those same dynamics.
For now, AI shouldn’t be let anywhere near the most fragile edges of the human psyche without stringent oversight.