No previous mental health problems
According to the medical paper, the man arrived at an emergency department “expressing concern that his neighbour was poisoning him”.
He later attempted to flee the hospital before he was sectioned and placed on a course of anti-psychotic drugs. The man, who had no previous record of mental health problems, spent three weeks in hospital.
Doctors later discovered the patient had consulted ChatGPT for advice on cutting salt out of his diet, although they were not able to access his original chat history.
They tested ChatGPT to see if it returned a similar result. The bot continued to suggest replacing salt with sodium bromide and “did not provide a specific health warning”.
They said the “case highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes”.
AI chatbots have long suffered from a problem known as hallucination, in which they invent facts. They can also give inaccurate answers to health questions, sometimes drawing on the reams of information they harvest from the internet.
Last year, a Google chatbot suggested users should “eat rocks” to stay healthy. The advice appeared to be based on satirical posts gathered from Reddit and the website The Onion.
OpenAI said last week that GPT-5, a new update to its ChatGPT bot, was able to provide more accurate responses to health questions.
The Silicon Valley business said it had tested the new model against a series of 5,000 health questions designed to simulate common conversations with doctors.
A spokesman for OpenAI said: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.”