
They say the worst thing you can do is Google your symptoms when you're unwell, but turning to ChatGPT for medical advice could also have some pretty dire consequences.
A 60-year-old man discovered this the hard way when he ended up in hospital after poisoning himself on the AI chatbot's advice.
The man, whose case is detailed in the American College of Physicians Journals, was concerned about the amount of salt in his diet and the negative impact it could be having on his health, so he decided to consult ChatGPT about cutting out sodium chloride.
The AI bot suggested he start consuming bromide instead, which can be found in small amounts in seawater and in certain minerals. It was previously used as an ingredient in a number of pharmaceutical products; however, it has since been found to be toxic to humans in larger quantities.

Unaware of this, the man began replacing salt with bromide he ordered from the internet, and after about three months he started experiencing severe paranoia and hallucinations, which led to him being hospitalised.
The man, who had no previous history of poor mental or physical health, initially suspected his neighbour of poisoning him. However, after he was treated with fluids and electrolytes, he reported other symptoms, including new acne and cherry angiomas, leading doctors to conclude he was experiencing bromism.
Bromism, which is caused by excessive exposure to bromide, can cause neurological symptoms like seizures, tremors, confusion and even comas. It can also cause anxiety, depression, psychosis, fatigue and anorexia, among other symptoms.
"Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet," the case report explained.
He replaced table salt with 'sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning'.

After three weeks in hospital, the man was discharged, and the authors of the case report have warned others not to make the same mistake of taking medical information from AI sources such as ChatGPT.
They wrote: "It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation."
Meanwhile, OpenAI, the developer behind ChatGPT, notes in its Terms of Use that information 'may not always be accurate'.
The terms state: "You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice."
The company’s Service Terms also say: “Our Services are not intended for use in the diagnosis or treatment of any health condition.”
LADbible has contacted OpenAI for further comment.
Topics: Technology, Health, AI, Mental Health