A U.S. medical journal has issued a cautionary note about relying on artificial intelligence for health guidance after a 60-year-old man developed a rare medical condition following advice he reportedly sought from ChatGPT about eliminating table salt from his diet.

Case Reported in Leading Medical Journal

The case, published in the Annals of Internal Medicine by researchers from the University of Washington, describes how the man developed bromism, also known as bromide toxicity, after substituting sodium chloride (table salt) with sodium bromide. Bromism was a recognized condition in the early 20th century and is believed to have accounted for nearly one in ten psychiatric admissions at the time.

The man told doctors he had read about the negative effects of sodium chloride and consulted ChatGPT for alternatives. He said the chatbot mentioned that chloride could be replaced with bromide, albeit “likely for other purposes, such as cleaning.” Despite this, he began taking sodium bromide regularly for three months.

Sodium bromide, historically used as a sedative, has largely been abandoned in modern medicine due to safety concerns.

Unclear What Advice Was Originally Given

The researchers noted that they could not access the patient’s original ChatGPT conversation log, making it impossible to verify the exact wording or context of the advice. However, when the authors themselves asked ChatGPT about replacing chloride, the AI’s answer also mentioned bromide, provided no explicit health warning, and did not question the intent behind the inquiry.

“In our view, a medical professional would have sought clarification and avoided suggesting bromide for human consumption,” the authors wrote.

AI Risks in Medical Decision-Making

The authors warned that ChatGPT and other AI tools can produce scientifically inaccurate information, fail to critically evaluate potential risks, and unintentionally spread misinformation. They stressed that while AI may be useful in connecting scientific concepts with the public, it can also promote “decontextualized information” that could lead to preventable adverse health outcomes.

They recommended that medical professionals consider whether patients have been influenced by AI when assessing symptoms and history.

Patient’s Hospitalization and Symptoms

According to the report, the man arrived at a hospital believing his neighbor was trying to poison him. He presented with multiple dietary restrictions and paranoia, refusing water offered by staff despite being extremely thirsty. Within 24 hours, he attempted to leave the hospital, prompting his involuntary admission for psychiatric care.

He was diagnosed with psychosis and treated accordingly. Once stable, he reported additional symptoms characteristic of bromism, including facial acne, persistent thirst, and insomnia.

OpenAI’s Response and Recent Updates

The case predates the recent launch of ChatGPT’s latest version, GPT-5. OpenAI claims the updated model is better equipped to answer health-related queries and proactively flag potential concerns, including serious physical or mental health risks.
The company maintains that ChatGPT is not a substitute for professional medical care, a stance reflected in its user guidelines. However, the University of Washington authors argue that even with advancements, AI tools should never be relied upon as primary sources for medical decision-making.

The report underscores a growing concern within the medical community: as AI becomes more integrated into daily life, patients may increasingly turn to chatbots for health advice without consulting qualified professionals. Experts say this case is a stark reminder of the dangers of self-medicating based on unverified or misunderstood online information.