In a serious caution to the public, Dr Uma Kumar, Head of Rheumatology at AIIMS New Delhi, has warned against using AI chatbots such as ChatGPT for medical self-diagnosis. As reported by Hindustan Times, she issued the warning while speaking to the media after a recent case at the institute exposed the risks of acting on automated health advice.

The concern followed an incident in which a patient developed severe internal bleeding after treating back pain based on suggestions generated by an AI chatbot. The patient consumed non-steroidal anti-inflammatory drugs (NSAIDs) without consulting a doctor or undergoing basic medical tests.

When AI Advice Turns Dangerous

According to doctors at AIIMS, the patient relied on an AI tool to manage persistent back pain instead of seeking clinical care. The chatbot recommended commonly used painkillers, which the patient purchased and took independently.

As Hindustan Times noted, the AI system had no access to the patient's medical history and no way of knowing their risk of stomach and intestinal complications. What appeared to be a routine solution resulted in a life-threatening episode of internal bleeding.

Doctors say this reflects a growing pattern in which quick online answers are replacing medical evaluation, even for drugs that are widely available over the counter.

Why Medical Diagnosis Is Not Data Matching

Dr Kumar explained that medical diagnosis follows a structured process known as diagnosis by exclusion. Doctors rule out possible causes through examinations, laboratory tests, imaging, and patient history before deciding on treatment.

An AI model, however, works by identifying patterns in data. It cannot examine a patient, detect physical warning signs, or judge whether a symptom points to a deeper problem. In this case, proper investigations would likely have revealed a high risk of bleeding, a step that was bypassed entirely.

The Risk of Confident but Incorrect Guidance

Medical experts are increasingly concerned about what are often called AI hallucinations, where chatbots present information with confidence despite gaps or inaccuracies.

While platforms such as ChatGPT include disclaimers, their tone can appear authoritative, particularly to someone in pain. As highlighted by Hindustan Times, the recommendation to use NSAIDs was not unusual in general practice, but for this patient, it proved dangerous.

Without a doctor to check for contraindications or underlying conditions, even a common suggestion can lead to serious harm.

Doctors Call for Caution and Clearer Oversight

The incident has renewed debate over how AI platforms should handle health-related queries. AIIMS doctors are urging the public to treat online tools as sources of general information rather than personal treatment guides.

Experts believe AI can assist healthcare in limited roles, such as research support or administrative tasks, but should never replace professional diagnosis or supervision.

There are also calls for stronger public awareness and clearer regulation to prevent similar incidents. Doctors continue to stress that medical judgment, built on examination and evidence, cannot be replaced by algorithms.