As artificial intelligence becomes a go-to source for quick answers, a new University of Oxford-led study on AI chatbots and medical advice is raising serious concerns about how safe it is to rely on these tools for health guidance.

The research suggests that while AI chatbots can provide medical information, their advice is often inaccurate, inconsistent, and difficult for users to interpret, potentially putting people at risk, especially when they are dealing with symptoms that need urgent care.

What the Oxford Study Found

The study involved 1,300 participants who were given realistic health scenarios, such as experiencing a severe headache or being a new mother feeling constantly exhausted. Participants were divided into two groups: one group used AI chatbots to understand their symptoms and decide on next steps, while the other did not.

Researchers then assessed whether participants correctly identified what might be wrong and whether they made appropriate decisions, such as seeing a GP or visiting A&E.

The results were troubling. People who relied on AI frequently failed to recognise the severity of their condition and were often unsure about when to seek professional medical help.

Why Chatbot Advice Can Go Wrong

According to the researchers, one major issue is that people don't always know what to ask. The study found that chatbot responses varied widely depending on how questions were phrased; even small changes in wording could lead to completely different answers.

The AI often produced a mix of helpful and misleading information, leaving users to decide which advice mattered. Many participants struggled to distinguish reliable guidance from unnecessary or confusing details.

As one of the study's authors explained, when an AI lists multiple possible conditions, users are left guessing which one applies to them, and that is precisely when mistakes happen.

A Dangerous Gap in Symptom Interpretation

Dr Rebecca Payne, lead medical practitioner on the study, warned that asking chatbots about symptoms could be "dangerous", particularly when users delay seeking professional care based on AI responses.

Dr Adam Mahdi, the study's senior author, noted that while AI can supply medical facts, people tend to reveal information gradually and leave out key details, something chatbots struggle to manage effectively.

Bias, Data, and the Limits of AI

Experts also point out that chatbots are trained on existing medical data, which means they may repeat long-standing biases baked into healthcare systems. As one psychiatry expert put it, a chatbot is only as accurate as human clinicians, and humans are far from perfect.

That said, not everyone is pessimistic.

What Comes Next for AI in Healthcare

Digital health experts argue the technology is evolving. Health-specific versions of general AI chatbots have recently been released by major developers, and these could perform differently in future studies.

The consensus among experts is clear: AI in healthcare should focus on improvement, regulation, and guardrails, not on replacing doctors. Used responsibly, it may support healthcare, but without safeguards it risks doing more harm than good.