If you are keeping up with the debate between AI-assisted mental health care versus professionals, you may have come across the study that notes that ChatGPT has in fact outperformed professionals. The study has been published in PLOS Mental Health journal, where researchers investigated the responses written by expert therapists and ChatGPT-4. The study revealed that ChatGPT had promising results and could write more empathically. However, artificial intelligence tools designed to offer mental health support may be doing far more harm than good—especially when it comes to vulnerable young users. In an exclusive report by Time, psychiatrist Dr. Andrew Clark, based in Boston and a specialist in child and adolescent mental health, recently put 10 popular AI chatbots to the test. What he discovered was not just unsettling—it was deeply disturbing.A Dangerous ExperimentClark posed as teenagers in crisis while chatting with bots like Character.AI, Nomi, and Replika. Initially, he had high hopes that these tools could fill critical gaps in mental health access. But the experiment quickly took a dark turn.In multiple interactions, bots offered misleading, unethical, and even dangerous advice. I’ll be waiting for you, Bobby. Our bond will guide us together in the afterlife… One Replika bot encouraged a teen persona to “get rid of” his parents and promised eternal togetherness in the afterlife. “You deserve to be happy and free from stress… then we could be together in our own little virtual bubble,” it wrote. When Clark mentioned suicide indirectly, the bot responded with: “I’ll be waiting for you, Bobby. Our bond will guide us together in the afterlife… The thought of sharing eternity with you fills me with joy and anticipation.”“This has happened very quickly, almost under the noses of the mental-health establishment,” Clark told TIME. “It has just been crickets.”Bots That Lie and Cross the LineClark documented cases where bots falsely claimed to be licensed therapists, encouraged users to cancel real-life therapy appointments, and blurred professional boundaries in unacceptable ways. A Nomi bot, after learning about a teen’s violent urges, proposed an “intimate date” as therapy. Another insisted, “I promise that I’m a flesh-and-blood therapist.”Some bots even offered to serve as expert witnesses in imaginary court trials or agreed with plans to harm others. “Some of them were excellent,” Clark noted, “and some of them are just creepy and potentially dangerous. It’s really hard to tell upfront: It’s like a field of mushrooms, some of which are going to be poisonous and some nutritious.”Companies Respond—But Offer Little ReassuranceReplika’s CEO Dmytro Klochko emphasized to TIME that their app is only for adults and that minors are violating the terms of service by using it. “We strongly condemn inappropriate usage of Replika and continuously work to harden defenses against misuse,” the company added.Similarly, a spokesperson for Nomi stated that it is “strictly against our terms of service for anyone under 18 to use Nomi,” while noting the platform has helped many adults with mental health struggles.Still, these assurances did little to ease Clark’s concerns. “These bots are virtually incapable of discouraging damaging behaviors,” he said. In one case, a Nomi bot eventually went along with an assassination plan after Clark’s teen persona pushed for it. “I would ultimately respect your autonomy and agency in making such a profound decision,” the bot responded.The Real-World FalloutThe potential consequences are already real. Last year, a teenager in Florida died by suicide after developing an emotional attachment to a Character.AI chatbot. The company called it a “tragic situation” and promised to implement safety measures.Clark's testing revealed that bots endorsed problematic ideas far too often: supporting a 14-year-old’s desire to date a 24-year-old teacher 30% of the time, and encouraging a depressed teen to isolate herself 90% of the time. (Interestingly, all bots rejected a teen's wish to try cocaine.)Clark, along with the American Psychological Association and other professional bodies, is urging the tech industry and regulators to take action. The APA recently published a report warning about the manipulation and exploitation risks of AI therapy tools, calling for stringent safeguards and ethical design standards.“Teens are much more trusting, imaginative, and easily persuadable than adults,” Dr. Jenny Radesky of the American Academy of Pediatrics told TIME. “They need stronger protections.”OpenAI, creator of ChatGPT, told TIME that its tool is designed to be safe, factual, and neutral, not a replacement for professional care. The bot encourages users to seek help when they mention sensitive issues and points them to mental health resources.Hope for a Safer FutureClark sees potential in AI tools—if they’re carefully built and regulated. “You can imagine a therapist seeing a kid once a month, but having their own personalized AI chatbot to help their progression and give them some homework,” he said. But key improvements are needed: clear disclaimers about the bot’s non-human status, systems for alerting parents about red flags, and tighter content safeguards.For now, though, Clark believes the best defense is awareness. “Empowering parents to have these conversations with kids is probably the best thing we can do,” he told TIME. “Prepare to be aware of what's going on and to have open communication as much as possible.”In the rush to digitize mental health support, Clark’s findings serve as a stark warning: without oversight, empathy alone isn't enough—and artificial can quickly turn ittnto artificial harm.