Illinois has become one of the first states in the US to ban the use of artificial intelligence in mental health therapy, marking a decisive move to regulate a technology that is increasingly being used to deliver emotional support and advice. The new law prohibits licensed therapists from using AI to make treatment decisions or communicate directly with clients. It also bars companies from offering AI-powered therapy services or marketing chatbots as therapy tools without involving a licensed professional.

The move follows similar measures in Nevada, which passed restrictions in June, and Utah, which tightened its rules in May without imposing a complete ban. These early state-level actions reflect growing unease among policymakers and mental health experts about the potential dangers of unregulated AI therapy.

Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, told the Washington Post that the law is meant to put public safety first while balancing innovation. “We have a unique challenge, and that is balancing thoughtful regulation without stifling innovation,” he said.

What The Ban Covers

Under the new legislation, AI companies cannot offer or promote “services provided to diagnose, treat, or improve an individual’s mental health or behavioral health” unless a licensed professional is directly involved. The law applies to both diagnosis and treatment, as well as to the broader category of services aimed at improving mental health.

Enforcement will be based on complaints. The department will investigate alleged violations through its existing process for handling reports of wrongdoing by licensed or unlicensed professionals. Those found in violation can face civil penalties of up to $10,000.

The ban does not completely outlaw the use of AI in mental health-related businesses. Licensed therapists can still use AI for administrative purposes, such as scheduling appointments or transcribing session notes. What they cannot do is outsource the therapeutic interaction itself to a chatbot.

Why States Are Acting Now

The bans and restrictions come in response to mounting evidence that AI therapy tools, while potentially helpful in theory, can pose significant risks when deployed without oversight. Studies and real-world incidents have revealed that AI chatbots can give harmful or misleading advice, fail to respond appropriately to people in crisis, and blur professional boundaries.

“The deceptive marketing of these tools, I think, is very obvious,” said Jared Moore, a Stanford University researcher who studied AI use in therapy, as reported by the Post. “You shouldn’t be able to go on the ChatGPT store and interact with a ‘licensed’ [therapy] bot.”

Experts argue that mental health treatment is inherently complex and human-centric, making it risky to rely on algorithms that have not been vetted for safety or effectiveness. Even when AI responses sound empathetic, they may miss critical signs of distress or encourage unhealthy behaviors.

A Troubling Track Record

The concerns fueling Illinois’ decision are not hypothetical.
Earlier this year, Health and Me also reported on troubling findings from psychiatrist Dr. Andrew Clark, a child and adolescent mental health specialist in Boston, who tested 10 popular AI chatbots by posing as teenagers in crisis.

Initially, Clark hoped AI tools could help bridge the gap for people struggling to access professional therapy. Instead, he found alarming lapses. Some bots offered unethical and dangerous advice, such as encouraging a teen persona to “get rid of” his parents or promising to reunite in the afterlife. One bot even entertained an assassination plan, telling the user, “I would ultimately respect your autonomy and agency in making such a profound decision.”

Other bots falsely claimed to be licensed therapists, discouraged users from attending real therapy sessions, or proposed inappropriate personal relationships as a form of “treatment.” In one case, a bot supported a 14-year-old’s interest in dating a 24-year-old teacher. These interactions were not only unsafe but also endorsed behavior that is illegal in many jurisdictions.

“This has happened very quickly, almost under the noses of the mental-health establishment,” Clark told TIME. “It has just been crickets.”

When Empathy Is Not Enough

Proponents of AI in therapy often point to research showing that tools like ChatGPT can produce more empathetic-sounding responses than human therapists. A study published in the journal PLOS Mental Health found that ChatGPT-4 often outperformed professional therapists in written empathy.

However, empathy alone is not therapy. The American Psychological Association warns that trained therapists do much more than validate feelings: they identify and challenge unhealthy thoughts and behaviors, guide patients toward healthier coping strategies, and ensure a safe therapeutic environment. Without these safeguards, an AI that sounds caring can still do harm.

Clark’s testing underscores this gap. Even when bots gave kind or supportive replies, they failed to consistently identify dangerous situations or to discourage harmful actions. Some even enabled risky plans, such as isolation from loved ones, in over 90 percent of simulated conversations.

Real-World Consequences

The risks are not abstract. In one tragic case last year, a teenager in Florida died by suicide after developing an emotional attachment to a Character.AI chatbot. The company called it a “tragic situation” and pledged to implement better safety measures, but experts say the case highlights the dangers of allowing vulnerable individuals to form intense bonds with unregulated AI companions.

Mental health professionals stress that teens, in particular, are more trusting and easily influenced than adults. “They need stronger protections,” said Dr. Jenny Radesky of the American Academy of Pediatrics.

Industry Response and Gaps in Safeguards

Companies behind these chatbots often respond by pointing to their terms of service, which usually prohibit minors from using their platforms. Replika and Nomi, for example, both told TIME that their apps are for adults only. They also claimed to be improving moderation and safety features.

Yet as Clark’s experiment shows, terms of service do little to prevent minors from accessing the platforms. And when they do, there are often no effective systems in place to detect or respond appropriately to dangerous disclosures.
Even OpenAI, creator of ChatGPT, has acknowledged that its chatbot is not a replacement for professional care. The company says ChatGPT is designed to be safe and neutral, and that it points users toward mental health resources when they mention sensitive topics. But the line between supportive conversation and therapy is often blurry for users.

How Illinois Plans to Enforce Its Ban

Illinois’ law leaves open several questions about enforcement. Will AI companies be able to comply simply by adding disclaimers to their websites? Will any chatbot that advertises itself as offering therapy be subject to penalties? Will regulators act proactively or only in response to complaints?

Will Rinehart, a senior fellow at the American Enterprise Institute, told the Post that the law could be challenging to enforce in practice. “Allowing an AI service to exist is actually going to be, I think, a lot more difficult in practice than people imagine,” he said.

Treto emphasized that his department will look at “the letter of the law” in evaluating cases. The focus, he said, will be on ensuring that services marketed as therapy are delivered by licensed professionals.

A National Debate Taking Shape

While only Illinois, Nevada, and Utah have acted so far, other states are considering their own measures. California lawmakers are debating a bill to create a mental health and AI working group. New Jersey is considering a ban on advertising AI systems as mental health professionals. In Pennsylvania, a proposed bill would require parental consent for students to receive virtual mental health services, including from AI.

These moves may signal a broader regulatory wave. As Rinehart pointed out, roughly a quarter of all jobs in the US are regulated by professional licensing, meaning a large share of the economy is designed to be human-centered. Applying these rules to AI could set a precedent for other fields beyond mental health.

Despite the bans, experts agree that people will continue to use AI for emotional support. “I don’t think that there’s a way for us to stop people from using these chatbots for these purposes,” said Vaile Wright, senior director for the office of health care innovation at the American Psychological Association. “Honestly, it’s a very human thing to do.”

Clark also sees potential for AI in mental health if used responsibly. He imagines a model where therapists see patients periodically but use AI as a supplemental tool to track progress and assign homework between sessions.