Summary

AI is increasingly used in mental health but poses risks if not designed with strong clinical safeguards. Experts warn that chatbots may offer comfort without real help, delaying treatment. With clinician oversight, AI can triage cases, support therapists, and sustain engagement, but human judgment remains crucial, especially in crises and suicide prevention.