The news: Lyra Health is debuting a generative AI (genAI) chatbot for mild to moderate mental health challenges like burnout, sleep issues, or stress. The Lyra AI “clinical-grade” chatbot includes a risk-flagging system that identifies mental health situations needing immediate attention and connects users to a 24/7 care team.
How we got here: The rise of AI chatbot use, particularly among Gen Z and Gen Alpha consumers, has exposed significant risks of psychological harm.
This rapid adoption is now linked to multiple lawsuits filed against AI companies concerning psychological injuries and even suicides involving children and teens.
- Some states are adopting regulations around the use of AI therapy, including Illinois, which recently became the first state to ban the use of AI to provide mental health therapy or advice.
- OpenAI added new ChatGPT mental health safeguards in September for people in crisis and teens. The company was sued in August by the parents of a 16-year-old boy who died by suicide, alleging ChatGPT helped him explore suicide methods.
- Several online telemental health platforms also offer AI chatbots, including Wysa, Woebot, and Headspace.
Why it matters: Budget-constrained and tech-savvy consumers are already using genAI chatbots for mental wellness and ad hoc therapy.
- One in five (21%) full-time Gen Z workers use ChatGPT regularly, as do 15% of millennials, per a Resume.org survey in May.
- 37% of workers have personal conversations with it, per the survey.
- And among full-time workers who say they use ChatGPT, 20% use it to talk about mental health or emotional struggles, and 18% vent to ChatGPT about things that are bothering them.
Our take: AI chatbots for mental health are expected to expand as rising demand for therapy services continues to outstrip affordable access. Lyra’s rollout shows how AI therapy could be deployed responsibly: limiting chatbot use to lower-risk mental health issues and emphasizing safety.
In contrast, general-purpose chatbots used for mental health, which the American Psychological Association warns against, highlight the risks of unregulated AI therapy.
To ensure AI-powered mental health tools build trust, health tech marketers should:
- Ground design in mental health science. Use evidence-based approaches to ensure chatbot interactions are clinically sound and ethically aligned.
- Implement strong guardrails and alerts. Integrate clear warnings, escalation protocols, and safeguards to prevent misuse or misinformation.
- Maintain human oversight. Keep qualified professionals involved to monitor outcomes and intervene when AI limitations could affect care.