The news: OpenAI on Monday detailed results from its new ChatGPT mental health safety measures, alongside an internal analysis suggesting that potentially millions of users’ conversations show signs of emotional reliance on the chatbot.
Digging into the data: OpenAI’s analysis estimates that in a given week, 0.15% of users have conversations with ChatGPT that indicate heightened emotional attachment, while another 0.15% have conversations that include “explicit indicators of potential suicidal planning or intent.”
- An estimated 0.07% of users show possible signs of mental health issues related to psychosis or mania, per OpenAI, though the company cautioned that such conversations are rare and difficult to detect and measure.
- More than 800 million people use ChatGPT every week, per Wired. By Wired’s math, applying those percentages to that user base, about 2.4 million people may be expressing suicidal thinking to ChatGPT or prioritizing the chatbot over real-life loved ones, school, or work.
Zooming out: OpenAI previously announced new safeguards for people showing signs of mental health crises, and the results posted this week show improved responses and fewer “undesired” answers from GPT-5. More than 170 participating psychiatrists, psychologists, and primary care physicians evaluated the results.
- Across mental health conversations, GPT-5 reduced undesired answers by 39% to 52% compared with GPT-4o, per the experts’ assessment.
Why it matters: With ongoing shortages of mental healthcare providers and rising healthcare costs, budget-conscious consumers are turning to AI chatbots for informal therapy.
- Nearly 1 in 4 US adults (23.4%) experienced a mental illness in the past year, per a September report from Mental Health America.
- Almost half (49%) of large language model (LLM) users who self-reported a mental health condition use chatbots such as ChatGPT, Claude, and Gemini for mental health support, per a February 2025 Sentio University survey of 499 US adults with ongoing mental health conditions who have used LLMs.
- Accessibility (90%) and affordability (70%) were the leading reasons those respondents cited for turning to chatbots for mental health support.
Our take: Retroactive mental health and wellness safeguards are necessary in AI chatbots, and OpenAI’s new safety updates show progress. But they’re only one part of the ecosystem. Healthcare and tech companies also need to adopt more monitoring and clinician oversight for people already using these tools. Marketers should educate parents of teens and young adults about safe AI use, and emphasize best practices like clinician collaboration, backup safety measures, and transparent data policies.