OpenAI plans to add new mental health safeguards for ChatGPT

The news: OpenAI is rolling out mental health safeguards in ChatGPT for people in crisis and strengthening protections for teens with new parental controls.

Digging into the details: OpenAI previewed details from its ongoing review in a blog post on Tuesday, outlining how it’s expanding interventions, making it easier to connect people with emergency services and expert help, and adding teen protections.

  • It’s working with 90 psychiatrists, pediatricians, and general practitioners on mental health issues, and it’s adding expertise in areas including eating disorders, substance use, and adolescent health.
  • Parental controls let parents link their accounts to their teens’ accounts (users must be at least 13 to use ChatGPT) and get notified if the system detects their teen is in “acute distress” (see the sketch after this list).
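For illustration only, here’s a minimal sketch of how a distress-detection-and-notify flow like the one described above might be wired up. Everything in it, the keyword “classifier,” the threshold, and the function names, is a hypothetical stand-in; OpenAI hasn’t published implementation details.

```python
# Hypothetical sketch only: OpenAI hasn't published implementation details.
# The keyword "classifier," the threshold, and all names here are stand-ins.
from dataclasses import dataclass

# Toy signal phrases; a real system would use a trained safety model.
DISTRESS_SIGNALS = {"hopeless", "can't go on", "hurt myself", "no way out"}

@dataclass
class Account:
    user_id: str
    age: int
    parent_contact: str | None = None  # set when a parent links the account

def distress_score(message: str) -> float:
    """Toy scorer: fraction of signal phrases present in the message."""
    text = message.lower()
    return sum(phrase in text for phrase in DISTRESS_SIGNALS) / len(DISTRESS_SIGNALS)

def surface_crisis_resources(account: Account) -> None:
    # In production this would render hotline numbers and expert-help links.
    print(f"[{account.user_id}] Showing crisis resources and expert-help links.")

def notify_parent(contact: str, user_id: str) -> None:
    # Stand-in for whatever channel (email, push) a real system would use.
    print(f"Notifying {contact}: possible acute distress detected for {user_id}.")

def handle_message(account: Account, message: str, threshold: float = 0.25) -> str:
    """Route a message: normal reply, or crisis path with optional parent alert."""
    if distress_score(message) < threshold:
        return "normal_reply"
    surface_crisis_resources(account)  # user-facing intervention comes first
    if account.age < 18 and account.parent_contact:
        notify_parent(account.parent_contact, account.user_id)  # teen safeguard
    return "crisis_intervention"

if __name__ == "__main__":
    teen = Account(user_id="teen_01", age=15, parent_contact="parent@example.com")
    print(handle_message(teen, "I feel hopeless, like there's no way out."))
```

The sketch keeps the two paths described in the announcement distinct: surfacing crisis resources to the user happens for anyone who trips the detector, while the parental notification fires only for linked teen accounts.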

How we got here: AI chatbots have been at the center of several recent cases of mental health harm, including teen suicides and a murder-suicide.

  • In August, a Connecticut man killed his mother and then himself after months of delusional interactions with ChatGPT, per The Wall Street Journal.
  • Last month, the parents of a 16-year-old boy sued OpenAI after their son died by suicide, alleging ChatGPT helped him explore suicide methods.
  • A California mom sued Character AI last year, alleging that an abusive relationship her teen son had with its chatbot led to his suicide.

Yes, and: Illinois became the first state to ban AI therapy last month, with a law that lets therapists use AI for administrative tasks but not for diagnosis or treatment.

Our take: Additional AI guardrails are a positive development for mental health, but tech companies should keep building more. Healthcare is an important emerging use case for AI, but when it comes to mental health, caution and vigilance need to trump speed to market.

