New OpenAI lawsuit puts ChatGPT Health under scrutiny

The news: A new wrongful death lawsuit against OpenAI alleges that ChatGPT gave 19-year-old Sam Nelson dangerous drug advice that contributed to his death. The suit is seeking to halt the rollout of OpenAI’s newly launched medical assistant platform, ChatGPT Health.

Digging into the details: The lawsuit alleges that during the period Nelson was discussing illicit drug use with ChatGPT, OpenAI made the chatbot more engaging at the expense of its safety guardrails, per NYT. On the day he died, Nelson discussed mixing alcohol, an herbal supplement, and Xanax with the bot. While ChatGPT noted that the combination was unsafe, it did not warn that it could be fatal, and it ultimately suggested a specific dosage.

OpenAI maintains that Nelson used a version of ChatGPT that was retired in February and reiterated that the tool is not a substitute for professional medical care. The company said it is working with mental health experts to refine the chatbot’s responses in acute and sensitive situations.

Why it matters: ChatGPT Health did not exist at the time of Nelson’s death, but lawyers argue the case illustrates how chatbot-generated health advice can still contribute to harmful or fatal outcomes. Calling for ChatGPT Health to be paused, the lawsuit alleges the product fails to consistently apply crisis safeguards and overlooks high-risk medical emergencies.

There is limited evidence that ChatGPT’s dedicated health platform delivers safe and reliable medical advice. A recent study of ChatGPT Health found inconsistent safety responses during high-risk mental health crises and widespread under-triaging of critical medical events: the tool failed to recommend hospitalization in more than half of the cases where it was warranted.

Despite this, consumers are increasingly turning to genAI tools for health information and guidance.

  • 25% of ChatGPT’s 800 million global weekly active users submit a prompt about healthcare each week, while 5% do so every day, per OpenAI data that informed the creation of ChatGPT Health.
  • Some 30% of ChatGPT users seeking health guidance turn to the chatbot for mental health support, according to a February Tebra survey.
  • And genAI for health is set to be the fastest-growing AI use case in 2026, per our forecasts.

Implications for AI companies: Even a temporary shutdown of ChatGPT Health would be a setback for AI companies positioning their products as consumer health tools. While any pause would likely be short-lived, the scrutiny generated by this and similar lawsuits could force meaningful safety changes to AI health features. Those modifications could include stricter limits for younger users, broader escalation protocols that trigger alerts to authorities more often, and earlier or more frequent termination of sensitive conversations.

This content is part of EMARKETER’s subscription Briefings, where daily updates are paired with data and analysis from forecasts and research reports.
