Watchdog raises concerns over consumer use of health AI chatbots

The news: Misuse of AI chatbots in healthcare topped the list of 2026 health technology hazards, according to ECRI, a nonprofit healthcare research organization. ECRI compiles its annual list with input from engineers, scientists, clinicians, and other patient safety analysts.

ECRI noted that widely available large language models (LLMs)—including ChatGPT, Claude, Copilot, Gemini, and Grok—are neither designed nor regulated for healthcare and can occasionally produce incorrect and "sometimes dangerous" responses.

Compounding the potential for harm, LLMs can hallucinate and are often predisposed to please users rather than deliver the most accurate response, which can result in faulty medical advice, per ECRI.

Why it matters: Medical device misuse, patient harm, and cybersecurity threats to health tech products typically top ECRI's hazard list. AI in clinical workflows has appeared on the list before, but ECRI's inclusion of consumer LLM use for health signals growing concern among medical and tech experts about potential harm.

People are increasingly turning to AI chatbots for health answers, despite uneven reliability.

Implications for AI companies: As general-purpose AI platforms promote fast, actionable health guidance, users must understand the technology’s limits. AI companies must establish robust protocols and guardrails, including grounding models in evidence-based medical literature, acknowledging potential gaps in reliability, and clearly disclaiming that AI responses don’t replace a physician’s diagnosis or treatment.
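
Those safeguards follow a recognizable pattern: detect health-related queries, check whether an answer is grounded in retrieved evidence, and attach an explicit disclaimer. The sketch below is a minimal, hypothetical illustration of that pattern only; every name in it (guard_response, is_health_query, the keyword list) is invented for this example and does not reflect how any of the platforms named above actually implement guardrails.

```python
# Hypothetical guardrail sketch: none of these names or heuristics come from
# a real platform; they only illustrate the pattern described above.

DISCLAIMER = (
    "This information is general in nature and is not a substitute for "
    "diagnosis or treatment by a licensed physician."
)

# Crude stand-in for a real health-intent classifier.
HEALTH_KEYWORDS = {"symptom", "diagnosis", "dose", "medication", "treatment"}


def is_health_query(text: str) -> bool:
    """Flag queries that look health-related (keyword heuristic only)."""
    lowered = text.lower()
    return any(word in lowered for word in HEALTH_KEYWORDS)


def guard_response(user_query: str, model_answer: str, sources: list[str]) -> str:
    """Append reliability caveats and a disclaimer to health-related answers."""
    if not is_health_query(user_query):
        return model_answer

    parts = [model_answer]
    if sources:
        parts.append("Sources: " + "; ".join(sources))
    else:
        # Acknowledge the reliability gap when no evidence-based source
        # supports the answer, per the "potential gaps" point above.
        parts.append("Note: no supporting medical literature was retrieved "
                     "for this answer; treat it with extra caution.")
    parts.append(DISCLAIMER)
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(guard_response(
        "What is a safe ibuprofen dose for adults?",
        "Adults commonly take 200-400 mg every 4-6 hours as needed.",
        sources=[],
    ))
```

A production system would replace the keyword check with a trained intent classifier and the empty source list with retrieval from vetted medical literature; the sketch shows only the shape of the guardrail layer.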

As companies like OpenAI and Anthropic roll out features such as analyzing patient medical records, they should also develop guidance with medical associations to help clinicians steer effective consumer use of health chatbots.
