The news: Misuse of AI chatbots in healthcare topped the list of 2026 health technology hazards, according to ECRI, a nonprofit healthcare research organization. ECRI compiles its annual list with input from engineers, scientists, clinicians, and other patient safety analysts.
ECRI noted that widely available large language models (LLMs)—including ChatGPT, Claude, Copilot, Gemini, and Grok—are neither designed nor regulated for healthcare and can occasionally produce incorrect and “sometimes dangerous” responses.
Compounding the risk, LLMs can hallucinate and are often predisposed to pleasing users rather than delivering the most accurate response, which can result in faulty medical advice, per ECRI.
Why it matters: Medical device misuse, patient harm, and cybersecurity threats to health tech products typically top ECRI’s hazard list. AI in clinical workflows has appeared on the list before, but the inclusion of consumer LLM use for health signals growing concern among medical and tech experts about potential harm.
People are increasingly turning to AI chatbots for health answers, despite uneven reliability.
Implications for AI companies: As general-purpose AI platforms promote fast, actionable health guidance, users must understand the technology’s limits. AI companies must establish robust protocols and guardrails, including grounding models in evidence-based medical literature, acknowledging potential gaps in reliability, and clearly disclaiming that AI responses don’t replace a physician’s diagnosis or treatment.
As companies like OpenAI and Anthropic roll out features such as analyzing patient medical records, they should also develop guidance for medical associations to help clinicians steer effective consumer use of health chatbots.