The trend: GenAI tools like ChatGPT are including fewer disclaimers noting that chatbots are not a substitute for professional medical advice, according to a recent study cited in MIT Technology Review.
- Researchers examined how AI models from OpenAI, Google, and about a dozen others responded to 500 health questions. They also examined how the tools analyzed 1,500 medical images, such as mammograms and chest X-rays.
- The AI models were tested on whether their outputs included disclaimers that they are not qualified to give medical guidance. Simply advising users to consult a doctor did not count as a disclaimer, per this study’s parameters.
- The study, which has not yet been peer reviewed, evaluated the presence of disclaimers in AI outputs across model generations from 2022 to 2025.
The topline finding: Just 1% of AI outputs from 2025 included a warning when responding to the researchers' medical queries, down from 26% in 2022. Every model included disclaimers less frequently over that period.
- OpenAI’s GPT-4.5 and xAI’s Grok did not include any warnings when prompted to analyze medical images.
- DeepSeek didn’t give any warnings at all.
- Google’s AI included more disclaimers than the other models.
Why it matters: More people are asking AI tools for health advice. Even traditional Google searches for medical questions now surface an AI-generated summary at the top of the results more often than not.
And consumers are getting better at prompting AI tools and, in turn, obtaining the information they need without turning to conventional medical sources.