The data: Over one-third of Gen Zers (39%) and millennials (34%) who have used genAI tools to check symptoms report that they would put off seeing a doctor if the AI told them their issue was low-risk, according to an October 2025 poll from The Mesothelioma Center at Asbestos.com conducted by SurveyMonkey.
Why it matters: Regardless of age, genAI is becoming a go-to resource for consumers seeking clarity on new health issues.
- The survey found that for US adults across all major generations (Gen Z to baby boomers), understanding symptoms (72%) remains the most common health-related AI query.
- Over half (52%) of US adults have used ChatGPT at some point to research medical symptoms they were worried about.
- ChatGPT ranks second only to Google search (84%) among online tools people have used to check symptoms, per this survey.
- About 1 in 5 (22%) US adults have used Gemini to check symptoms.
But younger users in particular are quicker to adopt AI and increasingly comfortable acting on the health advice it offers.
- Younger consumers adopt genAI tools (for any purpose) at much higher rates than older people: Around two-thirds of all genAI users are under 44, per EMARKETER’s forecast.
- When AI labels symptoms as low-risk, Gen Zers are more likely to skip or delay a doctor’s visit (39%) than to book an appointment (36%), per Asbestos.com. About 26% are undecided on what action they’d take.
- Millennials showed a similar split: If an AI said their symptoms were low-risk, 34% would put off going to the doctor, while 31% would still go.
Implications for AI companies: AI users are starting to shift from treating the technology as a secondary or supplementary information source to depending on it as a primary one. This trend is especially pronounced for health queries, where the immediacy of AI-generated information is valuable for people without 24/7 doctor access or quick appointment options.
However, overreliance on AI for medical guidance carries real risk, especially as models are still maturing and sometimes produce faulty information. AI companies should add explicit in-chat disclaimers that their tools are not a replacement for medical care, and strengthen guardrails to block unvetted or potentially harmful health advice.
Go deeper with our Health Trends to Watch in 2026 report, which explores how genAI is poised to play a major role in consumers’ health journeys.