
Most US hospitals aren’t testing their predictive AI models for bias

The data: Roughly two-thirds (65%) of US hospitals use predictive AI models, but fewer evaluate those tools for accuracy (61%) or for bias (44%), according to a January 2025 study published in Health Affairs. Researchers analyzed responses from 2,425 acute care hospitals.

How hospitals are using predictive AI models: The most common clinical use cases include:

  • Predicting health trajectories or risks for inpatients (92%)
  • Identifying patients who are at high risk for needing follow-up outpatient care (79%)
  • Recommending treatments (44%)
  • Health monitoring (34%)

The problem: Many predictive AI models aren’t being tested for accuracy or bias.

Most hospitals lack the resources to develop AI models in-house, which in turn leads to less internal testing and evaluation. Deploying untested models could end up harming patients by perpetuating or exacerbating health inequities.

For example, a patient might not get appropriate follow-up care or treatment if a hospital is relying on recommendations from an AI model that’s trained on data reflecting only white men or that’s based on race-based medical misconceptions.

The final word: The study’s findings highlight the need for more rigorous testing and oversight of AI models in clinical settings. This could become a more difficult undertaking following President Trump’s recent decision to roll back a Biden administration executive order that included rules designed to ensure the healthcare industry responsibly implemented AI.
