Los Angeles Times will add AI-powered bias meter to content, but what about AI’s bias?

The news: The Los Angeles Times is adding an AI-powered “bias meter” to its news and opinion articles starting in January.

  • Patrick Soon-Shiong, the Times’ billionaire owner, said the meter will indicate underlying opinions in authors’ statements and will have an option for readers to leave comments.
  • The paper’s guild pushed back against Soon-Shiong’s suggestion that LA Times content is biased, and a veteran op-ed columnist resigned in protest of the decision.

The plan: The bias meter is being developed using technology from Soon-Shiong’s other business ventures, per The New York Times. The paper’s owner compared the tool to X’s Community Notes feature, which lets contributors add context to posts that could be misleading or incorrect.

Zooming out: Soon-Shiong’s interest in an AI bias meter stems from his concerns that the paper is becoming an “echo chamber” of opinions.

“What we need to do is not have what we call confirmation bias … the reader can press a button and get both sides of that exact same story,” Soon-Shiong said on a podcast.

The flaw in the machine: AI itself isn’t always neutral. Outputs are based on the data that systems are trained on and the prompts they receive, leaving room for AI models to inadvertently inherit biases.

  • Last week, an AI system used by the UK government to investigate welfare fraud was found to exhibit bias based on age, disability, marital status, and nationality, per The Guardian.
  • In May, a Yale study found notable racial bias in OpenAI’s ChatGPT model.
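The mechanism behind these failures is easy to demonstrate: a model can only reflect the patterns in the data it was trained on. The toy sketch below, with hypothetical headlines and a deliberately naive word-count classifier (not anything the LA Times has described), shows skewed training labels leaking directly into predictions:

```python
from collections import Counter

# Toy training set: headlines labeled "biased"/"neutral" by hypothetical
# annotators. The labels themselves are skewed: every headline mentioning
# "tax" was marked "biased", regardless of content -- a stand-in for a
# biased training corpus.
training = [
    ("city council approves tax increase", "biased"),
    ("tax reform debate continues", "biased"),
    ("local team wins championship", "neutral"),
    ("new library opens downtown", "neutral"),
]

# A naive word-frequency classifier: score each label by how often the
# headline's words appeared under that label during training.
counts = {"biased": Counter(), "neutral": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

# A perfectly factual headline inherits the annotators' skew purely
# because it contains the word "tax".
print(classify("state publishes annual tax revenue report"))  # -> biased
```

Any real bias meter would use a far more sophisticated model, but the failure mode is the same: if the training labels carry a skew, the model reproduces it at scale.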

“While the idea of enforcing standards of neutrality in political reporting is a fine idea, the notion of using an algorithm to do it seems questionable at best. AI … is not a failsafe replacement for human judgment,” Gizmodo writer Lucas Ropek said.

Our take: Although the LA Times hasn’t revealed concrete details of its bias scale or which AI model it will use, training the tool on a wide range of media content will be crucial.

A purely neutral media source may not appeal to all subscribers, especially in a fragmented political climate, and the paper will likely need to make the bias meter’s training data clear to gain readers’ trust.

This article is part of EMARKETER’s client-only subscription Briefings—daily newsletters authored by industry analysts who are experts in marketing, advertising, media, and tech trends.
