
Sony AI launches consent-based data set to expose algorithmic bias

The news: Sony AI released the Fair Human-Centric Image Benchmark (FHIBE), a freely available image data set to test AI fairness using images from 2,000 volunteers across 80 countries—all consent-based and removable on request, per Engadget.

Sony AI’s data set is a departure from the norm of data scraped from the internet and other sources without consent and could provide a baseline for future bias testing.

“This project comes at a critical moment, demonstrating that responsible data collection—incorporating best practices for informed consent, privacy, fair compensation, safety, diversity, and utility—is possible,” said Alice Xiang, lead research scientist for AI Ethics at Sony AI.

Why it matters: Sony says FHIBE is the first global, consent-based data set designed to uncover bias in the way AI “sees” people. No existing large language models (LLMs) passed all of its fairness tests, a sign that AI’s inherent bias and lack of inclusivity remain persistent problems.

Here’s how it works:

  • FHIBE highlights where AI gets things wrong in identifying people or labeling images.
  • It shows that details such as hairstyle or lighting can affect how accurately AI recognizes certain groups.
  • Sony says this data set can help fix problems before AI tools reach the public.
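The core check behind a benchmark like this is measuring whether a model's accuracy differs across demographic groups. Here is a minimal sketch of that idea; the group names, data records, and the disparity threshold are illustrative assumptions, not FHIBE's actual schema or methodology:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, prediction_was_correct).
# Groups and values are made up for illustration only.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(records):
    """Compute per-group accuracy over a labeled evaluation set."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, is_correct in records:
        totals[group] += 1
        if is_correct:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

acc = accuracy_by_group(results)
# The gap between best- and worst-served groups; a large gap flags bias
# that should be addressed before a tool ships.
gap = max(acc.values()) - min(acc.values())
```

A real benchmark would run this comparison across many attributes (such as hairstyle or lighting conditions) rather than a single group label, but the underlying question is the same: does accuracy hold steady for everyone?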

A breakthrough for brands: Because it’s consent-based and globally diverse, FHIBE can strengthen computer vision tools used in advertising, image generation, and audience targeting.

Marketers who depend on AI to analyze images, segment audiences, and create visuals can lean on FHIBE as a verified, bias-tested foundation, saving time on audits and reducing the risk of unfair or inaccurate results.

Our take: Independent data sets like FHIBE give marketers, platforms, and regulators a common reference point for evaluating AI performance. That helps brands prove compliance, reduce reputational risk, and speed up adoption of trustworthy automation.

Tools like FHIBE could reduce bias and rebuild trust in how AI sees—and represents—people in marketing and advertising processes.

This content is part of EMARKETER’s subscription Briefings, where we pair daily updates with data and analysis from forecasts and research reports. Our Briefings prepare you to start your day informed, to provide critical insights in an important meeting, and to understand the context of what’s happening in your industry. Non-clients can request a demo of our full platform and coverage.
