Meta withdraws from Media Rating Council brand safety audits

The news: Meta withdrew from Media Rating Council (MRC) brand safety audits last week, just months after its accreditation was officially issued, per Adweek.

  • Instagram and Facebook feeds were accredited for content-level brand safety by the MRC in June, indicating to brands that ad inventory on Meta’s two dominant platforms was brand-safe. Facebook’s in-stream video content accreditation was also renewed in June.
  • Accreditations for all of these inventory types have now been pulled after Meta withdrew from the MRC’s annual auditing requirements for brand safety.
  • A Meta spokesperson told Adweek that Meta advertisers “[value] validation of third-party and suitability metrics,” leading Meta to “[ask] the MRC to prioritize a third-party brand safety audit.”
  • Meta still maintains accreditation for its measurement categories, including video viewability, display ad impression metrics, and sophisticated invalid traffic detection.

Zooming out: Meta’s withdrawal builds on years of advertiser and regulatory scrutiny over the brand safety capabilities of Instagram and Facebook.

  • Advertisers have frequently complained about limited visibility into where ads appear on Meta platforms and the difficulty of determining whether ads run alongside inappropriate content—a critical issue as Meta faces an influx of deepfakes and other harmful AI-generated content.
  • The company announced in January that it would replace its previous fact-checking system with community notes, reigniting advertiser concerns over brand safety.
  • Meta faces political and regulatory risks in key markets like the EU over allegations that it violates the Digital Services Act (DSA), which sets strict standards for hate speech and misinformation.

Yes, but: Meta is still working to address brand safety concerns in other ways, courting the majority of marketers who view social media as the channel posing the biggest brand safety challenges.

  • The company announced last week that it was expanding its third-party verification capabilities to Threads feeds through partners like DoubleVerify and Integral Ad Science.
  • Meta is also attempting to address teen safety across its main platforms by tightening safeguards for Instagram teen accounts and, most recently, giving parents the option to block AI chatbots for teen users.

What it means for marketers: Despite its other brand safety moves, Meta’s step away from the MRC shows that advertisers now operate in a digital ad space with fewer brand safety guarantees, making it vital for marketers to enhance their own monitoring and verification efforts.

  • Brands must collaborate closely with third-party verification partners and vet vendors regularly to ensure internal processes align with company needs.
  • Refining exclusion and inclusion lists will be critical. Marketers should regularly audit and update these lists so ads don’t appear alongside undesirable content and so the lists reflect current risks and sensitivities.
  • Leveraging contextual targeting and genAI will also prove valuable, preserving brand safety while making campaigns more effective. These tools help identify and avoid risky content more accurately than broad blacklists.
