Meta introduces new brand safety protocols across Threads, Instagram

The news: Meta announced updates to its brand safety and suitability capabilities for Threads and Instagram this week as it looks to gain advertiser trust in its platforms amid regulatory scrutiny.

  • Meta expanded its third-party verification capabilities to Threads feeds through its business partners, including DoubleVerify, Integral Ad Science, and Scope3. Zefr support will be introduced in the future.
  • Each partner offers AI-driven solutions to report the context where ads appear, providing independent safety and suitability scores for adjacent content, content examples with associated risk levels, and impression-level data.
  • The company is also tightening safeguards on Instagram for teen accounts, which will now “be guided by PG-13 movie ratings by default.” Users under 18 cannot opt out without parental permission.
  • Other updates include restrictions on what accounts teens can discover, follow, and interact with; blocked search terms and results; stricter content recommendation protocols; and a parental oversight feature to allow parents to filter content for teen accounts.

Meta’s brand safety push: The moves are a response to broader brand safety concerns; over half (53%) of US marketers feel that social media presents the biggest brand safety challenges. And given Meta’s vast reach with advertisers and audiences, brand suitability has proven to be a challenge:

  • Marketers have previously complained of limited visibility into where their ads appear on Meta platforms, with many struggling to know whether their ads run alongside inappropriate content. This is a critical pain point as Meta contends with issues like an influx of deepfake content.
  • Meta also faces political and regulatory risks over allegations that it violates rules like the EU’s Digital Services Act (DSA), which sets strict standards for hate speech and misinformation. Meta has been accused of approving AI-generated ads with harmful content.
  • Regulators are increasingly cracking down on teen safety, while teens generally don’t trust tech companies—putting platforms that don’t prioritize safety at risk with regulators and audiences alike.

What it means for marketers: The new restrictions are a double-edged sword. On one hand, advertisers can be more confident that their ads will appear next to safe content that doesn’t damage brand image. On the other hand, reaching the younger audiences that help drive growth could become more challenging and require nuance.

Brands should prioritize contextual targeting and premium inventory sources to ensure ads appear in suitable environments, and conduct internal audits to safeguard reputation without sacrificing reach to intended audiences. Marketers should also prepare for potential youth audience shifts—especially as teen interest in Instagram is already falling—to maintain engagement as teens seek alternative channels.