Meta leans on AI to police content at scale

The news: Meta is cutting back on third-party content moderators in favor of AI to help its systems catch violations faster, stop more scams, and reduce “over-enforcement mistakes.”

Over the next few years, the social media giant plans to “focus on strengthening our internal systems and workforce.” Existing community guidelines and rules aren’t slated to change.

  • The company specifically pointed to AI as better suited to tasks like the repetitive review of graphic content.
  • Humans will still play a large role in managing higher-level decisions such as reporting content to law enforcement or managing account appeals.

The decision builds on Meta’s January 2025 plan to scale back its use of third-party fact-checking vendors.

What it means: The system change is a potential step toward cleaner, faster-moving content control, but it risks creating platforms where decisions over content violations are increasingly driven by opaque AI systems.

The moderation plan could improve safety while reducing predictability and control for brands.

  • Even with Meta aiming to reduce over-enforcement, marketers may still see inconsistent takedowns, especially for content related to divisive topics that could require context.
  • Brands could have less clarity about why content is flagged, even if human-led appeals speed up resolution after enforcement decisions are made.

Additionally, AI tends to struggle with nuance, such as satire, cultural context, and sensitive topics.

  • Meta—which has made moves in the past year to reduce “left-leaning bias”—could standardize enforcement in ways that lack contextual judgment.
  • The risk of biases in AI could increase the chances that political or cultural content is either unevenly moderated or evaluated in an overly simplified way.

Implications for marketers: Automating content moderation will help Meta reduce costs while it works to improve enforcement efficiency at scale. Although it could raise moderation concerns, Meta’s brand safety protocols and tools should insulate advertisers from direct exposure to controversial content.

The company’s massive reach and ad performance mean that even advertisers with brand safety concerns are unlikely to pull back spending.

This content is part of EMARKETER’s subscription Briefings, which pair daily updates with data and analysis from forecasts and research reports.
