
Meta’s content moderation pullback cuts takedown errors, removals

The news: Meta’s content moderation policy changes, including the end of its fact-checking program in January, have decreased content removal mistakes.

  • Erroneous content takedowns dropped by half between Q4 2024 and the end of Q2 2025, per its Q1 Integrity Report.
  • The overall amount of content that breaks Meta platform guidelines “largely remained unchanged for most problem areas.”

What changed? Total removals, including for hateful content and bullying, fell by as much as 50%. Meta also proactively acted on less content before users reported it.

  • Meta took down 3.4 million pieces of content for hateful conduct on Facebook and Instagram between January and March 2025, compared with 7.4 million in the same period of 2024.
  • It took action on 5.1 million pieces of bullying or harassing content, down from 7.9 million.
  • Meta acted on 366 million pieces of spam in Q1 2025, compared with 436 million a year earlier.

Actions taken on fake Facebook and Instagram accounts came to 1 billion in Q1, up from 631 million.

AI scale back: In January, Meta said it would reduce its reliance on automated systems to scan for policy violations and would shift focus to high-level violations such as terrorism, child exploitation, and fraud. “Using automated systems to scan for all policy violations … has resulted in too many mistakes and too much content being censored that shouldn’t have been,” Meta chief global affairs officer Joel Kaplan said in a blog post.

However, AI hasn’t been scrapped from the process. In the report, Meta states that it’s using LLMs to remove content from review queues when the company is confident there isn’t a violation, freeing up human moderators to focus on material that’s more likely to violate community guidelines.

Our take: Meta’s effort to balance free speech and harm reduction is showing progress. But as political tensions and AI scams continue to rise, its “lighter touch” moderation will continue to be put to the test.

