
Meta’s safety council raises red flags over misinformation risks after moderation shift

The news: Meta’s Safety Advisory Council wrote an open letter criticizing the social media company’s decision to cut its fact-checking program, calling it “a concerning departure from Meta’s history of leadership and innovation in proactive harm prevention.”

  • The council said Meta’s decision to implement a community-notes model places an “unreasonable burden” on users to navigate harmful content and manage moderation themselves.
  • The open letter also touched on how scaling back topic restrictions could affect marginalized groups that are already targeted disproportionately online, including women, queer communities, and immigrants.

The Safety Advisory Council, which was founded by Meta in 2009, is a group of independent online safety organizations and experts that consults with the company on public safety issues.

Community notes concerns: While crowd-sourced fact-checking can help address misinformation, the council expressed concerns about its effectiveness, especially with the rise of AI-generated content.

  • “It’s unclear how Meta has weighed these challenges against the potential benefits. … Fact-checking serves as a vital safeguard, particularly in regions of the world where misinformation fuels offline harm,” the letter said.
  • The council pointed to studies of similar initiatives, such as X’s community-notes program, which found that polarizing issues often fail to reach consensus, leaving misinformation unchecked.

Scaling back the rules: In early January, Meta said it was removing restrictions on topics like immigration and gender to scale back its response to “societal and political pressure” and avoid obstructing free expression.

  • “We want to undo the mission creep that has made our rules too restrictive. … It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Joel Kaplan, Meta’s chief global affairs officer, said.
  • Kaplan added that Meta would focus its content moderation on more “high-severity” violations, like terrorism, child exploitation, and scams, and would only act on less severe policy violations if users personally report them.

Our take: Meta has the opportunity to use its AI models to enhance the community-notes feature and expedite screening of polarizing topics. Detailed reports on enforcement decisions could help users and advertisers understand Meta’s evolving brand identity and keep them engaged and on its platforms.

This article is part of EMARKETER’s client-only subscription Briefings—daily newsletters authored by industry analysts who are experts in marketing, advertising, media, and tech trends.