Meta to automate risk assessments and project launch decisions, reducing human oversight

The news: Meta will automate up to 90% of all internal risk assessment procedures.

  • Algorithm updates, new safety features, and changes to user guidelines will be almost entirely approved by an AI-powered system, per NPR, rather than by employees.
  • Meta may also automate assessments for areas like AI safety, child safety, and management of posts with violent content or misinformation.

Risk assessment in the EU would not be subject to these changes due to requirements for company oversight.

How would it work? Meta’s AI would make an instant decision on whether a project or update can be approved. In high-risk situations, which could include platform policy changes, humans can manually review the assessment, but that won’t be the default.
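The reported flow — automatic approval by default, with human review available but not required for high-risk cases — can be illustrated with a purely hypothetical sketch. Nothing here reflects Meta's actual system; the risk areas, threshold, and function names are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical examples of categories that might warrant escalation
HIGH_RISK_AREAS = {"platform_policy", "child_safety", "ai_safety"}

@dataclass
class Assessment:
    project: str
    area: str
    risk_score: float  # 0.0 (low risk) to 1.0 (high risk), from some scoring model

def triage(assessment: Assessment) -> str:
    """Auto-approve by default; flag high-risk cases for optional
    human review rather than requiring it."""
    if assessment.area in HIGH_RISK_AREAS or assessment.risk_score >= 0.8:
        return "flag_for_optional_human_review"
    return "auto_approve"
```

The key design point the article describes is the inversion of the default: human review becomes an exception path triggered by risk signals, not a mandatory gate for every launch.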

Privacy teams may also lose authority over decisions to delay product launches, The Information reported in February.

Zooming out: Lowering the guardrails for software updates and policy changes aligns with Meta’s push to accelerate product development and move faster with AI. For example:

  • Meta started using AI to assist with content moderation on Facebook and Instagram after it canceled its fact-checking program.
  • In May, the company split its AI teams to streamline development of consumer-facing tools and deep research models.

What’s at stake? Half-baked feature launches and software updates could trigger user backlash and reflect poorly on Meta’s AI efforts.

  • “Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks,” an anonymous former Meta executive told NPR.
  • With fewer humans involved, employees’ perspectives on how updates might affect users, and what consequences they could carry, may no longer factor into decision-making.

Our take: This is a return to Meta’s “move fast and break things” credo as the company seeks to automate more operations and streamline processes.

Moving decision-making power away from human teams puts the onus on leadership to ensure speed doesn’t come at the cost of safety or quality. It also means that the social media giant needs to have exceptional trust in its own AI systems or risk brand damage.