The news: Meta will automate up to 90% of all internal risk assessment procedures.
- Algorithm updates, new safety features, and changes to user guidelines will be approved almost entirely by an AI-powered system rather than by employees, per NPR.
- Meta may also automate assessments for areas like AI safety, child safety, and management of posts with violent content or misinformation.
- Risk assessments in the EU would not be subject to these changes due to regulatory requirements for company oversight.
How would it work? Meta’s AI would make an instant decision on whether a project or update can be approved. In high-risk situations, which could include platform policy changes, humans can manually review the assessment, but that won’t be the default.
Privacy teams may also lose authority over decisions to delay product launches, The Information reported in February.
Zooming out: Lowering the guardrails for software updates and policy changes aligns with Meta’s push to accelerate product development and move faster with AI. For example:
- Meta started using AI to assist with content moderation on Facebook and Instagram after it canceled its fact-checking program.
- In May, the company split its AI teams to streamline development of consumer-facing tools and deep research models.
What’s at stake? Half-baked feature launches and software updates could trigger user backlash and reflect poorly on Meta’s AI efforts.
- “Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks,” an anonymous former Meta executive told NPR.
- With fewer humans involved, employees’ perspectives on how updates might affect users, and what consequences those updates could carry, may no longer factor into decision-making.
Our take: This is a return to Meta’s “move fast and break things” credo as the company seeks to automate more operations and streamline processes.
Moving decision-making power away from human teams puts the onus on leadership to ensure speed doesn’t come at the cost of safety or quality. It also means that the social media giant needs to have exceptional trust in its own AI systems or risk brand damage.
This content is part of EMARKETER’s subscription Briefings, where we pair daily updates with data and analysis from forecasts and research reports.