Meta’s Shift to Automated Risk Assessments: A Double-Edged Sword

For years, Meta has maintained a rigorous process for evaluating the potential risks associated with new features on its platforms, including Instagram, WhatsApp, and Facebook. This involved teams of human reviewers assessing whether updates could violate user privacy, harm minors, or exacerbate the spread of misleading content. However, recent internal documents obtained by NPR reveal a significant shift: up to 90% of these risk assessments will soon be automated.
The Automation Revolution
This transition to automated risk assessments means that critical updates to Meta’s algorithms and safety features will largely be approved by artificial intelligence systems. The human element, which once provided a layer of scrutiny and debate, is being significantly reduced. This change is seen as a boon for product developers, allowing them to roll out updates and features more swiftly. Yet, it raises concerns among current and former employees about the potential for real-world harm due to insufficient oversight.
A former Meta executive, who chose to remain anonymous, expressed apprehension about the implications of this shift. "If this process means more products launching faster with less rigorous scrutiny, it creates higher risks," they stated. The fear is that problems with product changes may not be identified until after they have caused real-world harm.
Meta’s Justification
In response to these concerns, Meta has stated that it has invested billions in user privacy and that the automation of risk assessments is intended to streamline decision-making. The company claims that "human expertise" will still be utilized for complex issues, and only "low-risk decisions" will be automated. However, internal documents suggest that even sensitive areas such as AI safety and youth risk are under consideration for automation.
The New Review Process
Under the new system, product teams will receive "instant decisions" after completing a questionnaire about their projects. The AI will identify risk areas and outline requirements that must be met before a product can launch. While manual reviews by humans will still occur for projects deemed to involve new risks, this will no longer be the default process.
Concerns from Within
Critics within Meta argue that many product managers and engineers lack the expertise in privacy and risk assessment necessary to make informed decisions. Zvika Krieger, a former director of responsible innovation at Meta, noted that product teams are primarily evaluated on their speed of launching products, not on their ability to assess risks. This could lead to a culture where important safety considerations are overlooked.
Krieger cautioned that while streamlining reviews through automation could be beneficial, pushing this too far could compromise the quality of oversight. "If you push that too far, inevitably the quality of review and the outcomes are going to suffer," he warned.
The European Union Exception
Interestingly, the internal documents indicate that users in the European Union may be somewhat insulated from these changes. Decision-making and oversight for products and user data in the EU will remain with Meta’s European headquarters in Ireland, which must comply with stricter regulations governing online platforms.
A Shift in Company Culture
The changes at Meta reflect a broader trend toward looser speech policies and faster updates to its applications. The shift follows CEO Mark Zuckerberg's efforts to align more closely with political figures, including President Trump, and marks a cultural tipping point for the company.
Katie Harbath, a former public policy director at Facebook, acknowledged that while AI could help streamline processes, it is crucial to maintain human checks and balances. "If you want to move quickly and have high quality, you’re going to need to incorporate more AI," she said, but emphasized that these systems must be complemented by human oversight.
The Bigger Picture
As Meta continues to evolve its risk assessment processes, the implications of these changes are far-reaching. The company is under increasing pressure to compete with rivals like TikTok and OpenAI, which may be driving its push towards automation. However, the potential for unintended consequences looms large as the company weighs speed against safety.
In summary, while the automation of risk assessments at Meta may facilitate faster product launches, it raises significant concerns about the adequacy of oversight and the potential for real-world harm. As the company navigates this new landscape, the effectiveness of its approach will be closely scrutinized by employees and users alike.

