Meta is developing an AI-powered system to evaluate potential harms and privacy risks for up to 90% of updates across its apps, including Instagram and WhatsApp, according to internal documents obtained by NPR.
Currently, Meta conducts mandatory privacy reviews under a 2012 FTC consent agreement, relying primarily on human evaluators. The new system would streamline this process: product teams would complete a questionnaire, and AI would deliver an "instant decision" on risks, along with compliance requirements, before launch.
While this shift could accelerate updates, a former Meta executive warned NPR that automation raises the risk that AI will miss potential harms a human reviewer would catch. The source cautioned that problematic changes could go live before their negative impacts become apparent.
A Meta spokesperson stated the company has invested $8 billion in its privacy program and remains committed to innovation while meeting regulatory standards.
"As risks evolve, we enhance our processes to better identify issues, improve decision-making, and refine user experience," the spokesperson said. "We use AI for consistency in low-risk decisions but rely on human experts for complex or novel challenges."