“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” a Meta spokesperson told Fortune in a statement. The company didn’t confirm or deny the details from NPR’s reporting.
“We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues. Our commitment is to deliver innovative products for people while meeting regulatory obligations.”
From the start, humans have conducted nearly all of Meta’s privacy and integrity reviews. But algorithms could soon be handling some of the company’s most sensitive issues.
The $1.46 trillion technology company told Fortune that it still relies on “human expertise for rigorous assessments and oversight of novel or complex issues,” and that AI will take over only “low-risk decisions.” But internal documents obtained by NPR show that the technology is slated to evaluate areas such as AI safety, youth risk, violent content, and the spread of falsehoods, reviews that have historically been conducted by Meta’s employees. Those human risk assessors needed sign-off from others before sending out updates; now, AI will make its own evaluations of the dangers.
“If you push that too far, inevitably the quality of review and the outcomes are going to suffer,” Krieger said.