According to Weiss-Blatt, those philanthropists have collectively poured more than $1 billion into efforts to study or mitigate “existential risk” from AI. But she singled out Moskovitz’s organization, Open Philanthropy, as “by far” the largest donor.
The organization pushed back strongly on the idea that it was projecting sci-fi-esque doom-and-gloom scenarios.
Sacks’ venture capital firm, Craft Ventures, did not immediately respond to a request for comment.
The “propaganda money” Sacks refers to comes largely from the Effective Altruism (EA) community, a wonky group of idealists, philosophers, and tech billionaires who believe humanity’s biggest moral duty is to prevent future catastrophes, including rogue AI.
Adelstein disagrees, arguing that the reality is “more fragmented and less sinister” than Weiss-Blatt and Sacks portray.
“Most of the fears people have about AI are not the ones the billionaires talk about,” Adelstein told Fortune. “People are worried about cheating, bias, job loss — immediate harms — rather than existential risk.”
He argues that pointing to wealthy donors misses the point entirely.
“There are very serious risks from artificial intelligence,” he said. “Even AI developers think there’s a few-percent chance it could cause human extinction. The fact that some wealthy people agree that’s a serious risk isn’t an argument against it.”
To Adelstein, longtermism isn’t a cultish obsession with far-off futures but a pragmatic framework for triaging global risks.
“We’re developing very advanced AI, facing serious nuclear and bio-risks, and the world isn’t prepared,” he said. “Longtermism just says we should do more to prevent those.”
He also brushed off accusations that EA has turned into a quasi-religious movement.
“I’d like to see the cult that’s dedicated to doing altruism effectively and saving 50,000 lives a year,” he said with a laugh. “That would be some cult.”