In a candid and emotionally charged interview, Amodei escalated his war of words with Nvidia CEO Jensen Huang, vehemently denying accusations that he is seeking to control the AI industry and expressing profound anger at being labeled a “doomer.” Amodei’s impassioned defense was rooted in a deeply personal revelation about his father’s death, which he says fuels both his urgent pursuit of beneficial AI and his warnings about its risks, as well as his support for strong regulation.
Amodei directly confronted the criticism, stating, “I get very angry when people call me a doomer … When someone’s like, ‘This guy’s a doomer. He wants to slow things down.’” He dismissed the notion, attributed to figures like Jensen Huang, that “Dario thinks he’s the only one who can build this safely and therefore wants to control the entire industry” as an “outrageous lie. That’s the most outrageous lie I’ve ever heard.” He insisted that he’s never said anything like that.
His strong reaction, Amodei explained, stems from a profound personal experience: his father’s death in 2006 from an illness whose cure rate jumped from 50% to roughly 95% just three or four years later. This tragic event instilled in him a deep understanding of “the urgency of solving the relevant problems” and a powerful “humanistic sense of the benefit of this technology.” He views AI as the only means to tackle complex issues like those in biology, which he felt were “beyond human scale.” He went on to argue that, despite his stark warnings about AI’s future impact, he is in fact among the most optimistic about the technology.
Amodei insisted that he appreciates AI’s benefits more than those who call themselves optimists. “I feel in fact that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists,” he asserted.
Amodei claimed he’s “one of the most bullish about AI capabilities improving very fast,” saying he’s repeatedly stressed that AI progress is exponential in nature, with models rapidly improving given more compute, data, and training. Because of this rapid advancement, he believes issues such as national security and economic impacts are drawing very close. His urgency has increased because he is “concerned that the risks of AI are getting closer and closer,” and he doesn’t see the ability to handle those risks keeping up with the speed of technological advance.
To mitigate these risks, Amodei champions regulations and “responsible scaling policies” and advocates for a “race to the top,” in which companies compete to build safer systems, rather than a “race to the bottom,” in which people and companies compete to release products as quickly as possible with little regard for the risks. Anthropic was the first to publish such a responsible scaling policy, he noted, aiming to set an example and encourage others to follow suit. He openly shares Anthropic’s safety research, including its interpretability work and constitutional AI, seeing them as a public good.
Amodei addressed the debate about “open source,” as championed by Nvidia and Jensen Huang. It’s a “red herring,” Amodei insisted, because large language models are fundamentally opaque, so there can be no such thing as open-source development of AI technology as currently constructed.
An Nvidia spokesperson, who provided a similar statement to Kantrowitz, told Fortune that the company supports “safe, responsible, and transparent AI.” Nvidia said thousands of startups and developers in its ecosystem and the open-source community are enhancing safety. The company then criticized Amodei’s calls for increased AI regulation: “Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic. That’s not a ‘race to the top’ or the way for America to win.”
Anthropic reiterated its statement that it “stands by its recently filed public submission in support of strong and balanced export controls that help secure America’s lead in infrastructure development and ensure that the values of freedom and democracy shape the future of AI.” The company previously told Fortune in a statement that “Dario has never claimed that ‘only Anthropic’ can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models’ capabilities and risks and can prepare accordingly.”
Amodei did not mention Altman directly, but said his decision to co-found Anthropic was spurred by a perceived lack of sincerity and trustworthiness at rival companies regarding their stated missions. He stressed that for safety efforts to succeed, “the leaders of the company … have to be trustworthy people, they have to be people whose motivations are sincere.” He continued, “If you’re working for someone whose motivations are not sincere, who’s not an honest person, who does not truly want to make the world better, it’s not going to work. You’re just contributing to something bad.”
Amodei also expressed frustration with both extremes in the AI debate. He labeled arguments from certain “doomers” that AI cannot be built safely as “nonsense,” calling such positions “intellectually and morally unserious.” He called for more thoughtfulness, honesty, and “more people willing to go against their interest.”
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.