Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
“Who elected you and Sam Altman?” Cooper asked.
“No one. Honestly, no one,” Amodei replied.
Greater AI scrutiny and stronger safeguards were foundational to Anthropic's 2021 founding. Amodei was previously the vice president of research at Sam Altman's OpenAI, which he left over differences of opinion on AI safety.
“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”
Anthropic’s practice of calling out its own lapses and efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.
Anthropic did not immediately respond to Fortune’s request for comment.
Others have said Anthropic's strategy is one of "safety theater" that amounts to good branding without any real commitment to implementing safeguards on the technology. Amodei denied this and said the company is obligated to be honest about AI's shortcomings.
“It will depend on the future, and we’re not always going to be right, but we’re calling it as best we can,” he told Cooper. “You could end up in the world of, like, the cigarette companies or the opioid companies, where they knew there were dangers and they didn’t talk about them and certainly did not prevent them.”