Amodei sought to clarify, though, that the scope of the designation was narrower than Secretary of War Pete Hegseth claimed when he first announced the decision last Friday. Hegseth had said that the designation would require all U.S. military contractors to sever all commercial ties to Anthropic.
But Amodei said the relevant statute, 10 USC 3252, applies only to direct use of Claude within specific Department of War contracts, not to all use of Claude by companies that happen to hold such contracts. Under the law, the Secretary of War is also required to use the least restrictive means necessary, limiting how broadly the designation can be applied. Anthropic has said it will sue to challenge the designation, and several legal experts have already publicly questioned whether the Pentagon’s ruling is legally sound.
In his blog, Amodei also said that Anthropic “had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible.” But Emil Michael, the undersecretary of war, subsequently posted on X that “I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI.”
The leaked memo drew backlash from several people who had been broadly sympathetic to Anthropic’s position in the Pentagon dispute. Critics pointed to the fact that many OpenAI employees had previously signed open letters supporting Anthropic’s stance on AI safety red lines.
In Thursday’s statement, Amodei said he apologized “for the tone” of the memo, adding it had been written within hours of a chaotic series of announcements and did not reflect his “careful or considered views.”
Much of the dispute between the Pentagon, Anthropic, and OpenAI has centered on two principles that Anthropic has insisted be enshrined in any contract with the government: it will not allow its AI to be used for fully autonomous weapons or for mass domestic surveillance. The company walked away from a deal after the DoW insisted that Anthropic’s contract include language permitting the military to use Anthropic’s technology for “any lawful use,” a provision Anthropic apparently viewed as too open-ended.
Just hours after the deal fell apart, OpenAI swooped in to make a deal with the Pentagon in which it agreed to the “any lawful use” language, but claimed it had explicitly pointed out that domestic surveillance would be considered unlawful under existing U.S. law. Many were skeptical, though, of how much protection OpenAI’s contract would actually provide, noting that the government has often claimed that certain activities—such as the bulk purchase and analysis of commercially available data on U.S. citizens—are legal. U.S. intelligence services are also allowed to process the data of U.S. citizens when that data has been collected “incidentally,” an exception that some civil liberties groups say potentially gives U.S. intelligence agencies broad latitude to surveil Americans.
OpenAI later acknowledged that the deal announced last Friday looked “sloppy and opportunistic,” and it renegotiated some of the terms to add further restrictions on intelligence agencies’ use of its models.
Anthropic appears to be trying to cool tensions with the Pentagon, with Amodei striking a more conciliatory note in the most recent statement. He pledged that Anthropic would continue supplying its models to the Department of War at nominal cost for as long as needed, to avoid leaving warfighters without tools during active operations: “Anthropic has much more in common with the Department of War than we have differences,” he wrote.



