The maker of the AI chatbot Claude said in a statement that it’s not walking away from negotiations but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
Sean Parnell, the Pentagon’s top spokesman, said earlier on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.
The talks that escalated this week began months ago. Amodei said that if the Pentagon doesn’t reconsider its position, Anthropic “will work to enable a smooth transition to another provider.”
Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to save us from ourselves.”
“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.”
He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”
Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.”
“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
While Pentagon officials say they will always follow the law in their use of AI models, the department has taken steps to change the culture among the military legal ranks.
The same month, Hegseth also fired the top lawyers for the Army and the Air Force without explanation. The Navy’s top lawyer had resigned shortly after the election in late 2024.
___
O’Brien reported from Providence, Rhode Island. Associated Press writer Ben Finley contributed to this report.