Anthropic’s $200 million contract with the Department of Defense is up in the air after Anthropic reportedly raised concerns about the Pentagon’s use of its Claude AI model during the Nicolas Maduro raid in January.
“The Department of War’s relationship with Anthropic is being reviewed,” Chief Pentagon Spokesman Sean Parnell said in a statement to Fortune. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” an Anthropic spokesperson said in a statement to Fortune. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
Under the Defense Department contract, Anthropic won’t allow the Pentagon to use its AI models for mass surveillance of Americans or use of its technology in fully autonomous weapons. The company also banned the use of its technology in “lethal” or “kinetic” military applications. Any direct involvement in active gunfire during the Maduro raid would likely violate those terms.
Anthropic highlighted this position in a statement to Fortune. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy.”
The company “is committed to using frontier AI in support of US national security,” the statement read. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.”
Palantir, OpenAI, Google and xAI didn’t immediately respond to a request for comment.
Although the DOD has accelerated efforts to integrate AI into its operations, only xAI allows the department to use its models for “all lawful purposes,” while the other companies maintain usage restrictions.
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” the senior official told the outlet.
The Pentagon’s comments are the latest in a public dispute coming to a boil. The government argues that letting companies set ethical limits on their models would be unnecessarily restrictive, and that the sheer number of gray areas would render the technology useless in practice. As the Pentagon continues to negotiate with its AI subcontractors to expand usage, the public spat is becoming a proxy battle over who will dictate how AI is used.