The Defense Department’s shocked realization of how deeply it relied on Anthropic’s AI is what ultimately led to their dramatic schism, according to a top Pentagon official.
After the U.S. military’s raid on Venezuela in early January that captured dictator Nicolás Maduro, Anthropic asked Palantir whether its AI had been used in the operation. While Anthropic has characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.
“I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?” Michael recalled. “So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative.”
The Pentagon insisted it would use the AI only in lawful scenarios and refused to accept any limits from the company that went beyond those constraints.
After failing to reach a compromise last week, President Donald Trump ordered the federal government to stop using Anthropic while giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also designated the company a supply-chain risk, meaning contractors can’t use it for military work.
For now, the military continues to use Anthropic during the U.S. war on Iran, as AI helps warfighters identify potential targets at a rapid pace.
During his podcast appearance, Michael raised the concern that a rogue developer could “poison the model” to render it ineffective for the military, train it to hallucinate purposefully, or instruct it to not follow instructions.
He then contacted OpenAI, which eventually reached a deal similar to the one Anthropic had. Elon Musk’s xAI was also brought into the classified fold, and the Pentagon is trying to get Google’s AI cleared for classified settings as well.
“I’m not biased,” Michael said. “I just I want all of them. I want to give them all the same exact terms because I need redundancy.”
He acknowledged that Anthropic had become “deeply embedded” in the department by providing forward-deployed engineers, while other AI companies hadn’t pursued enterprise customers as aggressively.
The falling-out between the Pentagon and Anthropic highlighted the clash of cultures between the defense establishment and Silicon Valley, which has its roots in military innovations but has since turned squeamish about seeing its technology used for war.