The case marks a historic first: the Department of Defense, informally renamed the Department of War (DOW) by the Trump administration, labeled a U.S.-based business a supply-chain risk to national security. The dispute is rooted in a contract negotiation that escalated quickly. The DOW wanted to add a blanket "all lawful use" clause to its contracts with the AI firm so the military could use Anthropic's Claude tool for any legal purpose.
The presiding judge in the case expressed doubts about the sweeping authority the Pentagon had wielded in the case. Federal District Judge Rita Lin said she would issue a ruling on Anthropic’s legal challenge “in the next few days,” and spent Tuesday’s hearing asking the parties questions about their disagreement.
During contract negotiations with the Pentagon in February, Anthropic balked at the possibility of the military using Claude for lethal autonomous warfare and mass surveillance of Americans, and attempted to insist on provisions expressly forbidding such uses. Anthropic, led by co-founder and CEO Dario Amodei, said it hasn't thoroughly tested those uses and doesn't believe they work safely. The DOW claimed those guardrails were unacceptable and that military commanders need latitude to make determinations on missions.
In briefs in the case and in court on Tuesday, the government said the administration’s actions were in response to Anthropic’s refusal to implement certain terms in its contract, and argued free speech wasn’t at issue in the case. Deputy Assistant Attorney General Eric Hamilton said the government has unrestricted power to determine which companies it will contract with. Hamilton said Anthropic’s conduct had raised concerns that future software updates could be used as a “kill switch” to keep the AI from functioning in military operations.
Rather, Lin said, the real question for the court was whether the government "violated the law" when it went beyond simply declining to use Anthropic's AI services and finding a more permissive AI vendor to work with.
“After Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that,” Lin said.
“What is troubling to me about these reactions is that they don’t really seem to be tailored to the stated national security concern,” said Lin. If the concern is about chain of command, DOW could just stop using Claude and go on its way, she said.
“One of the amicus briefs used the term ‘attempted corporate murder,’” she added. “I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.”
The American Federation of Government Employees, a union of 800,000 federal workers, said in its amicus brief that the Trump administration had a pattern of using national security concerns as a pretext for retaliation against free speech.
Microsoft wrote that a ban on Anthropic would hurt its own business, and could chill future defense-industry investment and engagement with AI.
The Human Rights and Technology Justice Organization's brief didn't take a position on who should win in court, but argued broadly against militarized AI, stating that its use could lead to catastrophic human rights risks.
Lin said she’ll issue an opinion this week.