This was the crux of the Pentagon’s stated objection to Anthropic’s existing contract. The military did not think it was right to have a private company dictating policies to an elected government.
Most Americans might agree with the Pentagon's position, at least in principle. In practice, though, the matter is complicated by three things. First, AI technology is moving extremely fast, but the mechanisms of democratic control (legislation, Congressional oversight, elections) move extremely slowly. In the three years since ChatGPT debuted, Congress has passed no comprehensive federal AI legislation. The Trump Administration has dismantled the limited AI regulations put in place by its predecessor, while also acting to punish states that pass their own AI rules.
So while many people might agree that policies on the government's AI use ought to be set by elected officials, there is the practical issue of what to do when those elected representatives fail to act. Arriving at AI policy through contractual negotiations between labs and the government is a poor substitute for true democratic governance, but it may be better than no governance at all. The controversy over Anthropic's Pentagon contract should be a wake-up call for Congress to act.
Second, the trend in the U.S. government over the past several decades has been to interpret existing laws broadly in order to expand its power to surveil citizens with technology. (The story has largely been one of the executive branch gradually clawing back surveillance powers it lost through Congressional action after the scandals surrounding Watergate and the Church Committee hearings in the mid-1970s.) Much of the military's activity is also cloaked in secrecy, which makes democratic oversight and accountability difficult. This constant pushing at the boundaries of what the law will allow has made the public distrustful of the government's intentions. So it's not surprising that some people now place more faith in a seemingly well-intentioned and brilliant, but unelected, technology executive such as Anthropic's Dario Amodei to do the right thing and set the right policies.
Finally, there is the issue many Americans have with this specific government. The Trump administration has repeatedly taken unprecedented actions to punish domestic dissent, often on flimsy legal justifications or none at all, and has deployed the military at home to intimidate or punish perceived opposition. It has also launched several military actions overseas with little to no legal justification. So is it any wonder that many question whether this particular administration should be given the power to use AI for anything its own lawyers believe is legal?
The Pentagon's current approach comes close to nationalization by other means. One option the DoW threatened was using the Defense Production Act, a Cold War-era law, to compel Anthropic to deliver an AI model on its preferred terms: a soft nationalization of Anthropic's production pipeline. And the retaliatory decision to label Anthropic a "supply chain risk" is designed in part to intimidate other AI companies into accepting the Pentagon's preferred contract terms, another nationalization-adjacent move.
Altman has claimed he struck his deal with the Pentagon in part to de-escalate the tension between the government and AI companies, saying that “a close partnership between governments and the companies building this technology is super important.” While I’m unsure of Altman’s true motives, I agree with him on this last point. At a time when AI potentially threatens unprecedented changes to the economy and society, fomenting distrust and conflict between the government and the people building advanced AI systems seems like a pretty bad idea.
With that, here’s more AI news.