“The issues are super complex, and demand clear communication,” he wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
According to Altman, the new contract language will state that OpenAI’s AI systems shall not be “intentionally used for domestic surveillance of U.S. persons and nationals,” consistent with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.
The renegotiated terms will also add explicit restrictions that cover commercially purchased data—such as cell phone location records or fitness app information—which has been a legal gray area. According to a report in The Atlantic, rival Anthropic had specifically sought similar guarantees against domestic surveillance in its own negotiations with the Pentagon. Its insistence on harder safeguards to prohibit the use of its tools for surveillance was reportedly one of the major stumbling blocks that ultimately collapsed those talks.
Despite the renegotiated terms, legal experts have questioned how enforceable the restrictions are.
“This seems like a significant improvement over the previous language with respect to surveillance, and I’m glad to see it,” said Charles Bullock, a senior research fellow at the Institute for Law and AI, in a post on X. “It does not address autonomous weapons concerns, nor does it claim to.”
Independent analysts as well as OpenAI staff have also advocated for a process in which independent lawyers would be able to review the full contract and share their analysis with concerned employees.
Anthropic had sought two hard limits in its negotiations with the Pentagon: a ban on its AI being used for mass surveillance of American citizens, and a prohibition on its technology being incorporated into autonomous weapons systems—defined as those capable of making a decision to strike targets without direct human oversight.
Critics, including Jonathan Iwry, a Fellow at the Accountable AI Lab at the Wharton School of the University of Pennsylvania, accused OpenAI of undercutting Anthropic at a critical moment.
“What is particularly disappointing is that the rest of the AI industry failed to come to Anthropic’s support,” Iwry told Fortune. “If these companies were serious about their commitment to safe and responsible AI (on which some of them built their reputations), they could have closed ranks and stood together against the Pentagon on behalf of the public. Instead, they let the administration play them off against one another as market competitors.”
Many of OpenAI’s own employees signed an open letter supporting Anthropic following the standoff. Consumers also signaled their support by sending Claude, Anthropic’s AI assistant, to the top of Apple’s App Store charts for the first time, suggesting users were switching in protest. Chalk graffiti criticizing OpenAI’s decision also appeared on the sidewalk outside its San Francisco offices.