While Google has struck a defiant tone, internal backlash appears to be mounting, with several employees criticizing the deal publicly.
Tensions between tech workers and management over military applications are not new, particularly when AI systems risk being used in warfare, but Google’s own stance has been gradually shifting in ways that alarm critics. In the wake of the Project Maven controversy, for example, Google published a set of AI principles pledging not to develop AI for weapons or for surveillance that violates internationally accepted norms. But, in February 2025, the company updated those principles and removed that explicit pledge from its public website.
Laura Nolan, a former Google employee who resigned over Project Maven, told Fortune it is unsurprising that employees working on a general-purpose technology, such as AI, would be uneasy about their work contributing to military targeting systems.
“These are not people who are necessarily expecting to work at a defense contractor, and suddenly they are,” she said. However, she also said that workers today have less influence than they once did, as cost-cutting and layoffs across the tech sector have weakened employee leverage and made collective organizing more difficult.
“The companies want to redirect money into AI, and they think that this may even be able to replace engineers,” Nolan said. “Staff in tech have also never been particularly well organized because historically, it’s been a good business to be in and staff have normally been treated very well,” she said.
Google also appears to have learned lessons from the Project Maven controversy.
“One of the things the company learnt from the Maven incident was they very much started to crack down on internal communication, they decommissioned a lot of the internal mailing lists, and they decommissioned the internal social network,” she said. “It is harder to organize internally now.”
So far, the only organized pushback from employees has been an open letter to management protesting military uses of the technology, which has amassed around a thousand signatures, according to one Google DeepMind researcher who spoke to Fortune but asked for anonymity to speak freely about their employer. Part of the issue, the researcher said, is that some within the company feel the Pentagon deal fundamentally clashes with DeepMind’s values and has left employees questioning whether the AI systems they help to build will now be deployed in ways they consider dangerous and cannot see or verify.
“There was a pride in doing AI for good for a very long time,” the researcher said. “Suddenly, the things I’ve pushed to improve might be used in very different ways with not enough oversight to harm people.”
The researcher also said many staff were still unaware of the deal because Google never clearly communicated that it was negotiating—or had signed—the contract. The closest Google has come to responding to employees’ concerns is publishing an internal memo about “responsible AI” and military partnerships that did not explicitly acknowledge the agreement, they said. The researcher called the lack of transparency around the contract “pretty indicting” for Google and said it felt as if the deal had been done “in the dark.”
“We need to use the little leverage that exists to maybe get leadership to sort of maybe at least commit to more transparency,” the researcher said. They added that as AI-driven automation reduces headcounts across the industry, it has become harder to mount the kind of internal pushback that helped kill Google’s Project Maven contract in 2018.
Representatives for Google did not respond to a request for comment from Fortune by the time of publication.
The deal—and Google’s decision to push ahead with it despite strong employee opposition—has put fresh pressure on a question that has dogged the AI industry since Anthropic’s negotiations with the Pentagon publicly collapsed earlier this year: whether AI companies can or should impose meaningful limits on how governments use their technology, especially when it comes to autonomous weapons and mass surveillance, and whether employees have any real power over how the technology they create is used.
The concerns around Google’s deal are the same two that have plagued other AI companies: autonomous weapons and mass surveillance. On weapons, critics worry AI could theoretically be used to autonomously identify and select targets without direct human oversight. On surveillance, AI can already aggregate scattered data points into a comprehensive picture of a person’s life, and legal experts say doing so is currently lawful. That is the case even though several U.S. laws, including the 1978 Foreign Intelligence Surveillance Act, the 2015 USA Freedom Act, and the Fourth Amendment to the U.S. Constitution, which protects citizens from unreasonable searches and seizures, would all seemingly prohibit mass surveillance of U.S. citizens. Under existing law, these experts say, government authorities can buy commercially available data from brokers and feed it to AI systems, amounting in practice to mass surveillance of Americans.
“Given that we offer general-purpose models and not models that are specifically trained or evaluated for such purposes, there are huge risks,” the Google researcher said. “With mass surveillance, it’s very clear that this is really dangerous, and we just don’t have the laws or the regulations.”
They noted that current large language models like Gemini are not yet suited to run directly on weapons systems, because they are too slow and too large to be embedded in something like a drone.
The bigger issue, they said, is the precedent these “all lawful purposes” contracts set for future, more capable systems. They argued that Google’s agreement risks normalizing a model in which companies hand over powerful, general-purpose AI to the Pentagon with few meaningful constraints, making it much harder to roll back or tighten those terms later.
Google is not the first AI company to sign a Pentagon deal that critics say falls short on these two issues, but legal experts say its contract appears to be the most permissive yet.
Following Anthropic’s rupture with the Department of War over its refusal to sign a contract that included the “all lawful purposes” language the Pentagon has been insisting on, both OpenAI and Elon Musk’s xAI inked deals with the Pentagon that allowed their tech to be deployed for “all lawful use” by the government. OpenAI’s decision, coming after it had publicly stated that it supported Anthropic’s red lines, sparked employee dissent within the company, led to customer boycotts of ChatGPT, and prompted at least one senior employee to resign from the AI lab. The backlash was so widespread that OpenAI CEO Sam Altman later publicly apologized for the “sloppy and opportunistic” deal and said the company would renegotiate parts of it.
Google’s deal hasn’t drawn the same level of scrutiny as OpenAI’s, even within the company.
“Some people actually aren’t even aware of the letter because there is no internal communication about this at all,” the Google researcher said. “With all the blowback against OpenAI, this is just a hope that people have moved on and this is the new normal.”
Legal experts have said that the language in Google’s deal appears to be less restrictive and more permissive of government use than OpenAI’s.
“The OpenAI contract seemed like it did give some kind of contractual guarantee that the models weren’t going to [be] used for certain kinds of mass domestic surveillance,” Charlie Bullock, a senior research fellow on LawAI’s U.S. Law and Policy team, told Fortune. “Even that contractual guarantee is not present in Google’s deal.”
Bullock added that under Google’s terms, if technical safeguards within the models prevent the government from doing something it wants to do, Google is obliged to step in and remove those safeguards. According to Bullock’s assessment of the contract, the government can do whatever it wants, as long as it’s lawful; OpenAI’s contract, by contrast, appeared to contain no comparable language about removing or adjusting safety settings or filters.
However, he also noted that OpenAI, unlike Google, had published only a smaller portion of its contract with the Pentagon, and those assurances may be undermined elsewhere in the document.
Seán Ó hÉigeartaigh, a research professor at the Centre for the Future of Intelligence, said the Google agreement appeared “strictly weaker” than OpenAI’s on the available evidence.
“From a legal perspective, it looks less strong and thus more concerning,” he said, adding that it was “disappointing” that Google’s deal had not attracted the same level of public discourse and internal debate as OpenAI’s.