The partnership reflects a growing drive by tech companies to sell their artificial intelligence products to militaries for a wide range of uses, including in Israel, Ukraine and the United States. However, human rights groups have raised concerns that AI systems, which can be flawed and prone to error, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people.
Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake “additional fact-finding.” The statement did not identify the outside firm or provide a copy of its report.
The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined Friday to comment further. Nor would Microsoft answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes.
The company’s statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats. Microsoft said it had also provided “special access to our technologies beyond the terms of our commercial agreements” and “limited emergency support” to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on Oct. 7.
“We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others,” Microsoft said. “We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza.”
The company did not say whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps it took to safeguard the rights and privacy of Palestinians.
In its statement, the company also conceded that it “does not have visibility into how customers use our software on their own servers or other devices.” The company added that it could not know how its products might be used through other commercial cloud providers.
Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working with international governments.
“We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict,” she said. “It’s like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world.”
Overall, Israel’s invasions and extensive bombing campaigns in Gaza and Lebanon have resulted in the deaths of more than 50,000 people, many of them women and children.
No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report.
Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement left many questions unanswered, including how Microsoft's services and AI models were being used by the Israeli military on its own government servers.
“I’m glad there’s a little bit of transparency here,” said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. “But it is hard to square that with what’s actually happening on the ground.”