Palantir, the Denver-based data analytics and artificial intelligence company, is a key software provider for the Department of Defense—and the main channel by which the Department has been using Anthropic’s large language model, Claude.
Karp says he had been in numerous discussions with all parties involved—discussions he declined to give specifics about, as he says he doesn’t want to “out conversations” or “bash people.”
But Karp does want to make one thing clear: The Defense Department is not using AI for domestic mass surveillance on U.S. citizens—and, to his knowledge, it has no plans to.
“Without commenting on internal dialogues, there was never a sense that these products would be used domestically,” Karp said. “The Department of War is not planning to use these products domestically. That’s a completely different kettle of fish… The terms the Department of War wants are completely focused on non-American citizens in a war context.”
Palantir has a vast business doing work for the U.S. government, including the DoD. Anthropic partnered with Palantir in 2024 to offer its AI technology to the DoD via Palantir. Anthropic also began working directly with the DoD last year to create a version of its technology designed for the Defense Department.
Palantir, which was funded by the CIA’s venture capital arm early on and whose software has been used in counter-terrorism efforts abroad, has long been accused of helping government and intelligence agencies spy on civilians and potential domestic suspects. Karp has repeatedly rebutted such claims for over a decade and has spoken about the importance of setting technical guardrails around technology that could be used in the U.S. for domestic surveillance. Palantir early on created a “Privacy and Civil Liberties” team—an interdisciplinary group of engineers, lawyers, philosophers, and social scientists—tasked with building privacy‑protective features into its products and fostering a culture of responsible use. The team helped set up internal channels, including an ethics hotline, for employees to flag work they viewed as crossing ethical lines.
Karp told Fortune he is “very sympathetic with arguments against using these products inside the U.S.” and said that he is “totally in favor” of setting terms of engagement and limits to how domestic agencies can use artificial intelligence.
“Quite frankly, I think we should self-impose them,” Karp said of these terms of engagement. “The Valley should have a consortium: This is what we’re going to do, and this is what we’re not going to do,” he said.
But Karp drew a sharp distinction between whether tech companies should set terms with domestic agencies and whether they should set them with the Department of Defense, which is primarily focused on managing the United States’ relationships with other countries and its adversaries.
“What we’re talking about now is using products vis-à-vis someone who’s trying to kill our service members,” Karp said, noting that he personally supports a “wide license” of usage for the Department of Defense specifically.
“If we knew China and Russia and Iran wouldn’t build them, I would be in favor of very heavy—very heavy—legal constraints,” Karp said. But he points out that American adversaries will build them and use them against the U.S. anyway. “I don’t think this is an opinion. I think this is a fact, and that fact means I think the Department of War should have wide license to use these products.”