Google agrees to deal with Pentagon for AI usage in classified projects

Published April 28, 2026 12:49pm ET | Updated April 28, 2026 1:31pm ET

The War Department has agreed to a deal with Google to use the company’s AI models for classified projects.

The deal, first reported by The Information, will allow the department to use Google’s AI for “any lawful government purpose.” Google joins OpenAI and Elon Musk’s xAI, both of which already have agreements in place to supply AI models for classified use.

“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security. We support government agencies across both classified and non-classified projects, applying our expertise to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure,” a Google spokesperson told the Washington Examiner.

The Pentagon’s deal with Google notes that the company does not have “any right to control or veto lawful government operational decision-making.”

The Google spokesperson continued: “We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security. We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”

More than 600 Google employees sent a letter to CEO Sundar Pichai on Monday urging him not to agree to a deal that allows the company’s AI to be used for classified work, according to the Washington Post.

“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways. This includes lethal autonomous weapons and mass surveillance but extends beyond,” the letter read. “The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads. Otherwise, such uses may occur without our knowledge or the power to stop them.”

The Pentagon has sought deals with most major AI providers, aiming to get them to agree that the department can use their programs for “any lawful” purpose. Anthropic, one of those companies, was listed as a supply chain risk after it refused to allow such broad usage of its AI platform, Claude.

An official in the office of the Under Secretary of War for Research and Engineering (R&E) confirmed to the Washington Examiner that the official agreement includes the “lawful use” language.


Anthropic did not want Claude to be used for mass domestic surveillance or fully autonomous weapons, neither of which department officials say they intend to pursue, but the Pentagon refuses to give private companies the final say over how it uses their programs.

The company has filed two lawsuits, still playing out, over the department’s decision to label it a supply chain risk, a designation that had previously been exclusive to foreign companies. Anthropic CEO Dario Amodei met with senior administration officials at the White House earlier this month, and shortly afterward, President Donald Trump said it is still “possible” the two sides finalize an agreement.

OpenAI and xAI agreed to their deals with the Pentagon during the department’s falling out with Anthropic.