In Focus delivers deeper coverage of the political, cultural, and ideological issues shaping America. Published daily by senior writers and experts, these in-depth pieces go beyond the headlines to give readers the full picture.
The Pentagon, under the Trump administration, is pursuing an expansive effort to become what it describes as an “AI-first fighting force.”
The department has signed several contracts with artificial intelligence companies over the past year to integrate their advanced platforms across the military’s classified and unclassified networks. On Friday, the department announced its newest slate of deals with SpaceX, OpenAI, Oracle, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services to use their services in classified settings.
In announcing the deals, the department said it and these companies “share the conviction that American leadership in AI is indispensable to national security.” The Pentagon also said the companies agreed to allow the department to use their technology for “lawful operational use.”
The man responsible for leading the department’s artificial intelligence charge is Emil Michael, the former Uber executive who is now serving as undersecretary of war for research and engineering.
In discussing the new agreements on Friday, Michael said in a CNBC interview, “What we’ve learned since we started this effort at the Department of War is that it’s irresponsible to be reliant on any one partner.”
Lauren Kahn, a former Pentagon official and now an analyst with Georgetown’s Center for Security and Emerging Technology, told the Washington Examiner, “It’s honestly a really good thing, and more than a good thing, it was an inevitability, it was a necessity.”
“It’s long overdue that the Pentagon has access to these systems,” Kahn added.
While the AI push at the Pentagon did not begin under the Trump administration, the department released its new AI Acceleration Strategy in January, laying out three main tenets: warfighting, intelligence, and enterprise operations. Under the warfighting tenet, the goals are to discover, test, and scale new ways of fighting with and against AI-enabled capabilities; to develop AI-enabled battle management and decision support; and to accelerate AI-aided military simulation development.
“We will unleash experimentation, eliminate bureaucratic barriers, focus our investments and demonstrate the execution approach needed to ensure we lead in military AI,” Secretary of War Pete Hegseth said at the time. “We will become an ‘AI-first’ warfighting force across all domains.”

Under the intelligence umbrella, the policy calls for accelerating the collection and analysis of technical intelligence on threats and foreign military equipment, so that intelligence can be turned into weapons much faster. The last section, enterprise operations, includes department-wide access to frontier generative AI models, like Google’s Gemini and xAI’s Grok.
“We absolutely have to stay ahead,” Hegseth said in front of the Senate Armed Services Committee on Thursday. “The advantage that AI provides applied to any number of capabilities, whether it’s domain awareness, targeting cycles, you name it, AI, and leveraging it, that’s why we’ve made it the forefront. It’s AI-first with everything we do, integrating it at every potential echelon to ensure we can respond faster. If we’re better at that than any adversary is, it’s going to give us an advantage, and we have to maintain that.”
A Pentagon official told the Washington Examiner, “I would say at a very broad level, everything is going to have a spring of AI on it. That goes with any piece of technology — it’s safe to assume that going forward.”
“AI, obviously, is something that Emil’s heavily focused on, but it’s not limited to just that,” the official continued. “It all has a very broad swath across all the different areas of technology development in the department.”
Under questioning from Sen. Jacky Rosen (D-NV), Hegseth said, “We follow the law and humans make decisions,” affirming that “AI is not making lethal decisions.”
The United States is far from the only country trying to harness the power of AI for military purposes.
Last week, White House Office of Science and Technology Policy Director Michael Kratsios published a memo alleging that the government “has information indicating that foreign entities, principally based in China, are engaged in deliberate, industrial-scale campaigns to distill U.S. frontier AI systems.”
Per the memo, the administration intends to share information with American AI companies about these attempted thefts, enable better coordination with them against such attacks, try to develop best practices to identify, mitigate, and remediate these activities, and explore ways to hold foreign actors accountable.
Reporting from the New York Times last month suggested that China’s technological prowess at autonomous drones, recently on display in a military parade, “set off alarm bells” in the Pentagon, which determined it was lagging behind. While the U.S. and China are locked in an AI arms race, they are far from the only countries investing significant resources in military AI.
Use of AI
Pentagon employees can use AI for routine tasks, and it can also be involved in kinetic operations.
The department launched an official AI platform, GenAI.mil, along with Google Cloud’s Gemini for Government in December. And to date, more than 1.3 million department personnel have used it, already generating tens of millions of prompts.
“We’ve seen numerous anecdotes from across the joint force of folks shaving what has taken months down to days, and thousands of manpower hours shaved down by simply putting Gen AI to good use,” the Pentagon official added.
In one example of how the department is using AI, per the official, the Army XVIII Airborne Corps was able to shorten the time it takes to write corps- and division-level operations orders for the Southcom area of responsibility to about six weeks using GenAI, a process that usually takes between nine and 12 months.
Another primary use of AI, like with Project Maven, is to take intelligence gathered from countless entities and synthesize it in a fraction of the time it would take a human to do.
“We’re trying to synthesize data among all of our different providers and the intel agencies to obviously get a clearer picture of different streams of data, whether it’s satellite imagery or whatever we have at our disposal to make better decisions, augment the warfighter, and make sure we always maintain a dominant advantage in any domain of warfare,” the official continued.
In 2017, the department launched Project Maven, saying at the time that it needed “to do much more and move much faster” in order to “integrate artificial intelligence and machine learning.”
The military has paired the Maven Smart System, built by Palantir, with Anthropic’s Claude, using them to sift through classified data gathered in real time from satellites, surveillance, and other intelligence, specifically during the war in Iran. The systems also suggested hundreds of targets and issued precise locations for them, speeding up the campaign, according to the Washington Post.
Anthropic
While the Pentagon announced the deals with seven tech companies, Anthropic was not among them.
The department listed Anthropic as a “supply-chain risk” in early March, a designation historically reserved for foreign companies, over disputes about guardrails for the military’s use of its AI platform in warfare. Previously, Anthropic’s Claude model was the only AI platform allowed on the Pentagon’s classified network.
Despite the designation, which Anthropic has sued the administration over, the company is still in talks with the White House about a new deal. The company’s CEO, Dario Amodei, met with senior administration officials at the White House on April 17, and shortly after, President Donald Trump said it was still “possible” the two sides would make a deal.
Without mentioning Amodei by name, however, Hegseth called him “an ideological lunatic who shouldn’t have sole decision-making over what we do,” during his Senate testimony on Thursday.
Anthropic recently unveiled its newest AI platform, Claude Mythos, which the company said is so powerful at identifying and exploiting hidden flaws in software that it does not plan to release it to the public. The company also announced an initiative with other tech companies to use Mythos as part of their defensive security work, and it gave the model to more than 40 additional organizations that build or maintain critical infrastructure so they can shore up their own defenses.
Michael, in a Friday morning interview on CNBC, sought to separate the department’s dispute with Anthropic from the broader effort to ensure Mythos can safeguard America’s infrastructure.
“I think the Mythos issue that’s being dealt with governmentwide, not just at [the] Department of War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them,” he said.
Michael is one of several Trump-appointed Pentagon officials with a business or tech background. Collectively, they are trying to overhaul how the Pentagon operates with the private sector and defense industrial base far beyond artificial intelligence.
