Big Tech companies agree to share AI models with US government for national security reasons

Published May 5, 2026 10:41am ET
Major U.S. technology companies have agreed to give the federal government early access to their most advanced artificial intelligence models for national security reviews ahead of public release. 

Companies including Google, Microsoft, and Elon Musk’s xAI will allow U.S. officials to evaluate new AI systems before they are publicly released, according to multiple reports. 

The arrangement is being coordinated through the Commerce Department’s Center for AI Standards and Innovation, which will assess risks tied to cybersecurity, misuse, and broader national security concerns. 

The agreement effectively gives the government influence and review over cutting-edge AI development at a stage previously controlled almost entirely by private companies. While officials are not blocking releases, the pre-release reviews could shape how and when new models reach the public. 

The move follows growing alarm in Washington over the capabilities of so-called “frontier” AI systems, particularly their potential to enable sophisticated cyberattacks or be weaponized by adversaries. 

The Biden administration had previously made voluntary arrangements with companies such as OpenAI and Anthropic to allow limited safety testing of advanced models. The latest agreements under President Donald Trump signal a broader and more structured effort to integrate national security considerations directly into the product development cycle. 

The push for tighter oversight has accelerated in recent months amid controversy surrounding Anthropic and its powerful Mythos model, which raised concerns from Trump’s technology advisers about its ability to identify and exploit system vulnerabilities.

The company has been locked in a high-profile dispute with the Pentagon after resisting demands to loosen safeguards on military use of its AI systems. War Secretary Pete Hegseth labeled Anthropic a "supply chain risk" over its insistence on those safeguards, a standoff that has resulted in several lawsuits between the two parties. 

At the same time, the War Department has expanded partnerships with other tech firms to deploy AI tools across military networks, underscoring the technology’s growing role in national defense and battlefield decision-making. 

SpaceX, OpenAI, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Google signed agreements with the Pentagon allowing their AI models to be used in classified systems. Before the new agreements, Anthropic’s Claude system was the only AI tool permitted in classified settings. 

The Trump administration, which had previously rolled back some Biden-era AI regulations, is reportedly weighing additional steps, including an executive order or legislation that could formalize federal oversight of high-risk AI systems.