A Pentagon dispute with Anthropic, one of the United States' leading AI companies, should never have been allowed to happen. Not because the fight itself was avoidable, but because the terms that caused it should have been settled before the first contract was signed.
They weren’t. That is a failure of policy, not just procurement.
The dispute forced into the open an issue Washington has been avoiding for years. The federal government is deploying AI systems it does not fully control. It is paying for infrastructure it cannot fully access. National security and economic policy are being built on a foundation where the rules can change after the fact. That is not a technology problem. That is a governance problem, and it has a price.
When the customer is the U.S. government and the customer does not control its own systems, taxpayers are funding someone else’s leverage.
Sovereign AI is not a complicated concept. The customer controls the data. The customer controls the infrastructure. The customer controls how the technology is used. Those are not exotic demands. They are reasonable expectations for any institution that cannot afford to be surprised, renegotiated, or locked out: defense agencies, hospitals, energy grids, and financial regulators.
Washington has been slow to acknowledge this. The result is a procurement landscape where consumer-grade terms are attached to enterprise and government contracts: accept the provider's conditions, revisit them later, and hope the contract holds. That is an abdication, not a policy.
The model that works already exists. Cohere, one of the leading enterprise AI companies in the market, built its business around customer control. Through its combination with Aleph Alpha, Cohere offers government customers full deployment control. The data stays where the customer puts it, the terms do not shift mid-contract, and accountability runs in one direction. That is the baseline the market should demand.
This is an affordability issue. When federal agencies cannot deploy AI with confidence, productivity stalls and investment slows. The downstream costs fall on working families in the form of slower wage growth and higher prices. Sovereign AI is not a niche defense procurement argument. It is a competitiveness argument, and competitiveness is an affordability argument.
Congress and the executive branch should act on three fronts. Federal procurement standards should require sovereign deployment options for any AI touching sensitive data or critical infrastructure. Contracting frameworks should define control, access, and dispute resolution before signing, or risk having those terms settled afterward in a public fight. And agencies deploying AI at scale should be required to demonstrate that they, not the vendor, hold the operational keys.
Companies that cannot meet a sovereign standard probably should not be running federal AI systems. Those that can will compete and win on merit.
The recent dispute was a warning. The window to set these standards while the market is still forming is open. It will not stay open forever.
Chuck Flint is the executive director of the Coalition for Affordability and Prosperity.
