Pentagon files first rebuttal to Anthropic lawsuit: ‘Unacceptable risk’

The Pentagon has filed its first rebuttal to two lawsuits from Anthropic, the artificial intelligence company that clashed with the Trump administration late last month over the U.S. military’s use of its technology.

In the 40-page court filing on Tuesday, the Department of War warned that Anthropic’s prior access to the department’s “technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains.”

Basing its argument on that concern, the federal government is asking the U.S. District Court for the Northern District of California to prevent one of the AI firm’s lawsuits from moving forward.

“After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic—in its discretion—feels that its corporate ‘red lines’ are being crossed,” the court document states. “DoW deemed that an unacceptable risk to national security.”

The legal battle stems from a Feb. 27 directive issued by President Donald Trump for all federal agencies to stop using Anthropic’s services with a planned six-month phase-out period. Shortly thereafter, War Secretary Pete Hegseth implemented the directive at his own department and labeled the company a “supply chain risk.”

As the dispute escalated, Anthropic’s $200 million contract with the Pentagon, awarded last year, collapsed. The contract covered the use of Anthropic’s AI technology in classified defense systems.

The company objected to the possibility that the Pentagon might use its AI for mass domestic surveillance or for autonomous weapons systems. The Pentagon disputes those allegations.

In the new filing, the department states that it “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor [does it] want to use AI to develop autonomous weapons that operate without human involvement.”

Anthropic filed two lawsuits on March 9 — one in the California-based federal court and another in the U.S. Court of Appeals for the District of Columbia Circuit — to challenge the risk label. The designation is notable because it’s typically reserved for foreign companies that the United States deems a national security risk. Anthropic is based in San Francisco.

The plaintiff argues that the Trump administration violated its First Amendment rights through the president’s directive and Hegseth’s implementation of it, but the administration insists the two actions are “distinct” and should be evaluated separately.

“At the outset, although Anthropic has lumped them together, it is important to distinguish the Presidential Directive from the Secretary’s actions—each is distinct and grounded in unique authority,” the filing reads. “The Directive flows from the President’s Article II power to supervise the Executive Branch, as confirmed by judicial precedent, while the Secretary acted pursuant to statute.”

The government maintains that the plaintiff’s First Amendment claim is unlikely to succeed on the merits because the case is unrelated to the company’s exercise of free speech.

“The record reflects that the President and the Secretary were motivated by concerns about Anthropic’s potential future conduct if it retained access to the Government’s IT infrastructure,” the document says. “Those concerns are unrelated to Anthropic’s speech, and no one has purported to restrict Anthropic’s expressive activity.”

Anthropic’s request for a preliminary injunction will be considered in a court hearing next Tuesday.


Days after its public blow-up with Anthropic, the Pentagon announced it reached a deal with OpenAI to use its technology. Anthropic operates the Claude chatbot, and OpenAI’s main service is ChatGPT.

The Pentagon signed a similar deal with Elon Musk’s xAI last month, allowing the military to use its Grok large language model. Claude was once the only model used by the military for sensitive intelligence and combat operations.
