A California judge said on Tuesday that the Department of War’s decision to label Anthropic a “supply-chain risk,” amid a dispute over how the department can use the platform, “looks like an attempt to cripple” the company.
U.S. District Judge Rita Lin started the Tuesday hearing by saying in part, “It looks like DoW is punishing Anthropic for trying to bring public scrutiny to this contracting dispute, which, of course, would be a violation of the First Amendment.”
Anthropic has filed two federal lawsuits arguing the Trump administration’s decision to label it a security risk amounted to illegal retaliation. Tuesday’s hearing was for the case filed in San Francisco.
Last month, the Pentagon tagged the AI company as a “supply-chain risk,” a designation usually reserved for foreign entities, after bitter negotiations between Anthropic and the department failed to make a breakthrough regarding the use of Claude, its AI chatbot.
Tuesday’s hearing ended without a decision, with Lin, a Biden-appointed judge, telling lawyers from both sides she “anticipate[s] issuing an order in the next few days.”
Anthropic wanted assurances from the Pentagon that its AI chatbot would not be used for mass domestic surveillance or to operate fully autonomous weapons. War Department officials disputed the claims, arguing that they would not allow any private company to dictate how it uses systems in war and maintained that they wanted complete authorization for “any lawful use.”
President Donald Trump announced on Feb. 27 that every federal agency must “immediately” stop using Claude amid the dispute, and shortly thereafter, War Secretary Pete Hegseth stated that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Trump administration attorney Eric Hamilton said during the hearing, “The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software … so it doesn’t operate in the way DoW expects and wants it to.”
Hegseth’s statement, specifically the assertion that any company that works with Anthropic cannot also do business with DoW, was a focus of Tuesday’s hearing. Hamilton acknowledged that the department has no legal authority to bar military contractors from using Anthropic for work unrelated to the department.
Hamilton said it’s his “understanding” that the department’s “present concern is with DoW personnel and contracting partners using Anthropic for DoW work, not for non-DoW work.”
Hamilton acknowledged that 10 U.S.C. § 3252, which grants the secretary the ability to exclude certain companies from competing for contracts, “does not go as far” as Hegseth’s statement.
“Standing here today, I’m not aware of any authorities that would permit DoW to categorically bar contractors from using a company’s products or services for non-DoW work,” Hamilton said. “But to reiterate, this post does not itself impose obligations on contractors or sub-contractors; that instead flows through the supply chain risk designations.”
When asked by Lin why Hegseth included the comment in his longer statement even though it “has no legal effect,” Hamilton responded, “I don’t know.”
The case has sparked a broader conversation about the military’s use of artificial intelligence in weapons systems and the level of deference AI companies should give to the government in how their technology is used.
