The Department of War gave Anthropic, the company behind the artificial intelligence chatbot Claude, an ultimatum that will expire at 5:01 p.m. on Friday: agree to grant the military unrestricted use of its AI for all lawful purposes, or risk losing its government contracts and potentially facing more severe consequences.
Officials have warned that they could either designate the company a supply chain risk or invoke the Defense Production Act to take more control of its products if it doesn’t comply.
The Pentagon wants Anthropic, and other AI companies that have contracts with the department, to allow the military to use their products for “any lawful use.” Anthropic CEO Dario Amodei said in a statement on Thursday that the company does not want Claude to be used for mass domestic surveillance or fully autonomous weapons.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said. “Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
“But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he continued. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on [research and development] to improve the reliability of these systems, but they have not accepted this offer.”
Pentagon officials dispute that they want to use Claude for such purposes, but also believe they should be the ones making the calls about the military’s use of the program.
Undersecretary of War for Research and Engineering Emil Michael called Amodei “a liar,” said he “has a God-complex,” and claimed he “wants nothing more than to try to personally control the U.S. military and is ok putting our nation’s safety at risk.”
Additionally, top Pentagon spokesman Sean Parnell disputed Amodei’s comments, saying the department has “no interest in using AI to conduct mass surveillance of Americans (Which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell added.
Claude is the only AI model used in the military’s classified systems, and it was reportedly used during last month’s raid to capture former Venezuelan dictator Nicolas Maduro.
President Harry Truman signed the Defense Production Act into law in 1950 amid supply concerns during the Korean War. It grants the federal government broad authority to force private companies to meet its needs in the name of national defense.
If the government invokes the act, it could use Claude however it wants, even if Anthropic disagrees.
In recent years, both President Donald Trump and former President Joe Biden used the DPA to boost production of supplies needed to slow the COVID-19 pandemic. Biden also used the law to speed up the production of infant formula during a shortage in 2022.
Alternatively, if the Pentagon declares Anthropic a “supply chain risk,” any company that does business with the military would be forced to cut ties with Anthropic. Anthropic said earlier this month that eight of the 10 biggest companies in the country use Claude. The designation has historically been applied to foreign companies, not American ones.
Experts have noted the apparent contradiction in the Pentagon’s threats: invoking a law that would let the government take control of a company’s products while simultaneously branding that same company a threat to national security.
The dispute is likely being closely followed by other AI companies.
Anthropic digs in on maintaining guardrails against the unethical use of its AI tools by the Pentagon.
“I don’t personally think the Pentagon should be threatening DPA against these companies,” Sam Altman, the CEO of OpenAI, said on CNBC on Friday. “But I also think that companies that choose to work with the Pentagon, as long as it is going to comply with legal protections and the sort of the few red lines that the field we have, I think we share with Anthropic and that other companies also independently agree with.”
Altman is open to making a deal with the department if its relationship with Anthropic is severed, but wants similar guardrails in place, according to the Wall Street Journal.
