In Silicon Valley’s quest to win hearts and minds over its artificial intelligence innovations, the spiraling public spat between Anthropic and its government client, the Pentagon, offers a revealing look at how public relations, not AI, has become the weapon of choice.
A recent Washington Post opinion piece claimed that “standing up to unreasonable and ethically challenging requests from the government ought to be commonplace in a free country. That’s what the artificial intelligence powerhouse Anthropic did last week.”
We can all agree on the first sentence. It’s the second that bears more scrutiny.
Judging by its own public statements, and by some apparently leaked communications, Anthropic, a private company, appears to be attempting to bend the Pentagon, and by extension our national security establishment, to its own boardroom’s political and cultural preferences.
These preferences, apparently influenced in part by the waning Effective Altruism movement, whose odd-at-best tenets once infused so many late-night, tech-bro-group-house Philosophy 101 discussions, are pushing Anthropic toward rigid, abstract constraints that are ultimately incompatible with the real-world demands our nation’s warfighters may one day face.
As the leader of a coalition that sent a letter to the Pentagon last week urging War Secretary Pete Hegseth to cancel Anthropic’s contract and treat its behavior as a supply chain risk, I applaud the president for taking the first step toward shutting down a critical national security threat.
So, what’s the next step? It’s for the rest of us to take a hard look at how Anthropic is framing its arguments. What initially appeared to be Anthropic’s “loss” has quickly turned into a branding coup.
For good reason, the Pentagon asked Anthropic to set aside its off-the-shelf usage policy agreement – usually meant for private companies and individuals – for matters of national security and global defense. It asked because a corporate terms-of-service document should not dictate or override the lawful missions of the United States military.
Anthropic CEO and Kamala Harris donor Dario Amodei responded by framing this as a moment to shield Claude’s “ethical” code from being used for Pentagon “mass surveillance” of Americans and for fully autonomous weapons. Language matters in public communications.
The Pentagon reminded Anthropic that its directive was to use Claude for “all lawful purposes.” By the government’s own insistence, mass surveillance of Americans isn’t legal, meaning that, on paper, Anthropic already had the protections it claimed to be fighting for. As the dispute deepened, the company’s public handwringing began to look less like principle and more like performance.
Anthropic also objected to the idea that Claude could be used to help autonomous weapons fire without human involvement, something Pentagon policy disavows and that isn’t happening anyway. U.S. guidance emphasizes human responsibility and safeguards, no matter what type of weapon is augmented by AI. This matters. It shows the Pentagon prioritizing civilian and soldier safety while Anthropic fixates on hypothetical technicalities that conveniently bolster its image.
The Post asserts in its piece, “Anthropic’s original sin with Trump world was its advocacy of AI regulation.” But, in truth, Anthropic’s original sin may have been its inability to strike a real balance between its professed values and its commercial ambitions.
Anthropic claims to want to create the most ethical AI, yet it is wading into the morally perilous world of defense contracting. It invoked First Amendment free-speech arguments in its fight against the Pentagon, yet it built one of the most heavily censored AI models on the market. And it is drawing “red lines” over the use of its technology after paying more than $1 billion to settle a class-action lawsuit over its misuse of thousands of authors’ books.
Yet much of the coverage of this moment extends Anthropic full credulity, and that dissonance gets lost. Perhaps the company is hoping its audience won’t look too closely at reality. It has certainly done a great job steering the public conversation back to its own mythologies and “red lines.”
And it works. As Anthropic’s CEO gave interviews, sent Slack messages that quickly leaked, and otherwise tuned the public relations language models behind his company’s hype machine, downloads of Claude soared, as the Post notes, and the tech world buzzed anew about the prospects of an initial public offering for Anthropic later this year. Who can argue with that kind of PR return on investment?
Ultimately, getting lost in the public relations back-and-forth means we’re not talking about the fundamental issue: Private companies can’t be allowed to dictate terms to our military.
Our increasingly connected world already offers real-world examples of the sometimes blurry lines between a government contractor’s product and its leadership’s personal preferences. Would the same observers who applaud Anthropic’s stand in this case applaud Elon Musk drawing red lines according to his personal, Trumpian politics over the use of Starlink in Ukraine or the use of SpaceX’s capabilities in classified missions?
Setting aside Anthropic’s “ethical AI” branding, are we truly comfortable with the idea that a private company can turn critical software into a bargaining chip for ideological concessions, at the risk of real-world impact to American service members in theaters of war?
Our adversaries would be delighted to hear that one of the world’s most powerful and vocal private companies is pressuring the Pentagon to stay woke.
Anthropic is using its PR chops to have this moment both ways, shaming the Pentagon while fighting to continue cashing its checks. In the end, we don’t want our warfighters to be the ones who pay the price.
George Landrith is the president of the Frontiers of Freedom Institute and the author of Let Freedom Ring… Again: Can Self-Evident Truths Save America from Further Decline?


