Anthropic has sued the U.S. Department of War for classifying the company as a "supply chain risk," effectively banning its AI model Claude from the defense industry. While the lawsuit focuses on the legitimacy of that classification, one of the disputes that led to it, the use of AI for mass domestic surveillance, calls into question Americans' existing and future privacy rights.
In January 2026, the Department of War asked AI companies to provide access to their models without restrictions. This led to a dispute with Anthropic because the company's terms of use with the Department of War included two limits: no mass surveillance of Americans and no fully autonomous weapons. Those terms became unacceptable to the administration, leading to an ultimatum and to Anthropic holding its ground.
The genesis of this argument appears in Anthropic CEO Dario Amodei's article on the dangers of AI. Amodei at times reads like a left-wing utopian, lending truth to Trump's narrative around him. However, his concerns are serious and legitimate. Amodei specifies that "we should use AI for national defense in all ways except those which would make us more like our autocratic adversaries," citing the Chinese Communist Party's repression of its own people. Anthropic is not an ordinary corporation; legally, it is a public benefit corporation. Anthropic interprets its duty as "balanc[ing] stockholder interests with its public benefit purpose of responsibly developing and maintaining advanced AI for the long-term benefit of humanity." Considering Anthropic's published constitution, acceptable-use guidelines, and exceptions for governments, its general commitment to its version of public benefit is obvious.
The only contested use cases were mass domestic surveillance and fully autonomous weapons. Anthropic's argument against fully autonomous weapons is practical: current AI models are simply not yet good enough to be used responsibly for fully autonomous weaponry. Anthropic's argument against mass domestic surveillance is philosophical. Military weapons are directed at enemies; surveillance is directed inward.
A 2022 report from then-Director of National Intelligence Avril Haines, cited by Anthropic, demonstrated that commercially available information is already being harvested for intelligence purposes. The adage that "whenever a service is free, you are the product," meaning your private information is what is sold to finance otherwise free services, is important here. Many Americans implicitly understand this; many simply accept updated terms of service without ever reading them or considering the implications for their data. AI-based surveillance should prompt many Americans to read the terms of service they agree to more carefully, and to consider which companies, and for which purposes, they are willing to provide their personal information.
The federal government purchasing this data and using it for military operations could implicate Americans' constitutional rights.
Consider how many Second Amendment rights groups correctly identify that a gun registry violates Americans' rights. Imagine the government building such a registry through mass data collection and AI systems. The same outcome would be reached without the legality of the traditional method ever being at issue. That is the implication of Anthropic's protest.
Is an American data state along the lines of the Chinese Communist Party's imminent? No, because existing laws would already prohibit that. However, AI systems have made it feasible for the government to purchase and analyze data in ways that were previously impossible. This can be a good thing, but it requires lawmakers to re-examine and potentially rewrite existing laws with an eye to what is now possible through AI.
American courts have read a limited right to privacy into the Constitution, derived largely from "penumbras" in the Bill of Rights. Yet something as important as data describing how one lives one's life should not depend on a mere penumbra for its protection.
Anthropic accurately notes that the law has not kept pace with technology. The company argues that "[t]hese techniques would have been unimaginable when Congress enacted the existing frameworks regulating how the Executive Branch may conduct surveillance," and it is right. Mass surveillance requires data on scores of people who are not the direct target of an investigation, and it requires computational power that did not exist a decade ago. Congress should respond and engage in a thoughtful debate that makes the rights of Americans in this new digital world explicit.
If AI systems can perform mass domestic surveillance, then the most effective way to prevent such surveillance is not to make privacy-violating inputs available for sale in the first place. Yet data also serves many legitimate government purposes. Thus, the new capability of AI surveillance should prompt a renegotiation of what should be for sale, and to what degree, rather than a tearing down of data infrastructure.
For example, AI-based mass domestic surveillance should inform the discussion around public video cameras. If a camera feed can be used for real-time tracking of Americans' movements and behavior, then communities should weigh that possibility when setting up these systems.
No one wants a Luddite military. Yet the clear message for average Americans should be that current data protections were not created for the world we now live in. History is unambiguous: what can be done will eventually be done. Congress must act before that logic runs its course.
John Peluso is a Policy Analyst at the Plymouth Institute for Free Enterprise at Advancing American Freedom.


