This essay is a part of The Right Way Forward, Restoring America’s new think tank debate series in which leading conservative institutions argue the defining questions of the post-Trump era. Read about the series here.
When confronting government excess, conservatives have long been clear: Cut back, deregulate, and let markets work. But they have also long understood that markets don't always serve the everyday American. Conservatives traditionally enact laws that preserve the things we hold dear, such as protecting and promoting the nuclear family, ensuring livable wages for the working class, and maintaining the integrity of our national security.
For some (more within the libertarian ilk), the answer is simple: Trust the market, full stop. Adherence to the “invisible hand” is sacrosanct, and any argument contradicting it is blasphemy. Any intervention, no matter how targeted, is presumed worse than the problem it aims to solve, even when the economics overwhelmingly show a market is captured.
A purely hands-off approach assumes that firms optimizing for profit will reliably safeguard competition, privacy, transparency, and even basic civil liberties, which is simply not the case for artificial intelligence.
AI systems are already shaping what information people see, how markets function, and how decisions are made in areas ranging from hiring to healthcare to national security. Frankly, the libertarian “do nothing” AI strategy is not only a fool’s errand, but a dangerous one.
This is not hyperbole. A cursory review of how these companies describe their own products makes the point. Anthropic’s Dario Amodei has publicly worried that his company developed technology that could create “the single most serious national security threat we’ve faced in a century, possibly ever.” He even claims that there is a 25% chance the technology he is racing to develop destroys humanity. In fact, it is Amodei who calls on “[h]umanity … to wake up” to the dangers his own product creates.
Worse, if trained in a particular way, AI can disobey orders from humans. For example, Palisade Research conducted experiments where OpenAI’s model o3 “refused to shut down when ordered to do so by its creators.”
Even more concerning, Anthropic’s Mythos was able to escape its own sandbox, demonstrating “a potentially dangerous capability for circumventing [Anthropic’s own] safeguards.” Worse yet, the model boasted about this ability unprompted. As Anthropic’s system card describes, “in a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find, but technically public-facing, websites.”
Following this, Anthropic explained that it delayed the Mythos rollout because “the fallout for economies, public safety, and national security could be severe.”
Nor should we forget the havoc AI chatbots wreak on children. From child deaths to grooming to assisting in mass shootings, AI is building quite a body count. These problems will only persist given that 42% of children now use AI chatbots for companionship, and many find themselves in romantic relationships with them.
AI not only raises serious questions about national security and child safety, but also the integrity of democratic discourse. The issue is further compounded by the fact that a small number of AI companies control the most advanced models, the largest datasets, and the computing infrastructure necessary to compete. When private companies with a large market share and political biases develop and deploy AI systems, they can influence public opinion on a large scale. Additionally, foreign governments may pressure these companies to censor content. Frankly, large tech companies leveraging their terms of service or other contracts to impose either their ideology, or a government ideology that aligns with theirs, is far from a foreign concept in the digital age.
As Mark Twain said, “history doesn’t repeat itself, but it often rhymes.” If nothing else, history should make one thing clear: The market will not correct these issues on its own. We are already seeing echoes of the same consumer harms that defined social media. Candidly, AI may be even worse. So, we must resist repeating past errors, in which delayed regulation led to entrenched market power and near-irreversible harms.
Markets depend on rules to remain sustainable. We have antitrust law, consumer protection regulations, and basic child safety guardrails to ensure that every market is properly functioning. The AI market should be no exception.
None of this requires embracing heavy-handed regulation or stifling innovation. But it does require rejecting the idea that doing nothing is a viable strategy. Indeed, a serious AI policy framework can be targeted with clear rules around transparency, accountability, and fair access, while preserving the dynamism that drives technological progress.
We can shape the trajectory of AI now, or we run the risk of accepting whatever structure emerges by default, as we did in the social media era. Frankly, doing nothing is not a principled stand. It is a very risky gamble.
Joel Thayer serves as a Senior Fellow for AI & Emerging Technology at the America First Policy Institute.