What the driverless car debate can teach us about AI

Published April 10, 2026 5:22am EST



From 2018 to 2024, there were more than 278,000 motor vehicle traffic fatalities. That number will likely appear in future public policy textbooks in a section on “Mistakes in Emerging Technology Policy.” Students will learn how Congress had a chance to make autonomous vehicles far more widespread as early as 2017, but passed on the opportunity. Despite clear evidence that AVs were safer than the average human driver in myriad contexts, two bills to support and spread AV progress failed. While those bills would not have prevented all of the subsequent deaths, hindsight makes it quite clear that introducing AVs sooner rather than later, and in more locations, would have saved many lives. The policy upshot is clear: The status quo may be just as risky as, if not riskier than, a tech-forward future.

Policymakers face a similar conundrum today. Will state and federal officials write a new chapter that signals the proper way to govern new technology or add another case study to the annals of missed opportunities? It is worth considering in more detail why that national AV policy effort fizzled, what it has meant for AV markets and public safety, and what lessons it can teach us for artificial intelligence and robotics policy going forward.

Safety advocates vs. actual public safety

In 2017, Congress had the chance to pass the AV START Act and SELF DRIVE Act. Though they differed slightly, passage of either would have created a federal framework for evaluating the safety of AVs, preempting some state-level regulations, and creating a path for introducing AVs into interstate commerce. Debate over these bills was muddled and ultimately derailed by bad-faith arguments not rooted in facts.

Inside of a Waymo autonomous taxi car in Austin, Texas. (Mandel Ngan/AFP via Getty Images)

Self-anointed safety advocates feared that the measures’ relatively minimal preemption standards “would undermine critical state and local responsibilities related to highway safety” and warned that “the public will be the crash test dummies in this dangerous experiment.” Those arguments were persuasive due to the political cachet of the advocates more than their evidence. Unions, trial lawyers, insurers, and others all spoke out against such measures as hasty and destined to hinder public well-being.

These objections were all rubbish. States and localities would have remained free to regulate road safety, mostly as they already did. The federal bills only dealt with the design, construction, or performance of AVs. The measures could have gone much further and addressed the problem of overly restrictive state and local regulations that would impede market development. But both bills left plenty of space for state action.

Congress, however, did not even get that minimal federal AV legislation done. The entire effort fell apart in 2017, and Congress still hasn’t been able to get anything over the finish line. There are some renewed federal AV policy efforts underway in the House of Representatives, but they are already experiencing pushback from the same opponents as before.

Regardless, nine years have now passed, and America is still struggling to get AVs on the road — just 10 cities have authorized them. Meanwhile, car crash deaths remain distressingly high. The years following the failure of Congress to pass AV legislation actually saw a significant uptick in roadway deaths, with over 40,000 Americans losing their lives in many of those years.

Transportation Secretary Sean Duffy sits in a Tesla Cybercab autonomous vehicle during the National AV Safety Forum at the U.S. Department of Transportation on March 10 in Washington. (Rod Lamkey, Jr./AP)

We cannot pin the blame for all those deaths on AV opponents, but we do know that there are always opportunity costs associated with inaction and perpetuating technological stasis. Overly precautionary constraints on innovation can result in an “invisible graveyard” of lives lost due to excessive hesitancy and an unwillingness to embrace new solutions that can advance public health and safety.

When the auto safety advocates were going around in 2017 terrifying policymakers and the public with predictions about how AVs would turn us into “crash test dummies,” they were implicitly selling us a lie that slowing down or stopping driverless car innovation translated to greater public safety. And that’s just not true. We know today that AVs are far safer than human drivers and getting better every year.

Those safety advocates were also far too willing to accept the traditional regulatory status quo and imply that the old auto safety regime served as an adequate baseline for improving public safety going forward. To be clear, some auto safety regulations have moved the needle on reducing fatalities and improving safety in important ways. Safety bumpers, seat belts, and back-up cameras have reduced the severity and frequency of traffic incidents. These incremental measures to make human drivers safer, however, can only go so far because humans will never have 360-degree vision or perfect attention. Notably, AVs have all that and more — they can see through cars, see around corners, and eliminate blind spots. Robots also don’t get drunk, drowsy, or distracted, three of the many factors behind the reality that humans cause the overwhelming majority of car crashes.

The public policy ramifications are obvious: Real-world results, not speculative fears, must dictate governance. America needs a technological revolution on our roadways if we really want to make big safety gains. Yet, the nation still lacks a national framework for AVs that could help advance that goal.

Lessons for AI policy today

This episode offers some lessons for AI policy debates today. New AI proposals continue to spread like wildfire across the nation, with over 1,500 currently pending. Not all of these proposals are regulatory in character, but many would impose complex, contradictory, and costly new mandates on AI innovators. The proposals undergoing legislative review cover an astonishing array of activities: frontier AI lab development practices, “AI bias” standards, algorithmic pricing restrictions, chatbot speech controls, “robot taxes,” and much more.

These measures share key similarities with policies in the AV space. For one, they assume that the status quo is worth maintaining relative to a more AI-forward future. Many of these bills address conduct that is already illegal under existing law, and many more propose policy interventions with little to no evidence that they will have a net positive impact.

Additionally, many AI proposals rely on state actors stepping into what ought to be federal regulatory territory. The question of federal-state balance of regulatory responsibilities has become a major bone of contention in AI policy debates, and plenty of special interests and regulatory advocates are out in force opposing any federal AI legislation that would limit state and local over-regulation of algorithmic and robotic development.

This situation now threatens to spiral out of control and delay the adoption of AI tools that could meaningfully address some of the most vexing problems facing society. We have written about an unfolding “AI Articles of Confederation” scenario, with state lawmakers coming at these issues from so many different angles that it results in parochial, protectionist policies that undermine national innovation, investment, consumer choice, and even public health and safety. The algorithmic systems of 2026 are not the agricultural markets of 1776.

Whereas rules surrounding which farmer could sell which goods and when had no national consequences in the 18th century, state AI bills of seemingly minor consequence may steer AI development away from the path that is best for the nation as a whole. To advance and yield benefits, our modern technological governance processes must appreciate the nature of systems that are inherently interconnected and interstate in character. A well-functioning national market requires some degree of policy harmony, or we risk undermining important goals involving public health and national development in markets of global significance.

The unfolding “AI Cold War” between the U.S. and China for geopolitical dominance in AI and advanced computation will come down to many social and economic factors: talent, investment, and new entrepreneurial entry and competition. But those factors are strongly influenced by law and, most specifically, they are encouraged by policies that offer clarity and embrace innovation opportunities. It would be sadly ironic and disastrous for our nation if China raced ahead on this front simply by creating a more consistent legal environment for AI innovation while America layers on dozens of different legal regimes and liability standards on our innovators.

The auto innovation experience is again instructive here. China was once viewed as far behind in electric vehicles and AVs, but now has caught up. BYD, Huawei, and Pony.ai are pushing the nation forward at a rapid clip. By 2030, more than 20% of all cars sold in China will be driverless. This progress will beget all sorts of additional progress. Lives will be saved. What’s more, the underlying technology will also be improved. As more AVs hit the road, their training becomes richer and more nuanced.

One of the key factors behind China’s success is the fact that the nation regards AVs as a “strategic industry,” one that demands national policies while allowing local experimentation that complements but does not conflict with those policies. Likewise, whereas it was once assumed that the U.S. had a sizable advantage over China in the AI race, that’s no longer the case. China has made clear that it will take a whole-of-nation approach to building out the AI infrastructure necessary to lead on this front.

National technological advantage can slip away when public policy becomes confusing and cumbersome, as more and more laws stack the deck against entrepreneurs. In his recent book, “Where Is My Flying Car?” J. Storrs Hall examines the post-WWII history of energy and transportation markets and identifies how anti-technological thinking and overregulation “clobbered the learning curve” for several important technologies, such as nuclear power and advanced aviation (including flying cars). Hall shows convincingly that, by the 1950s, the science and proof-of-concept were already there for flying cars and widespread, cheap nuclear energy for the masses. What was needed was more policy encouragement to enable market experimentation and investment.

Alas, we got the opposite: overregulation and excessive lawsuits combined to kill both of those dreams. Derailing the learning curve in those sectors also encouraged a mass talent migration away from science and engineering into law and the soft sciences. As Hall notes, it became more attractive to become a “taker” instead of a “maker” in those fields. Why try to develop cutting-edge technology in a sector where you are more likely to get rich suing someone who does? Incentives matter.

Coherent governance needed now

We have testified before Congress on AI governance issues and advocated for lawmakers to craft a national framework delineating the balance of federal and state governance responsibilities. Congress must first clarify that states have no regulatory authority over AI policy issues that demand a uniform regulatory response, such as matters of economic and national security. Absent a clear indication from Congress as to which questions are truly national, states will carry on passing more and more laws. Each new state law amounts to a new barrier to competition, innovation, and a robust interstate marketplace. Just as states have enacted laws that stand in the way of a coherent national privacy governance framework and driverless car innovation, the same can be expected in the AI domain if Congress fails to act.


An obvious place to start is to require that states forgo regulating AI development — training, evaluating, and launching leading models. Next, Congress ought to signal its support for state laws that complement the national policy of enhancing global AI leadership. This may include support for state AI laws, such as regulatory sandboxes and AI literacy initiatives. By supporting such laws and initiatives, Congress can ensure states and the federal government march toward shared national objectives.

But it can no longer be the case that single states can entirely opt out of a shared project of leading in AI. A balkanized approach to AI, robotics, and advanced computation will derail the most important technological revolutions of our time, leaving the public and the country less well off. 

Kevin Frazier is a senior fellow at the Abundance Institute, an adjunct research fellow at the Cato Institute, and the Director of the AI Innovation and Law Program at the University of Texas School of Law. Adam Thierer is a senior fellow at the R Street Institute in Washington, D.C.