Imagine a new medical treatment that could prevent 85% of serious crash injuries and save 34,000 American lives every year. Now imagine that instead of fast-tracking this treatment, we subjected it to endless regulatory hurdles, allowed widespread public skepticism to go unchallenged, and generally treated it with a level of suspicion that we reserve for pyramid schemes and door-to-door vacuum salesmen.
This isn't a hypothetical scenario. It's exactly what's happening with self-driving cars.
The data is in, and it's staggering. Waymo's autonomous vehicles show an 85% reduction in crashes with suspected serious or worse injuries, a 96% reduction in intersection crashes, and a 92% reduction in pedestrian injuries compared to human drivers in the same areas.
Let me repeat that: an 85% reduction in serious injuries. If we applied this to the entire US, we would save approximately 34,800 lives every year. That's more than ten times the number of people who died in the 9/11 attacks—every single year.
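For the skeptical, the arithmetic behind that figure is simple enough to sketch. The snippet below is a back-of-envelope check, not part of the Waymo study; the roughly 41,000-death baseline is my assumption based on recent NHTSA annual fatality estimates, and it naively applies the serious-injury-crash reduction to all fatalities.

```python
# Back-of-envelope estimate only. Assumes ~41,000 annual US road deaths
# (recent NHTSA estimates hover around 40,000-43,000) and naively applies
# Waymo's reported 85% reduction in serious-or-worse injury crashes to
# every fatality nationwide.

annual_us_road_deaths = 41_000   # assumed baseline, not a figure from the Waymo study
serious_crash_reduction = 0.85   # Waymo-reported reduction in serious-or-worse injury crashes

lives_saved_per_year = annual_us_road_deaths * serious_crash_reduction
print(f"Rough lives saved per year: {lives_saved_per_year:,.0f}")  # ≈ 34,850
```

Treating an injury-crash reduction as a uniform fatality reduction is a strong simplification, of course; the point is the order of magnitude, not the third significant digit.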
And yet, somehow, this isn't front-page news. We're not seeing congressional hearings about how to accelerate deployment. The President isn't giving speeches about our moral obligation to save these lives. Instead, we're stuck in a cycle of excessive caution, regulatory confusion, and what I can only describe as a profound form of AI prejudice.
When a self-driving car is involved in an accident, it makes headlines. When thousands of human-driven cars crash on the same day, it's just Tuesday. This is the availability heuristic in action—we overweight vivid, unusual events and underweight common ones, even if the common ones are collectively far more harmful.
Consider the infamous case of the Uber self-driving test vehicle that struck and killed a pedestrian in 2018. This single incident set back public acceptance of autonomous vehicles by years. Meanwhile, human drivers killed 6,283 pedestrians that same year, and hardly anyone noticed.
This is what I call the "One Crash Problem." A single high-profile accident involving new technology can trigger a disproportionate response that ignores the baseline carnage we've simply grown accustomed to. It's as if we've decided that deaths caused by human error are natural and acceptable, while deaths caused by algorithmic error are moral abominations.
As Sam Bowman (@s8mb) put it: "If Waymo's serious injury reductions held across all US road deaths, a Waymo-only America would have 34,800 fewer road fatalities every year."
This cognitive bias is costing lives—tens of thousands of them. And it's not just the general public; it's regulators, politicians, and even some safety advocates who should know better.
Another cognitive error at play is the Nirvana fallacy—the idea that if a solution isn't perfect, it's not worth pursuing. We demand that self-driving cars be virtually flawless before widespread adoption, while accepting the horrific imperfection of human drivers as an immutable fact of life.
The reality is that self-driving cars don't need to be perfect; they just need to be better than humans. And the evidence suggests they already are—significantly so. Waymo vehicles are demonstrating what can only be described as a masterclass in driving safety, with performance that far exceeds human capabilities in almost every scenario.
Yet we continue to apply a double standard. When a human driver makes a fatal error, we call it an accident. When an autonomous system makes a similar error, we question the entire enterprise. This asymmetric scrutiny isn't rational risk assessment; it's prejudice against artificial intelligence.
And this prejudice has real consequences. Every year of delay in widespread adoption means thousands more preventable deaths. It means families shattered, lives cut short, and a level of suffering that we have within our power to reduce dramatically.
So what have we actually done about this life-saving technology? Have we created a streamlined regulatory environment to accelerate its development and deployment? Have we established a rational framework that balances innovation with appropriate safety guardrails?
Of course not. That would be too sensible.
Instead, we've created a regulatory labyrinth so byzantine that Daedalus himself would get lost in it. We've constructed a system seemingly designed to ensure that autonomous vehicles never reach widespread deployment—or at least not until the heat death of the universe.
AV companies must file separate reports to NHTSA, state DMVs, and local authorities. California alone requires "disengagement reports" documenting every instance a human takes control—companies collectively reported over 20,000 disengagements in 2023, most for minor, non-safety issues like a vehicle hesitating at an intersection.
NHTSA has a Standing General Order requiring reporting of all crashes involving AVs, regardless of severity. Many reports involve incidents like brief software reboots that pose no safety risk whatsoever.
This documentation burden creates significant overhead costs—an estimated 5-10% of operational expenses—and diverts engineering resources from actual safety improvements. It's as if we asked pharmaceutical companies to document every time a pill manufacturing machine paused for a millisecond, regardless of whether any pills were affected.
But wait, it gets better (by which I mean worse).
The approval process is so fragmented it makes the Holy Roman Empire look streamlined. AVs must navigate overlapping federal, state, and local regimes, and the states themselves diverge radically: Arizona imposes minimal restrictions, California demands extensive testing, and some states have no AV-specific legislation at all.
AV companies report spending 15-18 months on average navigating approvals across jurisdictions before deployment, with legal and regulatory compliance teams sometimes larger than core engineering teams. Imagine if we required this level of bureaucracy for human drivers—we'd still be riding horses.
The result of this regulatory morass? Companies that try to navigate it often end up crushed under its weight.
Cruise: In October 2023, California DMV suspended Cruise's permits after an incident where a pedestrian was dragged 20 feet. GM subsequently cut funding by 70% and halted all operations. The regulatory complexity and extremely high compliance costs (~$500M annually) made the program financially unsustainable. The entire 950-vehicle fleet was recalled over software safety concerns.
Uber: After a fatal collision with pedestrian Elaine Herzberg in March 2018, Uber faced immediate suspension of testing permits in multiple states. The program was eventually sold to Aurora Innovation in December 2020 after regulatory hurdles made internal development prohibitively expensive. An estimated $1B+ in development costs was written off.
Both programs collapsed not just from single incidents but from the cumulative burden of fragmented regulations, extensive reporting requirements, and inability to navigate the complex approval landscape cost-effectively.
Now, I'm not suggesting we should have no regulations. Safety matters. But our current approach isn't optimizing for safety—it's optimizing for bureaucratic self-preservation and risk aversion.
Consider this: if we applied the same standards to human drivers that we apply to autonomous vehicles, no one would ever get a license. Imagine if every time a student driver made a minor mistake during practice—like braking too hard or hesitating at a stop sign—it triggered a mandatory report to three different government agencies. Imagine if every fender bender required a full investigation by the NTSB.
The system would collapse under its own weight. Yet this is precisely what we're doing with autonomous vehicles.
And the cost isn't just measured in regulatory burden or corporate expenses. It's measured in human lives—34,000 of them every year. Lives that could be saved if we had a rational regulatory framework that balanced innovation with appropriate safety measures.
Instead, we have a system that treats every minor AV incident as a catastrophe while accepting 40,000+ annual traffic fatalities from human drivers as an unavoidable fact of life. This isn't just irrational—it's morally indefensible.
I love freedom too much to advocate for mandatory self-driving cars, though the utilitarian calculus will eventually become clear: autonomous driving saves even more lives than seatbelts, and seatbelts are mandatory. Every human who steps behind the wheel sleep-deprived, drunk, or simply lacking skill is needlessly and recklessly endangering the lives of everyone around them.
But we can start with less drastic measures. A modest set of proposals: judge autonomous vehicles by the same standard we apply to human drivers (better than the status quo, not perfect), consolidate the fragmented federal, state, and local reporting requirements into a single streamlined regime, and correct the market failures that leave this life-saving technology underfunded.
None of these proposals involve forcing anyone to give up their steering wheel. They simply create conditions that would accelerate the development and adoption of life-saving technology that is currently being held back by irrational fears and regulatory inertia.
Let's be clear about what's at stake. If we had fully deployed autonomous vehicle technology nationwide tomorrow, we could potentially save 34,000 lives next year. That's roughly equivalent to eliminating all gun homicides in America, nearly twice over.
Every year we delay is another 34,000 lives lost. Every regulatory hurdle, every unfounded fear campaign, every politician who prioritizes caution over action has blood on their hands.
This isn't hyperbole; it's simple arithmetic. If a technology can reduce fatalities by 85%, and we choose not to implement it, we are choosing to accept those preventable deaths. We are saying, in effect, that our discomfort with AI is worth more than those human lives.
I find this morally indefensible. We would never accept this calculus in any other context. If a pharmaceutical company developed a drug that could prevent 85% of cancer deaths, but we held it back because people were uncomfortable with the idea of synthetic molecules, we would rightly view this as monstrous.
Yet this is precisely what we're doing with autonomous vehicles. We're allowing AI prejudice—a visceral, irrational discomfort with machines making decisions—to override clear empirical evidence about safety benefits.
This prejudice is no different from other prejudices throughout history. It's based on fear rather than facts, on anecdotes rather than data, on an instinctive distrust of the unfamiliar rather than a rational assessment of risks and benefits.
And like all prejudices, it's killing people.
The fact that self-driving technology is deeply underfunded and faces unnecessary regulatory barriers despite its enormous positive externalities is a travesty of modern policymaking. It represents a collective failure to see past our prejudices, a triumph of fear over evidence, and a moral abdication of our responsibility to save lives when we have the means to do so.
We need to recognize our AI prejudice for what it is—an irrational bias that's costing thousands of lives. We need to apply the same standards to autonomous vehicles that we apply to human drivers, acknowledging that "better than humans" is more than good enough when lives are at stake. And we need to implement policies that correct the market failures preventing optimal investment in this life-saving technology.
The data is clear. The moral calculus is straightforward. The only question is whether we have the collective wisdom to overcome our prejudices and embrace a technology that could save tens of thousands of lives every year.
Or will we continue to let people die because we're more comfortable with human error than machine learning?
For more information, see the full Waymo safety study and their blog post on the findings.
Written May 2025