Tesla’s decision to remove Autopilot and Autosteer as standard features in North America initially struck me as a step backward for safety, a cash grab for the Full Self Driving monthly subscription, and, by extension, an attempt to boost the TSLA stock price. That reaction was almost automatic. I’ve used and appreciated Autopilot and Autosteer in rented Teslas, liking that they smoothed the boring bits of driving while still letting me have fun in the twisty, winding bits. For years, Autopilot has been framed, implicitly and explicitly, as a safety feature, and many drivers believe it makes driving safer by reducing workload and smoothing control. I’ve often said that I’d prefer to be on a highway on Autopilot surrounded by Teslas on Autopilot than driving myself surrounded by human drivers. But that was an assumption, and one that deserved to be tested rather than defended.
The question that mattered was not whether Autopilot felt safer or whether drivers liked it, but whether it produced measurable reductions in crashes, injuries, or fatalities when evaluated using independent, auditable data at scale. Traffic safety is an area where intuition is frequently wrong, because the events that matter most are rare. Fatal crashes in the United States, where transparent data collection and access have until the past year been closer to oversharing than not, occur at roughly one per 100 million miles driven. Serious injury crashes are more common, but still infrequent on a per-mile basis. When outcomes are that rare, small datasets produce misleading signals with ease. This is where the law of small numbers becomes central, not as a rhetorical device but as a constraint on what can be known with confidence.
The law of small numbers describes the tendency to draw strong conclusions from small samples that are dominated by randomness rather than signal. In traffic safety, this shows up constantly. A system can go tens of millions of miles without a fatality and appear dramatically safer than average, only for the apparent advantage to evaporate as exposure increases. Early trends are unstable, confidence intervals are wide, and selective framing can make almost any outcome look impressive. This applies just as much to advanced driver assistance systems as it does to fully autonomous driving claims. The rarer the outcome, the larger the dataset required to make credible claims.
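To make that concrete, here is a quick back-of-the-envelope Poisson sketch of my own, assuming only the rough US baseline of one fatal crash per 100 million miles cited above. It computes how likely a perfectly average driver or fleet is to rack up a fatality-free record by pure chance:

```python
import math

# Rough US baseline cited above: ~1 fatal crash per 100 million miles.
BASELINE_RATE = 1 / 100_000_000  # fatal crashes per mile

for miles in (10_000_000, 50_000_000, 100_000_000, 500_000_000, 1_000_000_000):
    expected = BASELINE_RATE * miles
    # Poisson probability of seeing zero fatal crashes despite average safety
    p_zero = math.exp(-expected)
    print(f"{miles/1e6:>7.0f}M miles: expect {expected:.2f} fatal crashes, "
          f"P(zero by luck) = {p_zero:.1%}")
```

Even at 50 million miles, a fleet of exactly average safety has better-than-even odds of zero fatal crashes, which is why a short fatality-free streak tells us almost nothing.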
I recently explored this question in a CleanTechnica article titled “Why Autonomous Vehicles Need Billions of Miles Before We Can Trust the Trend Lines,” where I considered the law of small numbers and its relationship to autonomous driving safety. I showed that even datasets like Waymo’s 96 million rider-only miles are too small to support strong conclusions. Serious crashes are rare events, with fatalities occurring at roughly one per 100 million miles, so early trends can easily reflect randomness rather than underlying safety performance. To reach confidence that autonomous systems are safer than human drivers across a wide range of environments, datasets need to grow into the billions of miles across diverse cities, weather, traffic mixes, and road conditions. Without that scale, statistical noise overwhelms the signal and overinterpretation is common.
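The same arithmetic shows why the threshold sits in the billions of miles. Here is another minimal sketch, using the standard exact Poisson bound, of the best possible case: even if zero fatal crashes were observed, the 95% upper bound on the true fatality rate only tightens meaningfully as mileage climbs toward a billion and beyond:

```python
import math

# One-sided 95% upper bound on a Poisson mean when zero events are observed:
# solve exp(-mu) = 0.05, giving mu = -ln(0.05) ≈ 3.0 expected events.
UPPER_EVENTS = -math.log(0.05)

for miles in (96_000_000, 1_000_000_000, 5_000_000_000):
    upper_rate = UPPER_EVENTS / miles * 100_000_000  # per 100 million miles
    print(f"{miles/1e6:>6,.0f}M miles, zero fatal crashes observed: "
          f"95% upper bound ≈ {upper_rate:.2f} fatal crashes per 100M miles")
```

At 96 million fatality-free miles, the data are still statistically consistent with a true rate roughly three times worse than the roughly one-per-100-million-mile human average; only around a billion miles and beyond does the upper bound fall clearly below it.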
With that framing in mind, I went looking for independent, large-numbers evidence that Autopilot or Autosteer reduces crashes or injuries. Tesla publishes its own safety statistics, comparing miles between crashes with Autopilot engaged versus without it and versus national averages. The problem is not that these numbers are fabricated, but that they are not independent and they lack adequate controls. Tesla alone defines what counts as a crash, how miles are categorized, and how engagement is measured. The comparisons are not normalized for road type, driver behavior, or exposure context. Highway miles dominate Autopilot use, and highways are already much safer per mile than urban and suburban roads. That alone can explain much of the apparent benefit. Large numbers alone are not enough if the data come from a single party with no external audit and no transparent denominator.
Government data offers independence, but not scale in the way that matters. The US National Highway Traffic Safety Administration requires reporting of certain crashes involving Level 2 driver assistance systems. These datasets include hundreds of crashes, not hundreds of thousands, and they do not include exposure data such as miles driven with the system engaged. Without a denominator, rates cannot be calculated. The presence of serious crashes while Autopilot is engaged demonstrates that the system is not fail-safe, but it does not establish whether it reduces or increases risk overall. The numbers are simply too small and too incomplete to support strong conclusions in either direction.
Insurance claims data is where traffic safety evidence becomes robust, because it covers millions of insured vehicle years across diverse drivers, geographies, and conditions. This is the domain of the Insurance Institute for Highway Safety and its research arm, the Highway Loss Data Institute. These organizations have evaluated many active safety technologies over time, comparing claim frequency and severity across large populations. When a system delivers a real safety benefit, it shows up here. Automatic emergency braking is the clearest example. Across manufacturers and model years, rear-end crash rates drop by around 50% when AEB is present, and rear-end injury crashes drop by a similar margin. These results have been replicated repeatedly and hold up under scrutiny because the sample sizes are large and the intervention is narrow and well defined.
When partial automation systems like Autopilot are examined through the same lens, the signal largely disappears. Insurance data does not show a clear reduction in overall crash claim frequency attributable to lane centering or partial automation. Injury claims are not meaningfully reduced. This is not because the data is biased against Tesla or because insurers are missing something obvious, but because partial automation creates a complex interaction between human and machine. Engagement varies, supervision quality varies, and behavioral adaptation plays a role. Drivers may pay less attention, may engage the system in marginal conditions, or may rely on it in ways that dilute any theoretical benefit. From a statistical perspective, whatever benefits may exist are not strong enough or consistent enough to rise above the noise in large population datasets.
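To see why a large, narrow effect like AEB’s shows up so clearly in claims data while a diffuse one does not, a rough power calculation helps. The sketch below assumes a purely hypothetical baseline of five crash claims per 100 insured vehicle years (a placeholder for illustration, not an IIHS or HLDI figure) and uses the standard two-proportion sample size approximation to estimate how much exposure is needed to detect reductions of different sizes:

```python
import math

def vehicle_years_per_group(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate exposure needed per group to detect a drop in annual
    claim frequency from p1 to p2 (two-proportion z-test, 5% significance,
    80% power)."""
    p_bar = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical baseline for illustration only: 5 crash claims per 100 vehicle years
baseline = 0.05
for reduction in (0.50, 0.10, 0.05):
    n = vehicle_years_per_group(baseline, baseline * (1 - reduction))
    print(f"{reduction:.0%} reduction: ~{n:,.0f} insured vehicle years per group")
```

On those illustrative assumptions, a 50% reduction is detectable with roughly a thousand vehicle years per group, a 10% reduction needs tens of thousands, and a 5% reduction needs well over a hundred thousand. A small or inconsistent benefit simply stays buried in the noise unless the dataset is enormous, which is exactly the pattern the insurance data show for partial automation.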
If Autopilot and Autosteer do not have independently demonstrated safety benefits at scale, then the next question is what safety systems Tesla retains as standard equipment. This matters because Tesla did not strip its vehicles of active safety. Automatic emergency braking remains standard. Forward collision warning remains standard. Basic lane departure avoidance remains standard. These are not branding features, but intervention systems that operate in specific, high-risk scenarios and have been shown to reduce crashes and injuries in large-numbers studies.
Automatic emergency braking stands out because of its clarity. It intervenes only when a collision is imminent, it does not require sustained driver supervision, and it does not encourage drivers to cede responsibility during normal driving. The causal mechanism is simple: when a rear-end collision is about to occur, the system applies the brakes faster than most humans can react. Because rear-end crashes are common, the datasets are large, and the effect size is unmistakable. Forward collision warning complements this by alerting drivers earlier, reducing reaction time even when AEB does not fully engage. Lane departure avoidance, in its basic form, applies steering input only when the vehicle is about to leave its lane unintentionally. It does not center the car or manage curves continuously. Its benefits are more modest, often in the range of 10% to 25% reductions in run-off-road or lane-departure crashes, but they are real and they appear in population-level analyses.
This combination of systems aligns closely with what the evidence supports. They are boring, targeted, and limited in scope. They intervene briefly and decisively, rather than offering ongoing automation that blurs the line between driver and system responsibility. From a safety science perspective, they remove specific human failure modes rather than reshaping human behavior in complex ways.
Revisiting Autopilot and Autosteer through this lens reframes them as convenience features rather than safety features. They reduce workload on long highway drives, smooth steering and speed control, and can make driving less tiring. None of that is trivial, but convenience is not the same as safety, and the data do not support the claim that these systems reduce crashes or injuries at scale. The absence of evidence is not proof of harm, but it does matter when evaluating the impact of removing a feature. Taking away an unproven system does not remove a demonstrated safety benefit.
This is where my initial assumption fell apart. I expected that removing Autopilot and Autosteer would make Teslas less safe, but the evidence does not support that conclusion. The systems that deliver clear, auditable safety benefits remain in place. The system that was removed lacks independent proof of benefit and is subject to exactly the kind of overinterpretation that the law of small numbers warns against. Early trends, selective datasets, and intuitive narratives can be persuasive, but they are not a substitute for large-scale evidence. Personally, I’ll be disappointed not to have these features if the occasional rental car turns out to be a Tesla, but that’s clearly a First World problem.
There is a broader lesson here for how safety technology is evaluated and communicated. Systems that produce large, measurable benefits tend to be narrow, specific, and unglamorous. Systems that promise broad capability and intelligence tend to generate compelling stories long before they generate robust evidence. Regulators and consumers alike should be wary of confusing the two. Mandating or prioritizing features should follow demonstrated outcomes, not perceived sophistication.
After doing the work, the conclusion is not that Tesla has abandoned safety, but that it has stripped away a feature whose safety value has not been independently demonstrated, while retaining the systems that actually reduce crashes and injuries in measurable ways. That result surprised me. It ran counter to my initial belief. But in traffic safety, surprise is often a sign that intuition has been corrected by data. The law of small numbers explains why this debate persists and why it will likely continue until claims about partial automation are supported by evidence on the same scale and quality as the systems they are often compared against.
This doesn’t, of course, mean that the other half of my perspective was incorrect. Tesla is clearly trying to drive a lot more owners to pay the monthly $100 for Full Self Driving in order to boost its stock price. But the roads won’t be statistically less safe because of it.
