There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.
The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.
Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
I think it's weird to characterize it as legitimate and then say "Go Tesla, convince me otherwise", as if the same audience would ever be reached by Tesla, or as if people would care to do their due diligence.
That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here is on Tesla".
https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy
More broadly, I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because they're directionally correct, e.g. "the evidence for WMDs in Iraq is flimsy, but that doesn't matter because Hussein was still a bad dictator".
You. are. welcome.
This is a statement of fact but based on this assumption:
> low-speed contact events that would often never show up as police-reported crashes for human drivers
Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this history of total lack of transparency and outright lies, it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out, you can bet your hat they're worse in reality.
If strong opposition to that kind of evil makes me deranged, count me in.
1: https://hsph.harvard.edu/news/usaid-shutdown-has-led-to-hund...
It is consensus-seeking derangement at best.
Yeah, no. I thought so as well initially but then I saw the video. The guy throws out his arm straight out multiple times.
Musk glazing from Electrek was very significant 2002-2024 at least
Not OK.
That's... entirely expected of someone that has memories and a personality? It's like showing up to /r/starwars and telling some random person "you sure like star wars a lot"
It would be ironic if people claiming that Tesla's Autopilot numbers are too optimistic (since Autopilot is used on highways only) failed to notice that, by the same logic, city-only numbers for FSD would be pessimistic, statistics-wise.
No human driver would report this kind of incident. A human driver would probably forget it after lunch.
> the fleet has traveled approximately 500,000 miles
Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.
That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.
One crash in this context is going to completely blow out their statistics, so it's kind of dumb to even talk about the statistics today. The real takeaway is that the robotaxis don't really exist: they're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage. That won't happen until they've built something that actually works, and that may never happen.
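The parent's back-of-envelope fleet estimate can be sketched in a few lines. Note that the 10 mph average speed, 10 hours/day of operation, and the ~180-day window are the commenter's assumptions, not published figures:

```python
# Rough fleet-size estimate from the reported ~500,000 cumulative miles.
miles = 500_000
avg_speed_mph = 10    # assumed average city speed
hours_per_day = 10    # assumed daily operating hours
days = 180            # roughly six months (the July-November window)

car_days = miles / avg_speed_mph / hours_per_day  # total vehicle-days of driving
fleet_size = car_days / days                      # implied number of cars

print(f"{car_days:.0f} car-days, about {fleet_size:.0f} cars over 6 months")
# -> 5000 car-days, about 28 cars over 6 months
```

Halve the assumed speed or double the daily hours and the implied fleet changes by 2x either way, which is the point: whatever the exact inputs, this is a very small operation.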
First, I think you're right - these are (thankfully) rare events, and because of this, the accident count is Poisson distributed. At this low a rate, it's really hard to know what the true average is, so we really do need more time to know how well or badly the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. Still, we do have the statistical models to work with these rare events.
But then I think about your comment about it only being 30 cars operating over 6 months. Which makes sense, except that it's not like having a fleet of individual drivers. These robotaxis should all be running the same software, so statistically it's more like one person driving 500,000 miles. That is a lot of miles! I've been driving for over 30 years and I don't think I've driven that many.
If we are comparing the Tesla accident rate to people in a consistent manner, it’s a valid comparison.
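If the crashes really are Poisson, the question "how surprising is this count?" has an exact answer. A minimal sketch, assuming purely for illustration the ~1 police-reported crash per 500,000 miles human benchmark another commenter mentions downthread (the benchmark itself is the contested part):

```python
import math

def poisson_tail(k, lam):
    """Exact P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

miles = 500_000
observed_crashes = 9
human_rate_per_mile = 1 / 500_000       # hypothetical human benchmark
expected = human_rate_per_mile * miles  # = 1 expected crash at human level

# How surprising are 9 crashes if the true rate were human-level?
p = poisson_tail(observed_crashes, expected)
print(f"P(>= {observed_crashes} crashes at human-level rate) = {p:.1e}")
```

Under that assumed benchmark the tail probability comes out around 1e-6, i.e. nine crashes would be wildly unlikely for a human-level system - but the conclusion is only as good as the benchmark and the crash definitions you feed in, which is exactly what's disputed upthread.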
At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.
So if the crash statistics are insufficient, then we cannot trust the deep learning.
One crash in 500,000 miles would merely put them on par with a human driver.
One crash every 50,000 miles would be more like having my sister behind the wheel.
I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!
If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: in her state it would have been revoked and suspended after the first two or three accidents, and then thrown in JAIL as a "scofflaw" if it continued driving.
> One crash every 50,000 miles would be more like having my sister behind the wheel.
I'm not sure if that leads to the conclusion that you want it to.
I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.
These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.
Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.
The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.
Driver fatigue is real, no matter how much coffee you drink.
Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at my destination with my Model 3 compared to when I use the regular ICE with bad lane-assist UX.
EDIT: the fact that people who look at their phones will still look at their phones with lane-keep active means it only makes things a little safer for them and everyone else, really.
If someone is driving dangerously despite these safety features, they should not have a license to operate a motor vehicle on public roads.
These features are still valuable even to safe drivers simply because safe drivers are human and will still make mistakes.
There isn't enough money and most importantly margin in the car industry to warrant such a valuation, so he has to pivot away from cars into the next thing.
Just to give an example of how risky it is for Tesla to be a car company:
In 2025, Toyota has had 3.5 times Tesla's revenue, 8 times the net income, and twice the margin.
And yet Toyota's market cap is about one-sixth of Tesla's.
It would take Tesla a gargantuan effort to match Toyota's numbers and margins, and even if it did... it would be a disaster for Tesla's stock.
Hell, Tesla makes much less money than Mercedes-Benz, and with a smaller margin.
Mercedes has 60% more revenue and twice the net income. Yet Tesla is valued at around 40 times Mercedes-Benz.
Tesla *must* pivot away from cars and make it a side business or sooner or later that stuff is crashing, and it will crash fast and hard.
Musk understands that, which is why he's focusing on robotaxis and robots. It's the only way to sell Tesla to naive investors.
Look how much longer Waymo has been at this, and how much more experience it has, and there are still multiple issues a week popping up online - and that's with running in a very small, well-mapped and planned-out area. Musk wants robotaxis globally; that's just not happening, not any time soon, and certainly not within the 10-year limit for him to get his trillion-dollar bonus from Tesla, which is the only reason he's pushing so hard to make it happen.
I see nothing wrong here, correction back to reality.
I understand why people adored him blindly in the early days, but liking him now, after it's clear what sort of person he is and always will be, is the same as liking Trump. Many people still do it, but it's hardly a defensible position unless one is already invested in his empire.
Comparing stats from this many miles to just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it from the monthly numbers varying between 1 and 4.
This is a bad comparison with not enough data. Like my household average for the number of teeth per person is ~25% higher than world average! (Includes one baby)
Edit: feel free to actually respond to the claim rather than downvote
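The "any noise changes the ratio a lot" point can be made concrete with the usual sqrt(N) Poisson error bar. The 9 crashes and ~500,000 miles are the figures under discussion; reporting per 100k miles is just a convenient scale:

```python
import math

crashes = 9
miles = 500_000
per = 100_000  # report rates per 100k miles

rate = crashes / miles * per              # point estimate of the crash rate
sigma = math.sqrt(crashes) / miles * per  # 1-sigma Poisson counting noise

print(f"{rate:.1f} ± {sigma:.1f} crashes per 100k miles "
      f"({sigma / rate:.0%} relative error)")
# -> 1.8 ± 0.6 crashes per 100k miles (33% relative error)
```

A one-third relative error on the numerator alone means any headline ratio against the human baseline (which is measured over ~10^12 miles and is comparatively noise-free) can easily swing by tens of percent from month to month, consistent with the monthly counts bouncing between 1 and 4.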
Still damning that the data is so bad even then. Good data wouldn't tell us anything, but the bad data likely means the AI is bad, unless they were spectacularly unlucky. And since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.
Sorry that does not compute.
It tells you exactly whether the AI is any good: despite the fact that there were safety drivers on board, 9 crashes happened, which implies that even more crashes would have happened without safety drivers. Over 500,000 miles, that's pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention..
But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.
I think we're on to something. You imply that "good" here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.
In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.
The tech needs to be at least 100x more error free vs humans. It cannot be on par with human error rate.
As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.
Tesla's approach has been more risk averse and conservative than others. It has compiled data and trained its models on billions of miles of real world telemetry from its own fleet (all of which are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and only in two areas.
I defend Tesla's tech, because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors' current offerings.
I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable. I think the end-to-end neural net with human-like sensors is more sensible than other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.
Unlike many commenters on this platform I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company, and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors. Its ethics are sound.
I expect self-driving cars to be launched unsupervised on public roads only once they are an order of magnitude safer than human drivers. Or not launched at all.
One can pay thousands of people to babysit these cars with their hands on the wheel for many years until that threshold is reached, and if no one is ready to pay for that effort then we'll just drive ourselves until the end of time.
Do me a favor and take Musk and get on a plane with just a bunch of cameras instead of sensors like radar, airspeed sensor, altimeter, GPS, ILS, etc.
No need for those crutches. Do autopiloting like a real man!
Tesla cars have speed sensors as well as GPS. (Altimeter and ILS not being relevant). I agree with Musk's claim they don't need LIDAR because human drivers don't; it's self-evidently true. But I think they _should_ have it because they can then be safer than humans; why settle for our current accident and death rate?
> The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm. I could provide freeze-frame images of Obama and Hillary Clinton doing similar "salutes" and claim this makes them "far right fascists" but I would never insult the reader's intelligence by doing so.
For Obama and Clinton you can find freeze frames showing their arm in a similar position, but when you look at the full video it was in the middle of something that does not match a Nazi salute. Here are several examples: https://x.com/ExposingNV/status/1881647306724049116?t=CGKtg0...
If you had a camera in my kitchen you could find similar freeze frames of me whenever I make a sausage/egg/cheese on an English muffin breakfast sandwich, because the ramekin I use to shape the egg patty is on the top shelf.
With Musk the full video shows it matches from when his arm starts moving to the end of the gesture. See https://x.com/BartoSitek/status/1882081868423860315?t=8F0hL-...
"Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?
By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".
I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.
For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but they are overall of lower severity, leading to an overall improvement of safety. Judging by TFA alone I can't tell this isn't the case here. I can imagine a robotaxi having a different distribution of frequency and severity of accidents than a human driver.
> roundabouts in cities (in Europe anyway) tend to increase the number of crashes
Not in France, according to data. It depends on the speed limit, but they decrease accidents by 34% overall, and by almost 20% when the speed limit is 30 or 50 km/h.
If a human had eyes on every angle of their car and they still did that it would represent a lapse in focus or control -- humans don't have the same advantages here.
With that said: I would be more concerned about what it represents when my sensor-covered autonomous car makes an error like that; it would make me presume there was an error in detection - a big problem.