182 points | by breve | 3 hours ago | 13 comments
  • z7, 2 hours ago
    The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.

    There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.

    The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.

    Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.

    • fabian2k, 2 hours ago
      I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.
      • LunicLynx, 16 minutes ago
        While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.

        I think it's weird to characterize it as legitimate and then say "Go on Tesla, convince me otherwise", as if the same audience would ever be reached by Tesla, or as if people would care to do their due diligence.

      • gruez, 5 minutes ago
        >I think it's fair to put the burden of proof here on Tesla.

        That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".

        https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

        More broadly I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because it's directionally correct, eg. "the evidence WMDs in Iraq is flimsy but that doesn't matter because Hussein was still a bad dictator".

    • xnx, 2 hours ago
      Tesla could share real/complete data at any time. The fact that they don't is likely an indicator that the data does not look good.
      • LunicLynx, 14 minutes ago
        You can do this with every topic. XYZ does not share this, so IT MUST BE BAD.
        • happymellon, 11 minutes ago
          And it usually is.
          • LunicLynx, 4 minutes ago
            happymellon seems to be a pseudonym. I bet knowing who he really is can't be good.

            You. are. welcome.

    • sigmoid10, 2 hours ago
      I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.
    • silon42, 2 hours ago
      "insurance-reported" or "damage/repair-needed" would be a better criterion for problematic events than "police-reported".
    • buran77, an hour ago
      > The comparison isn't really like-for-like.

      This is a statement of fact but based on this assumption:

      > low-speed contact events that would often never show up as police-reported crashes for human drivers

      Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this past history of total lack of transparency and outright lies it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out you can bet your hat they're worse in reality.

    • cbeach, an hour ago
      [flagged]
      • touwer, an hour ago
        He's probably smart then
      • adriand, an hour ago
        I would call strong opposition to Musk a democratic responsibility, not a derangement. We are talking about a guy with a fondness for the far right and throwing Nazi salutes, and whose destruction of USAID had, by November 2025, resulted in “hundreds of thousands of deaths”. [1] Those, of course, are just a couple of examples.

        If strong opposition to that kind of evil makes me deranged, count me in.

        1: https://hsph.harvard.edu/news/usaid-shutdown-has-led-to-hund...

        • dash2, 40 minutes ago
          Sure, but that is not a defence against the claim that his journalistic coverage is biased.
        • thegreatpeter, 41 minutes ago
          Did you even read the article you sent? It’s all based on estimates.

          It is consensus seeking derangement at best

        • gulfofamerica, an hour ago
          [dead]
        • cbeach, an hour ago
          [flagged]
          • ginko, an hour ago
            >The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm.

            Yeah, no. I thought so as well initially but then I saw the video. The guy throws out his arm straight out multiple times.

      • thegreatpeter, 44 minutes ago
        I noticed the same thing. Not sure why you're being downvoted. The whole publication has turned sour recently.
        • JumpinJack_Cash, 39 minutes ago
          After every glazing there is a sourness.

          Musk glazing from Electrek was very significant 2002-2024 at least

      • kakacik, an hour ago
        You sure like defending him a lot, your other already removed comments I've seen were much worse and borderline nazi apologism and whataboutism.

        Not OK.

        • gruez, 13 minutes ago
          >You sure like defending him a lot, [...]

          That's... entirely expected of someone that has memories and a personality? It's like showing up to /r/starwars and telling some random person "you sure like star wars a lot"

  • _ph_, 8 minutes ago
    As far as I understand, those Robotaxis are only available within Austin so far. That is slow city traffic, and the number of miles per ride is very small. However, the numbers for human drivers seem to take all kinds of roads into account. Of course, highways are the roads where you drive most of the distance at the least risk of an accident. Has this been taken into account in the evaluation?

    It would be ironic if the people claiming the Tesla numbers for Autopilot are too optimistic, because it is used on highways only, at the same time didn't notice that city-only numbers for FSD would be pessimistic, statistics-wise.

    • raincole, a minute ago
      It does look extremely pessimistic. For example, one of the 'incidents' is that they hit a curb in a parking lot at 6 mph.

      No human driver would report this kind of incident. A human driver would probably forget it after lunch.

  • SilverBirch, 3 hours ago
    To be honest I think the true story here is:

    > the fleet has traveled approximately 500,000 miles

    Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.

    That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

    One crash in this context is going to just completely blow out their statistics. So it's kind of dumb to even talk about the statistics today. The real take away is that the Robotaxis don't really exist, they're in an experimental phase and we're not going to get real statistics until they're doing 1,000x that mileage, and that won't happen until they've built something that actually works and that may never happen.
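    The back-of-envelope fleet estimate above checks out; here it is as a quick sketch (the 10 mph average speed and 10 operating hours per day are the comment's assumptions, not published Tesla figures):

```python
# Rough fleet-size estimate from cumulative mileage. The average speed
# and daily operating hours are assumptions from the comment, not data.
total_miles = 500_000      # cumulative robotaxi miles (from the article)
avg_speed_mph = 10         # assumed average city speed
hours_per_day = 10         # assumed operating hours per car per day
days = 6 * 30              # roughly July through November-ish

car_hours = total_miles / avg_speed_mph   # 50,000 hours of driving
car_days = car_hours / hours_per_day      # 5,000 car-days
fleet_size = car_days / days              # ~28 cars
print(round(fleet_size))                  # -> 28
```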

    • mbreese, 7 minutes ago
      The more I think about your comment on statistics, the more I change my mind.

      At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.

      But then I think about your comment about it only being 30 cars operating over 6 months. Which makes sense, except for the fact that it's not like having a fleet of individual drivers. These robotaxis should all be running the same software, so it's statistically more like one person driving 500,000 miles. That is a lot of miles! I've been driving for over 30 years and I don't think I've driven that many miles.

      If we are comparing the Tesla accident rate to people in a consistent manner, it’s a valid comparison.
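      To put a rough number on the rare-event point: treating crashes as Poisson, one can compute how surprising 9 crashes would be if the true rate matched a human baseline. The baseline of ~3 expected crashes over the same miles is an assumption taken from the article's "3x" framing, not verified data:

```python
import math

def poisson_tail(k_obs: int, lam: float) -> float:
    """P(X >= k_obs) for X ~ Poisson(lam): complement of the lower tail."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k_obs))
    return 1.0 - cdf

# 9 observed crashes vs. an assumed human-baseline expectation of ~3
# over the same ~500,000 miles (the article's "3x worse" framing).
p = poisson_tail(9, 3.0)
print(f"P(X >= 9 | lambda = 3) = {p:.4f}")  # -> 0.0038
```

So even with only nine events, "spectacularly unlucky" would mean roughly a 1-in-260 draw under that baseline; the fragile part is the baseline itself, not the sample size alone.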

    • marricks, an hour ago
      Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.

      At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.

    • mcherm, 2 hours ago
      > The real take away is that the Robotaxis don't really exist

      More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.

      • UltraSane, 25 minutes ago
        Because it is fraud trying to inflate Tesla stock price.
    • amelius, an hour ago
      But deep learning is also about statistics.

      So if the crash statistics are insufficient, then we cannot trust the deep learning.

    • razingeden, 3 hours ago
      >One crash in this context is going to just completely blow out their statistics.

      One crash in 500,000 miles would merely put them on par with a human driver.

      One crash every 50,000 miles would be more like having my sister behind the wheel.

      I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

      If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents, because its license would have been revoked and suspended after the first two or three accidents in her state, and then it would have been thrown in JAIL as a "scofflaw" if it continued driving.

      • jacquesm, an hour ago
        > One crash in 500,000 miles would merely put them on par with a human driver.

        > One crash every 50,000 miles would be more like having my sister behind the wheel.

        I'm not sure if that leads to the conclusion that you want it to.

      • martin_a, an hour ago
        [flagged]
        • nsjdkdkdk, an hour ago
          [dead]
        • burnished, an hour ago
          They might have forgotten how to share an anecdote and their sister might just be a regular awful driver
  • mikkupikku, 2 hours ago
    All these self driving and "drivers assistance" features like lane keeping exist to satisfy consumer demand for a way to multitask when driving. Tesla's is particularly cancerous, but all of them should be banned. I don't care how good you think your lane keeping in whatever car you have is, you won't need it if you keep your hands on the wheel, eyes on the road, and don't drive when drowsy. Turn it off and stop trying to delegate your responsibility for what your two ton speeding death machine does!
    • nkrisc, 2 hours ago
      I think it’s unfair to group all those features into “things for people who want to multitask while driving”.

      I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.

      These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.

      • andrewaylett, 2 hours ago
        There are two layers, both relating to concentration.

        Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.

        The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.

        • andsoitis, 4 minutes ago
          What about lane assist and follow technology in other cars? Do they also fall in the category of thing that replace attention?
    • plqbfbv, 2 hours ago
      Have you ever driven more than 200km at an average of 80km/h with enough turns on the highway? Perhaps after work, just to see your family once a month?

      Driver fatigue is real, no matter how much coffee you take.

      Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at destination with my Model 3 compared to when I use the regular ICE with bad lane-assist UX.

      EDIT: the fact that people that look at their phones will still look at their phones with lane-keep active, only makes it a little safer for them and everyone else, really.

      • mikkupikku, an hour ago
        If you're on a road trip, pull the fuck over and sleep. Your schedule isn't worth somebody else's life. If that's your commute, get a new apartment or get a new job. Endangering everybody else with drowsy driving isn't an option you should ever find tenable.
        • bluGill, an hour ago
          You are correct - but the reality is many humans do those stupid things.
    • melagonster, 2 hours ago
      But this is why people bought Teslas. Musk promised that the car would be automatic.
      • Spooky23, 2 hours ago
        Don’t be silly. Why would a reasonable person think “Full Self Driving” meant that a car would fully drive itself?
    • fragmede, 2 hours ago
      We made drunk driving super illegal and that still doesn't stop people. I would rather they didn't in the first place, but since they're going to anyway, I'd really rather they have a computer that does it better than they do. FSD will pull over and stop if the driver has passed out.
      • mikkupikku, 2 hours ago
        If we could ensure that only drunk people use driver assistance features, I'd be all for that. The reality is that 90% of the sober public are now driving like chronic drunks because they think their car has assumed the responsibility of watching the road. Ban it ALL.
        • nkrisc, 3 minutes ago
          No, remove their licenses if they can’t drive safely. Let safe and responsible drivers use these safety-enhancing features.

          If someone is driving dangerously despite these safety features, they should not have a license to operate a motor vehicle on public roads.

          These features are still valuable even to safe drivers simply because safe drivers are human and will still make mistakes.

        • michaelsshaw, 2 hours ago
          What I'm hearing here is anecdotal and largely based on feelings. The facts are that automatic emergency braking (which should not activate under normal driving circumstances as it is highly uncomfortable) and lane-keeping are basic safety features that have objectively improved safety on the roads. Everything you've said is merely conjecture.
      • UltraSane, 24 minutes ago
        Modern cars could easily detect drunk like driving and stop or call the cops.
  • ProfessorZoom, 6 minutes ago
    Fred Lambert still writing illogical articles 7+ years later
  • sammyjoe72, an hour ago
    Elon promised self driving cars in 12 months back in 2017? He’s also promising Optimus robots doing surgery on humans in 3 years? Extrapolating…………… Optimus is going to kill some humans and it will all be worth it!
    • epolanski, an hour ago
      Elon is aware that Tesla's insane market valuation would crash 10x if it stays a car company.

      There isn't enough money and most importantly margin in the car industry to warrant such a valuation, so he has to pivot away from cars into the next thing.

      Just to give an example of how risky it is for Tesla to be a car company:

      In 2025 Toyota has had 3.5 times Tesla's revenue, 8 times the net income, and twice the margin.

      And Toyota has a market cap that is 6 times lower than Tesla.

      It would take tesla a gargantuan effort to match Toyota's numbers and margins, and if it matched it...it would be a disaster for Tesla's stock.

      Hell, Tesla makes much less money than Mercedes-Benz, and with a smaller margin.

      Mercedes has 60% more revenue and twice the net income. Yet, Tesla is valued around 40 times Mercedes-Benz.

      Tesla *must* pivot away from cars and make it a side business or sooner or later that stuff is crashing, and it will crash fast and hard.

      Musk understands that, which is why he's focusing on robotaxis and robots. It's the only way to sell Tesla to naive investors.

      • UltraSane, 19 minutes ago
        I really hope I live to see Tesla stock crash to a reasonable valuation.
      • esskay, 36 minutes ago
        The best part of all of this is that, given their history and the state of robotaxis as a whole, they will fail, and Tesla will crash. And it'll be a great day. The hype and obscene overvaluation of them is utterly moronic.

        Look how much longer Waymo has been at this, and how much more experience they have, and they still have multiple issues a week popping up online; and that's with running them in a very small, well-mapped and planned-out area. Musk wants robotaxis globally; that's just not happening, not any time soon, and certainly not by the 10-year limit for him to get his trillion-dollar bonus from Tesla, which is the only reason he's pushing so hard to make it happen.

      • kakacik, an hour ago
        > Elon is aware that Tesla insane market valuation would crash 10x if it stays a car company.

        I see nothing wrong here, correction back to reality.

        I understand why people adored him blindly in the early days, but liking him now, after it's clear what sort of person he is and always will be, is the same as liking Trump. Many people still do it, but it's hardly a defensible position unless one is already invested in his empire.

        • acdha, 33 minutes ago
          It’d be best for everyone outside of the company but he and the board would be buried in lawsuits for the rest of their lives. They have a strong personal interest in avoiding that even if it’s well-deserved based on sober data analysis, so they’re pushing the Hail Mary play trying to jump into a bigger new market which they haven’t already ceded to the competition.
  • viraptor, an hour ago
    > showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025.

    Comparing stats from this many miles to just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it from the monthly numbers varying between 1 and 4.

    This is a bad comparison with not enough data. Like my household average for the number of teeth per person is ~25% higher than world average! (Includes one baby)

    Edit: feel free to actually respond to the claim rather than downvote

    • dchftcs, 34 minutes ago
      I think what you say would have been fair if Elon's and his fanboys' stance were "we need more data" rather than "we will be able to scale self-driving cars very quickly, very soon".
  • fabian2k, 2 hours ago
    As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

    Still damning that the data is so bad even then. Good data wouldn't tell us anything, the bad data likely means the AI is bad unless they were spectacularly unlucky. But since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.

    • fransje26, 2 hours ago
      > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.

      Sorry that does not compute.

      It tells you exactly if the AI is any good, as, despite the fact that there were safety drivers on board, 9 crashes happened. Which implies that more crashes would have happened without safety drivers. Over 500,000 miles, that's pretty bad.

      Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention..

      • fabian2k, 2 hours ago
        I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.

        But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.

    • repelsteeltje, 2 hours ago
      > As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.

      I think we're on to something. You imply that good here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.

      In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.

      In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.

  • artembugara, 3 hours ago
    By the law of large numbers, it's not a significant distance.
  • onetokeoverthe, 2 hours ago
    [dead]
  • lvl155, 2 hours ago
    I am so tired of people defending Tesla. I wrote off Tesla a long time ago, but what gets me are the people defending their tech. We can all go see the products and experience them.

    The tech needs to be at least 100x more error free vs humans. It cannot be on par with human error rate.

    • cbeach, an hour ago
      We tend to defend companies that push the frontiers of self-driving cars, because the technology has the potential to save lives and make life easier and cheaper for everyone.

      As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.

      Tesla's approach has been more risk averse and conservative than others. It has compiled data and trained its models on billions of miles of real world telemetry from its own fleet (all of which are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and only in two areas.

      I defend Tesla's tech, because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors' current offerings.

      I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable. I think the end-to-end neural net with human-like sensors is more sensible than other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.

      Unlike many commenters on this platform I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors. Its ethics are sound.

      • alkonaut, 19 minutes ago
        > but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.

        I expect self-driving cars to be launched unsupervised on public roads only once they are an order of magnitude safer than human drivers, or not launched at all.

        One can pay thousands of people to babysit these cars with their hands on the wheel for many years until that threshold is reached, and if no one is ready to pay for that effort then we'll just drive ourselves until the end of time.

      • noncoml, an hour ago
        > other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.

        Do me a favor: take Musk and get on a plane with just a bunch of cameras instead of sensors like radar, airspeed sensor, altimeter, GPS, ILS, etc.

        No need for those crutches. Do autopiloting like a real man!

        • coryrc, 10 minutes ago
          Human-piloted planes have altimeters and airspeed indicators, the failure of which has caused many accidents.

          Tesla cars have speed sensors as well as GPS. (Altimeter and ILS not being relevant). I agree with Musk's claim they don't need LIDAR because human drivers don't; it's self-evidently true. But I think they _should_ have it because they can then be safer than humans; why settle for our current accident and death rate?

      • tzs, 29 minutes ago
        Note: this is in response to https://news.ycombinator.com/item?id=46823760 which is from the same commenter but got killed before there was time to post any links refuting its claims.

        > The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm. I could provide freeze-frame images of Obama and Hilary Clinton doing similar "salutes" and claim this makes them "far right fascists" but I would never insult the reader's intelligence by doing so.

        For Obama and Clinton you can find freeze frames showing their arm in a similar position, but when you look at the full video it was in the middle of something that does not match a Nazi salute. Here are several examples: https://x.com/ExposingNV/status/1881647306724049116?t=CGKtg0...

        If you had a camera in my kitchen you could find similar freeze frames of me whenever I make a sausage/egg/cheese on an English muffin breakfast sandwich, because the ramekin I use to shape the egg patty is on the top shelf.

        With Musk the full video shows it matches from when his arm starts moving to the end of the gesture. See https://x.com/BartoSitek/status/1882081868423860315?t=8F0hL-...

    • flanked-evergl, an hour ago
      Cite?
  • rich_sasha, 2 hours ago
    As much as I'd love to pile in on Tesla, it's unclear to me the severity of the incidents (I know they are listed) and if human drivers would report such things.

    "Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?

    By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".

    I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.

    For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but they are overall of lower severity, leading to an overall improvement of safety. Judging by TFA alone I can't tell this isn't the case here. I can imagine a robotaxi having a different distribution of frequency and severity of accidents than a human driver.

    • orwin, 2 hours ago
      He compared against estimated statistics for non-reported accidents (typically your example: incidents that involve only one vehicle and only result in scratched paint) to arrive at the 3x. Otherwise the title would have been 9x (which is in line with the 10x a data-analyst blogger arrived at ~3 months ago).

      > roundabouts in cities (in Europe anyway) tend to increase the number of crashes

      Not in France, according to the data. It depends on the speed limit, but overall they decrease accidents by 34%, and by almost 20% when the speed limit is 30 or 50 km/h.

    • serf, an hour ago
      >they tapped a bollard

      If a human had eyes on every angle of their car and they still did that it would represent a lapse in focus or control -- humans don't have the same advantages here.

      With that said: I would be more concerned about what it represents when my sensor-covered auto-car makes an error like that; it would make me presume there was an error in detection -- a big problem.