He really embodies the ethos of "move fast and break things". So let's fire 80% of the staff, see what falls down, and rehire where we made "mistakes". I really think he has an alarmingly high threshold for the number of lives we can lose if it accelerates the pace of progress.
"Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame."
As for "best technology available," the galaxy brains in this thread are tossing out numbers assuming that sensor fusion never fails and that correlated failure modes can be neglected, which is wild. I never thought Elon was a genius -- he's a business guy willing to make big bets on interesting tech, nothing more nothing less -- but if the confidently incorrect engineering claims on display in this thread are any indication, maybe you guys should be calling him a genius after all, because this ain't it.
That's true, but it would obviously be completely unreasonable to stop all flying in the meantime.
Tesla releases all self driving statistics including time since disengagement of self-driving before accident, right?
The numbers include all incidents where self-driving was active within 30 seconds of the crash, because of course they do. The meme that Tesla is allowed to simply bypass reporting on a technicality would be absurd if it weren't so voraciously spread by post-truth luddites. Look, if you want to dunk on Elon, I get it, but he does so many real shitty things that I would ask you to focus on those and not make shit up.
https://www.reuters.com/business/autos-transportation/trump-...
An upper bound could easily be produced by Tesla. All vehicle fatalities identified by police are recorded in the national Fatality Analysis Reporting System (FARS) [1]. Tesla could investigate every recorded Tesla fatality by VIN and cross-reference it with their telemetry to determine whether their systems were available at all, or active at or near the time of the fatal crash. This would cost them between a few thousand and a few million dollars annually, depending on the desired precision, which, compared to the billions they make annually, is a minuscule sum to affirmatively establish safety.
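Roughly, that cross-reference would look something like the sketch below. This is a hedged illustration only: the column names, file layout, and telemetry lookup are invented for the example, not Tesla's or NHTSA's actual schemas (though FARS does record VINs).

```python
# Hypothetical sketch of the FARS-vs-telemetry cross-reference described above.
# Column names ("vin", "crash_time") and the telemetry structure are made up;
# the real FARS export and any internal telemetry schema will differ.
import csv
from datetime import datetime, timedelta

def load_tesla_fatalities(fars_csv_path):
    """Yield FARS rows whose VIN identifies a Tesla ('5YJ' is one Tesla WMI prefix;
    a real pass would match all Tesla WMIs)."""
    with open(fars_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["vin"].upper().startswith("5YJ"):
                yield row

def adas_active_near_crash(telemetry, vin, crash_time, window=timedelta(seconds=30)):
    """True if telemetry shows the driver-assist system engaged within `window` of the crash.
    `telemetry` is assumed to map VIN -> datetime of last engagement."""
    last_active = telemetry.get(vin)
    return last_active is not None and abs(crash_time - last_active) <= window

def upper_bound(fars_csv_path, telemetry):
    """Upper bound: every recorded Tesla fatality where the system was engaged near the crash."""
    count = 0
    for row in load_tesla_fatalities(fars_csv_path):
        crash_time = datetime.fromisoformat(row["crash_time"])
        if adas_active_near_crash(telemetry, row["vin"], crash_time):
            count += 1
    return count
```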
They intentionally choose not to do so, instead deliberately deceiving consumers and the public by conflating the lower bound with an upper bound in all official safety messaging.
Even ignoring their gross incompetence or malice in not doing this simple safety analysis when numerous lives are on the line, the mere fact that they conflate a lower bound with an upper bound in their messaging is scientific malfeasance of the highest order. Their reporting has zero credibility when they make such clear and intentional misrepresentations purely for their own benefit, to the detriment of the public and even their own customers.
[1] https://www.nhtsa.gov/research-data/fatality-analysis-report...
Of course, at first this just substitutes one type of problem for another, because even shitty drivers have to drive many lifetimes of distance before killing anyone, so you wind up with the unfortunately named "shot noise" of small N. But Tesla FSD drives so many miles that the statistics are no longer small-N. If supervised FSD were worse than a human, we should be seeing many more bodies than we do, so supervised FSD is clearly better than a human.
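To put rough numbers on the shot-noise point, an exact Poisson interval shows how wide the uncertainty is at small counts. The mileage and fatality figures below are placeholders, not Tesla's actual numbers.

```python
# Rough sketch of the "shot noise" point: how uncertain is a fatality rate
# estimated from a small count? All numbers here are placeholders.
from scipy.stats import chi2

def poisson_rate_ci(events, exposure_miles, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson rate, per mile."""
    alpha = 1 - conf
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lo / exposure_miles, hi / exposure_miles

# 5 fatalities over 100 million miles (made-up): the interval is several times
# wider than the point estimate, so you can conclude almost nothing.
low, high = poisson_rate_ci(5, 1e8)
print(f"rate per 100M miles: {low * 1e8:.1f} to {high * 1e8:.1f}")

# At 100x the count and exposure, the relative width shrinks by ~10x; that is
# what large-N fleets buy you when comparing against the human baseline.
low, high = poisson_rate_ci(500, 1e10)
print(f"rate per 100M miles: {low * 1e8:.1f} to {high * 1e8:.1f}")
```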
Of course, that number needs to be adjusted for critical interventions before we can say anything about unsupervised FSD, and in an ideal universe I wish Tesla was forced to disclose those figures. Unfortunately, we live in a universe where luddites are firmly in control of this conversation and I cannot deny that a forced disclosure would be heavily abused in a way that costs lives. Still, regulators get to see the numbers before approving unsupervised rollouts, and as a compromise this makes neither myself nor the luddites happy but I suppose it will do.
To your point about luddites, that’s why I think it’s erroneous to use “as good as a human” as the metric. It will need to be much better than a human before those people will reluctantly trust it enough to hand over control.
What I literally said was Tesla doesn't release all of the self-driving statistics, like time to disengagement. They are noticeably more cagey about releasing that kind of info claiming it as "trade secret". Did you see the data sets you pointed to before pointing to them? Full of Tesla crashes, and full of redactions.
I don't need to dunk on Elon. He dunks on himself. Some people just worship him too hard to realize it.
While it was absolutely vital to getting the costs of the original Tesla Roadster and SpaceX launches way down… it can only work when you are able to accept "no, stop" as an answer.
Rockets explode when you get them wrong, you can't miss it.
Cars crashing more than other models? That's statistics, which can be massaged.
Government work? There's always someone complaining no matter what you do, very easy to convince yourself that all criticism is unimportant, no matter how bad it gets. (And it gets much worse than the worst we've actually seen from DOGE and Trump — I don't actually think they'll get to be as bad as the Irish Potato Famine, but that is an example of leaders refusing to accept what was going on).
Which seems like an antidote to the current culture of "don't build anything anywhere."
Now the errors are not all independent so it's not as good as that, but many classes of errors are independent (e.g. two pilots getting a heart attack versus just one) so you can get pretty close to that p^n. Musk did not understand this. He's just not as smart as he's made out to be.
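For anyone who wants the arithmetic, one standard way to write it is a beta-factor style common-cause model (symbols here are generic, not tied to aviation or any particular system):

```latex
% n independent channels, each failing with probability p:
P_{\text{all fail}} = p^{n}
% With a common-cause fraction \beta (the share of failures that hit every channel at once),
% for two channels:
P_{\text{both fail}} \approx \beta p + (1 - \beta)\,p^{2}
% Even a small \beta dominates once p is small: p = 10^{-3}, \beta = 0.05
% gives roughly 5\times 10^{-5}, not the 10^{-6} that independence would promise.
```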
You have a main and a backup pilot, but either one must be 100% capable of doing it on their own. The backup is silently double checking, but their assignments are more about ensuring that the copilot doesn't just check out because they're human. If the copilot ever has to say "don't do that it's going to kill us all" it's a crisis.
Lidar is a good backup, but the car must be able to work without it. You can't drive with just lidar; it's like driving by Braille. Lidar can't even read a stop light. If the car cannot handle it with just the visuals, it should not be allowed on the road.
I concur that it is terrifying that he was allowed to go without the backup that stops it from killing people. Human co-drivers are not a good enough backup.
But he's also not wrong that the visual system must be practically perfect -- if it's possible at all. Which it surely isn't yet.
Trusting vision systems for 30 seconds or even 30 minutes over the lifetime of a car is very different than trusting them for 30,000 hours. So for edge cases sure, add a “this is an emergency drive to a hospital without LiDAR” mode but you don’t need to handle normal driving without them.
* train tracks as mentioned by another comment
* getting out of the way of emergency vehicles or following instructions from emergency workers
I don’t think this detracts from the overall point—if the uptime is very good then the times when LiDAR doesn’t work and the car needs to move might practically never happen. Or the car might be able to move forward to a safe position in some limp-along mode.
WRT “this is an emergency drive to a hospital without LiDAR,” I think that would be pretty bad to include. What exactly qualifies as an emergency (this will be abused by some users)? And anyway, in most cases it is better to have an ambulance for an emergency. Finally, an emergency on my part doesn’t entitle me to endanger society. Rather, the car should keep track of which types of behavior it can perform without all of its sensors. If the car can limp along without LiDAR, then that’s something it can do (with the caveat that some roads are not safe to drive on far under the speed limit).
At minimum, cars need to handle being on a curve when LiDAR or vision etc. cuts off. Stopping on a freeway or railroad is a real risk even if people can get out, but now you’re doing risk-mitigation calculations, not just trying to handle normal driving. “What if someone is asleep in a level 5 car?” is a question worth considering, but we’re a long way from self-driving without any form of human backup.
The hospital thing is for the very real possibility of being outside of cellphone range, and represents the apex of gracefully shutting down the ride vs refusing to travel while in a less safe condition.
A self-driving taxi with a tire-pressure-sensor error vs. total brake failure are simply wildly different situations, and each should get a different response. Further, designing for a possibly sleeping but valid human driver is very different from designing for a likely empty vehicle.
If you aren’t mitigating it with the appropriate controls, you aren’t managing risk. My point is just passing the buck to the human is not an appropriate control in many critical scenarios.
No. A brake failure doesn’t guarantee a specific negative outcome; it dramatically raises the probability of various negative consequences.
My point is risk mitigation is about lowering risk but there may be no reasonably safe options. A car stopped on a freeway is still a high risk situation, but it beats traveling at 70 MPH without working cameras.
I agree there may be cases where there are no reasonably safe options. That means your engineered system (especially in a public-facing product) is not ready for production because you haven't met a reasonable risk threshold.
> I agree there may be cases where there are no reasonably safe options. That means your engineered system (especially in a public-facing product) is not ready for production because you haven't met a reasonable risk threshold.
Individual failures should never result in such scenarios, but ditching in the ocean may be the best option after a major fuel leak, loss of all engine power, etc.
I don’t think any FMEA is going to list “ditch into the ocean” as an acceptable mitigation. I.e., it will never be a way to buy risk down to an acceptable level.
>it isn’t an analysis of mitigation strategies.
Take a look at NASA's FMEA guidebook. It clearly lists identifying mitigations as part of the FMEA process. You’ll see similar in other organizations, though possibly under different names (“control” instead of “mitigation”).
https://standards.nasa.gov/sites/default/files/standards/GSF...
100%, AI systems can be safer than human-in-the-loop systems, avoiding suicide by pilot etc., but conversely that means the AI must also deal with extreme edge cases. It’s a subtle but critical nuance.
An FMEA (and other safety-critical/quality products) go through an approval process. So if "ditch into the ocean" is not on the FMEA, it means they should have other mitigations/controls that bought the risk down to an acceptable level. They can't/shouldn't just push forward with a risk that exceeds their acceptable tolerance. If implemented correctly, the FMEA is complete insomuch as it ensured each hazard was brought to an acceptable risk level. And certainly, a safety officer isn't going to say the system doesn't need further controls because they put "ditch into the ocean" in the manuals. If that's the rationale, it begs the question "Why wasn't the risk mitigated in the FMEA and hazard analysis?" Usually it's because they're trying to move fast due to cost/schedule pressure, not because they managed the risk. There are edge cases, but even something like a double bird strike can be considered an acceptable risk because the probability is so low. Not impossible, but low enough. That’s what “ditch in the ocean” operations are for.
I agree that software system can improve safety but we shouldn't assume so without the relevant rigor that includes formal risk mitigation. Software tends to elicit interfacing faults; the implication being as the number of interfaces increases the potential number of fault modes can increase geometrically. This means it is much harder to test/mitigate software faults, especially when implementing a black-box AI model. My hunch is that many of those trying to implement AI in safety-critical applications are not rigorously mitigating risk like more mature domains. Because, you know, move fast and break things.
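For reference, the basic FMEA bookkeeping being described looks roughly like the toy sketch below. The 1-10 scales, the example failure modes, and the acceptance threshold are illustrative, not NASA's or any OEM's actual values.

```python
# Toy FMEA-style bookkeeping: each failure mode gets severity, occurrence, and
# detection scores, and a hazard is only "accepted" when its residual risk is
# under the program's threshold. Scales and threshold here are illustrative.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (remote) .. 10 (frequent)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    def rpn(self) -> int:
        # Risk Priority Number, the classic FMEA ranking metric
        return self.severity * self.occurrence * self.detection

ACCEPTABLE_RPN = 100  # program-specific threshold (made up for this example)

modes = [
    FailureMode("camera blinded by low sun", severity=9, occurrence=6, detection=4),
    FailureMode("radar false positive -> phantom braking", severity=7, occurrence=5, detection=3),
]

for m in modes:
    status = "accepted" if m.rpn() <= ACCEPTABLE_RPN else "needs further mitigation/controls"
    print(f"{m.name}: RPN={m.rpn()} -> {status}")
```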
As a comparison for planes, there are now "this is an emergency, please land yourself" buttons for smaller aircraft:
* https://www.garmin.com/en-US/blog/aviation/five-ways-garmin-...
They do if they're crossing railroad tracks.
That’s why I said 30 seconds / 30 minutes, not 3 seconds. The idea is to go somewhere safe and pull off the road, not just slam on the brakes and hope you're not on a curve.
This is not how flying works in a multi-crew environment. It’s a common misconception about the dynamic.
Both pilots have active roles. Pilots also generally take turns who is manipulating the flight controls (“flying the airplane”) each flight.
Don't think that's the right analogy. Realistically you'd aim to combine them meaningfully. A bit like two eyes gives you depth perception.
You assume 1+1 is less than two, when really you'd aim for >2
If you have two sensors, one says everything's fine but the other says you're about to crash, which one do you trust? What if the one that says you're about to crash is feeding you bad data? And what if the resulting course correction leads to a different failure?
I'd hope that we've learned these lessons from the 737 Max crashes. In both cases, one sensor thought that the plane was at imminent risk of stalling, and so it forced the nose of the plane down, thereby leading to an entirely different failure mode.
Now, of course, having two sensors is better than just having the one faulty sensor. But it's worth emphasizing that not all sensor failures are created equal. And of course, it's important to monitor your monitoring.
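A toy illustration of "monitor your monitoring": cross-check the redundant sensors and degrade gracefully on disagreement rather than acting on either reading. The tolerance and fallback behavior here are made up, not any real system's values.

```python
# Toy cross-check: when two redundant sensors disagree beyond a tolerance,
# don't act on either one; flag the fault and fall back to a conservative mode.
def fuse_or_degrade(sensor_a: float, sensor_b: float, tolerance: float = 2.0):
    if abs(sensor_a - sensor_b) <= tolerance:
        return ("ok", (sensor_a + sensor_b) / 2)  # sensors agree: use the average
    return ("fault", None)                        # disagreement: inhibit automatic action

state, value = fuse_or_degrade(4.1, 24.9)
if state == "fault":
    print("sensor disagreement: automatic intervention inhibited, human alerted")
```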
Neither. You stop the car and tow it to the nearest repair facility. (Or have a human driver take over until the fault is repaired.)
You don’t gouge out your ears because you hear something walking around at night that doesn’t appear to be accurate. As a human, your executive function makes judgements based on context and what you know. Your eyesight is degraded in the dark, so your brain pays more attention to unexpected sound.
The argument for lidar, sonar or radar isn’t that cameras are “bad”, it’s that they perform very well in circumstances where visual input may not. As an engineer, you have an ethical obligation to consider the use case of the product.
It’s not at all like the 737 MAX issue; many companies have been able to successfully implement these features. I have an almost decade-old Honda that uses camera and radar sensors to manage adaptive cruise and lane-keeping features flawlessly.
In the case of Tesla and their dear leader, they tend to make dogmatic engineering decisions based on personal priorities. They then spackle in legal and astroturf marketing bullshit to dodge accountability. The folly of relying on cameras or putting your headlights inside of a narrow cavity in the car body (cybertruck) is pretty obvious if you live in a place that has winter weather and road salt.
I agree that optimizing for vehicle cost and shipping shoddy software at the expense of human lives is the wrong tradeoff, and having different sensors that provide more coverage of situations is generally preferable if you can afford it.
But suppose you have a radar sensor that's faulty, and so it periodically thinks that there's something right in front of it, and so it causes the car to slam on its brakes. That's likely going to cause an accident if you're traveling at highway speeds. Does that mean that we shouldn't use radar in any circumstances? Of course not. But sensors are not infallible, and we need to keep that in mind when designing these systems.
If you’re Waymo, you’ve built in a higher standard of sensor and diagnostics, at the expense of the aesthetic.
That’s always been the issue with Tesla… they push 99% solutions for 99.99% problems.
That's fair. I was likely remembering that there were two sensors, since there's one for the pilot and one for the copilot, but not remembering that only one of them was in use.
The other lessons worth noting are that 1) you need to give the human some indicator that something's wrong, and 2) you need to give the human some way of overriding the system.
> And unlike planes, you can safely stop a car in almost all scenarios.
Agreed, mostly. We do have to be careful about how the vehicle disengages and comes to a safe stop, or else we risk repeating the incident where Cruise hit a pedestrian, then pulled to the side of the road while dragging the pedestrian under the vehicle.
https://www.wired.com/story/lidar-cheap-make-self-driving-re...
Maybe I'm just old, but development roadmaps are always vapor, until they aren't, which happens sometimes. Always been that way.
To be clear, I'd much rather my vehicle have superhuman multisensory awareness than superhuman visual awareness alone. And I think it's fair for regulators to involve themselves with vehicle engineering, as all our safety depends on it. I've also watched the AI day presentations about their vehicle training system, and read their disclaimer text for enabling FSD, and it seems like they're doing a lot to advance the state of the art.
And not just a little bit - it’s way overpriced.
There are a bunch of possible explanations for this. One is that investors believe full self driving will come out really soon and work really well.
I own a 1978 Suzuki Carry and a Miles Electric ZX40ST.
I have watched a fair amount of https://www.youtube.com/@MunroLive with interest about the implementations from all manufacturers. Sandy is in my home state of Michigan, birthplace of the auto industry, in which I've been multi-generationally involved, and he knows his stuff. He has criticisms for all manufacturers, but over the years Tesla seem to have listened more than most, to the point of Elon speaking for hours with Sandy on podcasts about technical aspects of the vehicles and production. I also appreciate that Tesla seem to make more of their cars in the US than any other manufacturer. Honda seems to be the only one comparable. I think the future's electric. I don't really care who makes it, but I'd like it to be well engineered, and made locally.
I'd personally probably think traditional ICE manufacturers and oil industries are overvalued, but I'm probably wrong as there's clearly lots of business for those industries which doesn't seem to be going anywhere.
In medicine, the ethics committee would have shot the project down early.
Edit: Comparing "ev vehicle" with "autonomous vehicle"
It seems like Tesla's been pretty clear with drivers who've opted in to FSD that they're helping to test and develop a system which isn't perfect.
Just off the top of the head, look up Henrietta Lacks for a notable example of how medicine has handled informed consent.
Strawman. No matter how many contracts and disclaimers they get in their favor, the "Full self driving" system is causing accidents (and deaths).
Turn signals, rearview cameras, safety belts: all exist and are regulated to prevent the manufacturer from dumping responsibility on the drivers.
I emphasize "Full self driving" Tesla brand because the name was declared unlawful in California. https://www.govtech.com/policy/new-california-law-bans-tesla...
Well yeah, and so is every other driver on the road. That is not the relevant metric. The real question is whether or not it is safer than the average driver. Or safer than great aunt Marge who doesn't have the best eyesight or hearing, but somehow still has a drivers license.
It's more like, a pilot has access to multiple sensors. Which they do.
more likely
he is a ruthless ** who doesn't care about people dying, and slightly increasing the profit margin is worth more to him than some people dying
regulators/law allowing self-driving companies to wriggle out of responsibility didn't help either
lidar was interesting for him as long as it seemed like Tesla could maybe dominate the self-driving market through technological excellence; the moment it was clear that wouldn't work, he abandoned technological excellence in favor of micro-optimizing profit at the cost of real safety
which shouldn't be surprising for anyone, I mean he also micro-optimized workplace safety at SpaceX away, not only until it killed someone but even after it did (stuff like this is why there were multiple investigations against his companies until Trump magicked them away)
the thing is, he has intelligent people informing him about stuff, including how removing lidar will, statistically speaking, kill people, so it's not that he doesn't know, it's that he doesn't care
The same argument likely applies to you (assuming you are in a wealthy nation), so you are likely just as ruthless:
We could all slightly decrease our disposable incomes (spent on shit we don't need) and increase the life expectancy or QoL for someone in a poor country.
Me too: I spent thousands on a holiday (profit for my soul) and I didn't give the money to a worthy charity. I'm no fan of Musk, but I think there's better arguments for dumping on him if you really need to do that.
I also never said there aren't many other things bad about Musk.
And comparing decisions made out of greed, which straightforwardly risk the lives of many while gaining you relatively little (compared with what you already gain from the same source), with "we as a society could, if we were a hive mind, live more frugally and help another society elsewhere" is just a very pointless thing.
But even if they were the same, what does that change? Just because you also do something bad doesn't mean it's less bad, or more tolerable, or should be tolerated.
> We could all slightly decrease our disposable incomes (spent on shit we don't need) and increase the life expectancy or QoL for someone in a poor country.
But we can't, or more specifically, any individual can't; they can at best try to influence things by voting with their money and their votes. But that is a completely different context.
And that doesn't mean you shouldn't spend your money with care and donate money if you can afford it.
Someone said in an interview “Elon desperately wants the world saved. But only if by him.”
There is no planet B.
I believe that's also Lex Luthor's motivation in All Star (?) Superman.
Just see him talking about things at Neuralink. Musk wouldn't exist if it weren't for the people working for him. He's a clown who made it to the top in a very dubious way.
In reality getting rich has more to do with opportunity, connections, and luck - but accepting that means you've got to accept that the American Dream has always been a lie. It's much easier to convince yourself that people like Elon are geniuses.
I've decided Musk's core talent is creating and running an engineering team. He's done it many times now: Tesla, SpaceX, PayPal, even Twitter.
It's interesting, because I suspect he isn't a particularly good engineer himself, although the only evidence I have for that is that he tried to convert PayPal from Linux to Windows. His addiction to AI getting results quickly isn't a good look either. To make the product work in the long term, the technique has to get you 100% of the way there, not the 70% we see in Tesla and now DOGE. He isn't particularly good at running businesses either, as both Twitter and his solar roofs show.
But that doesn't matter. He's assembled lots of engineering teams now, and he just needs a few of them to work to make him rich. Long ago it was people who could build train lines faster and cheaper than anyone else that drove the economy, then it was oil fields, then I dunno - maybe assembly lines powered by humans. But now wealth creation is driven by teams of very high level engineers duking it out, whether they be developing 5G, car assembly lines or rockets. Build the best team and you win. Musk has won several times now, in very different fields.
The others don't claim the extremes of power and genius, based to a large extent on what their teams do. They also build good teams - look at DOGE, for example.
Put another way - would giving humans superhuman vision significantly reduce the accident rate?
The issue here is that the vision based system failed to even match human capabilities, which is a different issue from whether it can be better than humans by using some different vision tech.
Yes? Incredibly?
If people had 360° instant 3D awareness of all objects, that would avoid so many accidents. No more blind spots, no more missing objects because you were looking in one spot instead of another. No more missing dark objects at night.
It would be a gigantic improvement in the rate of accidents.
They don’t leave enough space/time to react even if they did have enough awareness
If you assume a constant rate of attention, but then massively increase what people are aware of during that attention, that's a massive increase in safety.
Safety is affected by lots of factors. Increasing any of them increases safety -- you don't need to increase all of them to get improvements.
Many people are in accidents that are entirely not their fault, e.g. being rear-ended. You can't do anything about those who drive unsafely, distracted, under the influence, etc.
Assuming you have a "good" driver, there are still plenty of times they might get into accidents for reasons that have nothing to do with awareness. For example, road conditions, the behavior of other drivers, and just normal, honest mistakes.
At least for me, my own senses aren't really the limiting factor in my safety. My eyes are good enough. Modern cars have the equivalent of bowling-lane bumpers with auto-centering, pre-collision warning, and blind-spot warning/indicators.
Your eyes aren't good enough, compared to if you had 360° LIDAR awareness, which is the comparison here.
You're listing all these other causes of accidents, which no one disputes. There's still a whole range of accidents caused by limitations in our spatial awareness because we can only ever be looking in one direction at a time.
You're talking about "normal, honest mistakes". That's what I'm talking about. There would be less of those if we had magic 360° LIDAR in our bodies. A lot less. Especially at night, but during the day too.
This isn't about blaming anybody. It's just the simple scientific fact that human senses are limited, and self-driving cars shouldn't limit themselves to human senses. Human senses aren't good enough for preventing all preventable accidents, no matter how "honest" drivers are.
> Put another way - would giving humans superhuman vision significantly reduce the accident rate?
I'm saying that human vision is not the cause of most accidents. Most accidents are caused by distraction, incorrect decisions/errors of judgement, road conditions, etc.
You're using existing human vision as your baseline, which is not the right comparison here. The question is a baseline of superhuman abilities. So there's a ton of accidents we wouldn't get into if we had those superhuman abilities.
Basically everything involving a collision with an object we weren't aware of in time because it wasn't in our limited field of view and wasn't well-lit, that could have been avoided if we had been. Which is a decent proportion of accidents.
But the bigger, original point is that LIDAR is better than cameras, and better for avoiding accidents.
Of these, only dark objects at night is related to lidar vs vision. What percent of accidents is attributable to not seeing dark objects at night?
Limited field of view is solved by having a bunch of cameras, that is what is orthogonal to LiDAR vs vision.
I’m open to an argument here, but you have not provided any compelling rebuttal. You apparently feel like you have, though.
You use some made-up idea like "superhuman vision" and then you're asserting that I haven't provided a "compelling rebuttal"?
You're not engaging in good-faith conversation here. If you want to understand the clear, obvious benefits of LIDAR over cameras, it's a Google search away. And it's not just about at night -- it's about weather, it's about greater accuracy, especially at greater distances, it's about more accurate velocity, and so forth. It's not rocket science to understand that earlier, more confident detection of a car or deer or child suddenly moving into the street is going to reduce collisions.
Obviously LiDAR improves the maximum achievable capabilities of a self driving system. I am not disputing that. I don’t think anyone would or could dispute that, it’s just trivially true.
The point I am trying to make is more nuanced than that.
Basically I’m thinking about whether this decision to ditch lidar is actually stupid, or if there could be a sensible explanation that is plausibly true. I am proposing what I believe is a plausibly true sensible explanation for the decision.
If you think about whether accidents would be reduced by having lidar, the answer is again obviously yes. But if you think about this as a business decision, it is not so simple. The question is not whether accidents would be reduced by having lidar. The question is whether accidents would be reduced enough to justify the added cost and complexity to the system.
So then, how do we figure that out? Well, we think about the causes of human car accidents ranked by percent of car accidents caused. What would be the items in the top 95% let’s say? Then, which of those items can be addressed exclusively by a lidar enabled system.
To isolate can-only-be-fixed-by-lidar problems, I imagine a thought experiment where we have a human with lidar for eyeballs. Which of the top k causes of accidents would be fixed by giving humans lidar eyeballs. Well, maybe not that many, actually. That is my point.
This could be argued to be immoral due to I guess making decisions about death based on business considerations. But it’s probably fairly easy to convince oneself as a Tesla executive that the car (M3 anyway) being affordable is critical to getting the safer-than-human driving tech adopted, so making the car more expensive actually fails to save more people than the lidar would save, etc.
Looking in one spot instead of another is included in what I’m calling “attention”. Of course paying attention to everything all the time would be and is a huge improvement. That is orthogonal to the type of vision tech being used. All approaches used in self driving systems today look everywhere all the time.
So I'm assuming that the "superhuman vision" you described meant specifically if people had LIDAR, since that was the subject at hand.
LIDAR is superior to cameras because it much more accurately detects objects. It's an advantage all the time, and an especially huge advantage at night.
So LIDAR isn't orthogonal to anything. It's the entire point. If people had LIDAR, the accident rate would be significantly reduced.
You're arguing that LIDAR won't help cars, because by analogy it wouldn't help people if it was a native biological sense. But that's wrong.
Never driven before?
Maybe if your standard is "scheduled commercial passenger flights" level of safety.
>Otherwise why not just drive the car yourself?
There's plenty of reasons why humans can be more dangerous outside of vision quality. For instance, being distracted, poor reaction times, or not being able to monitor all angles simultaneously.
As a pitch for self driving, it's going to be a long time before I trust a computer to do the above better than I do. At the very least adding sensors I don't have access to will give me assurance the car won't drive into a wall with a road painted on it. I don't know how on earth you'd market self-driving as competent without being absurdly conservative about what functionality you claim to be able to deliver. Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing driving conditions.
Perhaps vision is sufficient, but it seems hopelessly optimistic to expect to be able to pitch it without some core improvement over human driving (aside from my ability to take the hand off the wheel while driving).
Edit: hilariously, there's already a video demonstrating this exact scenario: https://youtu.be/IQJL3htsDyQ
This is as relevant as self driving cars not being able to detect anti-tank mines. If you want to intentionally cause harm, there are far easier ways than erecting a wall in the middle of a roadway and then painting a mural on it. If you're worried about it accidentally occurring, the fact that there's no incidents suggests it's at least unlikely enough to not worry about.
>Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing driving conditions.
Sounds like this is less about the tech used (ie. cameras vs lidar) and how "smooth" the car appears to behave.
Driving requirements in many states demand 20/40 vision in at least one eye [1]. 20/20 visual acuity corresponds to an angular resolution of approximately 1 arc-minute [2]; thus 20/40 vision corresponds to approximately 2 arc-minutes, or 30 pixels per degree of field of view. Legally blind is usually cited as approximately 20/200, which is approximately 10 arc-minutes, or 6 pixels per degree of field of view.
Tesla Vision HW3 contains 3 adjacent forward cameras at different focal lengths, and Tesla Vision HW4 contains 2 adjacent forward cameras at different focal lengths; as such, those cameras cannot be used in conjunction to establish binocular vision [3]. We should therefore treat each camera as a zero-redundancy single sensor, i.e. a "single-eye" case.
We observe that Tesla Vision HW3 has a 35-degree camera rated for 250 m, a 50-degree camera for 150 m, and a 120-degree camera for 60 m [4]. Tesla Vision HW4 has a 50-degree camera for 150 m and a 120-degree camera for 60 m [4]. A speed of 100 km/h corresponds to ~28 m/s, so those ranges correspond to lead times of ~10 s, ~6 s, and ~2 s. Standard safe-driving practice dictates a 2-3 second following distance, so most maneuvers would be dictated by the 60 m camera and predictive maneuvers by the 150 m camera.
We observe that the HW3 forward cameras have a horizontal resolution of 1280 pixels, resulting in an angular resolution of ~25.6 pixels per degree for the 150 m camera and ~11 pixels per degree for the 60 m camera, the camera used for the majority of actions. Both values are below the minimum vision requirements for driving in most states, with the wide-angle view within a factor of two of being considered legally blind.
We observe that the HW4 forward cameras have a horizontal resolution of 2896 pixels, resulting in an angular resolution of ~58 pixels per degree for the 150 m camera and ~24 pixels per degree for the 60 m camera. The 60 m camera, which should be the primary camera for most maneuvers, fails to meet the minimum vision requirements in most states.
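Spelling out the arithmetic above (resolutions and fields of view as quoted in this comment; the threshold follows the 20/40 conversion):

```python
# Pixels-per-degree arithmetic from the comment above. FOVs and horizontal resolutions
# are the figures quoted there; 20/20 ~ 60 px/deg, so 20/40 ~ 30 px/deg and 20/200 ~ 6 px/deg.
PX_PER_DEG_20_40 = 30  # typical state minimum (one eye)

cameras = {
    "HW3 50-deg (150 m)": (1280, 50),
    "HW3 120-deg (60 m)": (1280, 120),
    "HW4 50-deg (150 m)": (2896, 50),
    "HW4 120-deg (60 m)": (2896, 120),
}

for name, (h_pixels, fov_deg) in cameras.items():
    ppd = h_pixels / fov_deg
    verdict = "meets 20/40" if ppd >= PX_PER_DEG_20_40 else "below the 20/40 minimum"
    print(f"{name}: {ppd:.1f} px/deg ({verdict})")
```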
It is important to note that there are literally hundreds of thousands, if not millions, of HW3 vehicles on the road using sensors that fail to meet minimum vision requirements. Tesla determined that a product that fails to meet minimum vision requirements is fit for use and sold it for their own enrichment. This is the same company that convinced customers to purchase these systems by promising, in 2016 while delivering HW2: "We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." [5] This despite the systems being delivered clearly not reaching even minimum vision requirements and, in fact, being nearly legally blind.
[1] https://eyewiki.org/Driving_Restrictions_per_State
[2] https://en.wikipedia.org/wiki/Visual_acuity
[3] https://en.wikipedia.org/wiki/Tesla_Autopilot_hardware
[4] https://www.blogordie.com/2023/09/hw4-tesla-new-self-driving...
[5] https://web.archive.org/web/20240730071548/https://tesla.com...
This isn't as much of a slam dunk as you think it is. The fallacy is assuming that visual acuity requirements are chosen because they're required for safe maneuvering, when in reality they're likely chosen for other tasks, like reading signs. A Tesla doesn't have to do those things, so it can potentially get away with lower visual acuity. Moreover, if you look at camera feeds from HW3/HW4 hardware, you'll see they're totally serviceable for discerning cars. It definitely doesn't feel like I'm driving "legally blind" or whatever.
There is no affordable vision system that's as good as human vision in key situations. LiDAR+vision is the only way to actually get superhuman vision. The issue isn't the choice of vision system, it's to choose vision itself, and besides the lesson from the human sensory system is to have sensors that go well with your processing system, which again would mean LiDAR.
If humans could integrate a LiDAR-like system where we could be warned of approaching objects from any angle and accurately gauge the speed and distance of multiple objects simultaneously, we would surely be better drivers.
That was Karpathy's decision [1] and, yes, I also have that perception of him.
I know this is not going to be well received because he's one of HN's pet prodigies but, objectively, it was him.
1: https://www.forbes.com/sites/bradtempleton/2022/10/31/former...
(one of many)
I read that as Musk wanted this done and asked Karpathy to find a way.
Seems to me that the guy is pretty convinced, but sure, your mind belongs to you and you can make of it whatever you want; maybe Musk was behind him with a gun, who knows.
Yeah, he was arguably wrong about one thing so his building both the world's leading EV company and the world's leading private rocket company was fake.
As they say, the proof of the pudding is in the eating. Between Tesla, SpaceX, and arguably now xAI, the probability of Musk's genius being a fluke or fraud is close to zero.
We already know he's an objective fraud because he literally cheats at video games and was caught cheating. As in, he hired people to play for him and then pretended the accomplishments were his own. Which maps very well to literally everything he's done.
But, he is frequently wrong, it just does not matter. He was occasionally right, like with Tesla back then.
But why make the jump to "fraud/charlatan"? Every system needs to be finite. We can't invest in every bell and whistle. Furthermore, he's upfront about the decision. Fraud requires deception.
So, I was deceived. I didn't buy the car because of the deception, but I did buy FSD because of it.
Also, FSD disengaging when it gets sensor confusion should be considered criminal fraud. FSD should never disengage without a driver action.
So if your goal is to pump out $20k self driving cars, then you need cameras to be good enough. So the logic becomes "If humans can do it, so can cameras, otherwise we have no product, no promise."
More importantly, expectations are higher when an automated system is driving the car. It is not sufficient if, in aggregate, self-driving cars have fewer accidents. If you lose a loved one in an accident where the accident could have been easily avoided if a human was driving, then you're not going to be mollified to hear that in aggregate, fewer people are being killed by self-driving cars! You'd be outraged to hear such a justification! The expectation therefore is that in each individual injury accident a human clearly could not have handled the situation any better. Self-driving cars have to be significantly better than humans to be accepted by society, and that means it has to have better-than-human levels of vision (which lidars provide).
Computer vision has turned out to be a very tough nut to crack, and that should have been visible to anyone doing serious work in the field for at least the last 15 years.
In any case, any safety-critical system should be built with redundancy in mind, with several subsystems working independently.
Using more and better sensors is only a problem when building a cost-sensitive system, not a safety-critical one, and very often those sensors are expensive because they are niche, which can be mitigated with mass scale.
To be fair, the ghost braking on TACC has been reduced. But I tend to control my wipers with voice.
[0] https://x.com/SethAbramson/status/1892710698683142638
Instead of critiquing the article for its liberal use of words like "overwhelmingly", "unique", and "100% of Teslas" on n=5 cars, with limited data and a very questionable analysis of the Snohomish accident, we discuss how Musk is a fraud.
Why? Because it provided information that people had to infer, and that you couldn't easily get from camera. So absent the human inference engine that allowed human drivers to work, we would have to rely on highly-precise measurement instruments like LiDAR instead.
Musk's huge error was in thinking "Well humans have eyes and those are kind of like cameras, therefore all you need are cameras to drive"
But no! Eyes are not cameras, they are extensions of our brains. And we use more than our eyes to navigate roads, in fact there's a huge social aspect to driving. It's not just an engineering challenge but a social one. So from the get-go he's solving the wrong problem.
I knew this guy was full of it when he started talking about driverless cars being 5 years out in 2015. Just utter nonsense to anyone who was actually in that field, especially if he thought he could do it without LiDAR. He called his system "autopilot" which was deceptive, but I was completely off him when he released "full self driving - beta" onto public streets. Reckless insanity. What made me believe he is criminally insane is this particular timeline (these are headlines, you can search them if you want to read the stories):
2016 - Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says
2016 - Tesla working on Autopilot radar changes after crash
2017 - NTSB Issues Final Report and Comments on Fatal Tesla Autopilot Crash
2019 - Tesla didn’t fix an Autopilot problem for three years, and now another person is dead
2021 - Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars
2021 - Tesla announces transition to ‘Tesla Vision’ without radar, warns of limitations at first
2022 - Former Head Of Tesla AI Explains Why They’ve Removed Sensors; Others Differ
2022 - Tesla Dropping Radar Was a Mistake, Here is Why
2023 - Tesla reportedly saw an uptick in crashes and mistakes after Elon Musk removed radar from its cars
2023 - Elon Musk Overruled Tesla Engineers Who Said Removing Radar Would Be Problematic: Report
2023 - How Elon Musk knocked Tesla’s ‘Full Self-Driving’ off course
2023 - The final 11 seconds of a fatal Tesla Autopilot crash
Now I get to add TFA to the chronicle. The man and his cars are a menace to society. Tesla would be so much further along on driverless cars without Musk.
What made cars successful at the DUC was omnidirectional 3D distances to everything around you provided by the Velodyne LiDAR. So if we have a way to get that kind of data without a LiDAR, that would be fine.
Moreover, what a LIDAR gives you is an honest-to-god measurement. This whole idea of using deep learning to get range data from camera data is not measuring anything, it's making an inference. Which is why it's fooled by a looney tunes wall.
And like I said, we lack any inference engine that is better than the human brain. So betting your entire strategy on an inference engine that doesn't exist, bucking industry practice and the consensus of the engineering community, gets the following results:
- The NHTSA’s self-driving crash data reveals that Tesla’s self-driving technology is, by far, the most dangerous for motorcyclists, with five fatal crashes that we know of.
- This issue is unique to Tesla. Other self-driving manufacturers have logged zero motorcycle fatalities in the same time frame.
- The crashes are overwhelmingly Teslas rear-ending motorcyclists.
If this problem is unique to Tesla, and Tesla is unique in relying solely on optical sensory input, then we can conclude that relying solely on optical sensory input is bad. Nothing to do with the fact that Musk is the champion.
The error on range estimates from stereo “disparity” goes up as range squared, for fundamental-physics reasons. Accurate calculation of disparity relies on well-calibrated optics (clean, physically rigid), and it’s easy to disrupt that.
Stereo is also target-sensitive. Stereo ranging is enhanced by certain target features, like large, smooth surfaces with texture (say, stucco walls, or car grilles). It is made more difficult by smaller target surfaces with weird curvature.
I’m sure that stereo that did well on auto-sized surfaces would do much worse for pedestrian or motorcycle (size and shape) surfaces.
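For reference, the range-squared growth falls straight out of the stereo geometry (generic symbols: baseline B, focal length f, disparity d, disparity noise sigma_d):

```latex
% Range from stereo disparity:
R = \frac{fB}{d}
% Propagating a fixed disparity error \sigma_d (matching noise, calibration drift):
\sigma_R \approx \left|\frac{\partial R}{\partial d}\right|\sigma_d
        = \frac{fB}{d^{2}}\,\sigma_d
        = \frac{R^{2}}{fB}\,\sigma_d
% i.e. for a given disparity error, range error grows as R^2.
```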
Besides, rangefinding optics weren't exactly amazing in the 1940s either. It's why the introduction of radar made such a massive difference in naval warfare.
I'll reiterate:
2019 - Tesla didn’t fix an Autopilot problem for three years, and now another person is dead
That's not a hypothetical. How do you figure this result indicates the Tesla theory of camera-only navigation is working out "just fine"? This level of professional negligence should be considered a crime.
> And if it was truly unsafe or compromised we'd have seen a good deal of evidence
We see that evidence all the time. Teslas veering into oncoming traffic, hitting parked vehicles, driving through Looney Tunes walls where other cars stop, being fooled by smokescreens where other cars are not, and decapitating multiple people in a similar way that would have been mitigated by LiDAR. And now, apparently, rear-ending motorcyclists.
This whole self driving scam is an exercise in the 80/20 rule. They spent 20% of the time to get 80% of the results, and that's why you claim "I see teslas self driving just fine"
But Tesla has been promising the other 20% will be here in 5 years for 10 years. That last 20% is the difference between the system being "Full" self driving and a fraud. Right now what we see is a vaporware fraud, and they're not going to be able to deliver.
You are the one who raised the 1940s optical rangefinder question, and now talking about autofocus, which is a totally different problem than dense ranging from optical stereo disparity.
If you don't want to argue hypotheticals, don't do it.
If you choose to argue technical details in the absence of expertise, expect pushback from people with domain knowledge.
I spent 5 years working on long-distance optical stereo as part of a DARPA program, with a team that was very committed to stereo vision. Our limited success left me with an appreciation of the challenges versus lidar.
https://www.bizjournals.com/sanjose/news/2022/11/09/heres-wh...
He began selling shares because he didn't like the direction of the company and was unhappy about being forced out of it, he told the Business Journal. From almost the moment he was pushed out until this spring, he fought a war with Velodyne in the press, at shareholder meetings and in the courts. Although the lawsuit he filed against the company is ongoing, he eventually decided the company was a "dump" and couldn't be salvaged.
https://archive.is/oxwoh
The "with almost perfect accuracy and very little false positives" part is not true.
If you look at the EuroNCAP data, you'll see that most cars are not close to 100 in the Safety Assist category (and Teslas with just vision are among the top). And these EuroNCAP tests are fairly easy and idealized. So it's clearly not a solved problem, as you portray.
https://www.euroncap.com/en/ratings-rewards/latest-safety-ra...
Radar can absolutely detect a stationary object.
The problem is not, "moving or not moving", it's "is the energy reflected back to the detector," as alluded to by your second qualification.
So something that scatters or absorbs the transmitted energy is hard to measure with radar because the energy doesn't get back to the detector completing the measurement. This is the guiding principle behind stealth.
And, as you mentioned, things with this property naturally occur. For example, trees with low hanging branches and bushes with sparse leaves can be difficult to get an accurate (say within 1 meter) distance measurement from.
Non-moving vehicles are seen as approaching you at whatever speed you are moving at. Along with all the other things you mentioned.
So they all have doppler shift but the "stationary" things approaching your car at your speed actually have much higher shift than the traffic around you.
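Back-of-envelope version of that point, using the standard Doppler relation (77 GHz carrier in the automotive band; the speeds are placeholders):

```python
# Doppler shift f_d = 2 * v_rel * f_c / c for a few targets, illustrating that a
# "stationary" object closes at your full ego speed and so has a larger shift
# than the moving traffic around you. Speeds below are placeholder values.
C = 3.0e8          # m/s
F_CARRIER = 77e9   # Hz, within the 76-81 GHz automotive radar band

def doppler_hz(v_rel_mps: float) -> float:
    return 2 * v_rel_mps * F_CARRIER / C

ego = 30.0  # m/s (~108 km/h)
targets = {
    "parked car / bridge abutment (0 m/s)": ego - 0.0,
    "lead car doing 25 m/s": ego - 25.0,
    "oncoming car doing 30 m/s": ego + 30.0,
}
for name, v_rel in targets.items():
    print(f"{name}: closing {v_rel:.0f} m/s -> {doppler_hz(v_rel) / 1e3:.1f} kHz shift")
```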
It's hard to detect something sticking off the back of a truck, or a motorcycle behind a vehicle, without false-positive triggering off of other stuff and panic braking at dumb times, something early systems (generically, not any particular OEM) were known for, which is why they were mostly limited to warnings, not actual braking.
And while one can make bad-faith comments all day about that not technically being the fault of the system doing the braking, allowing such systems to proliferate would be a big class-action lawsuit, and maybe even a revision of how liability is handled, waiting to happen.
Earlier in the thread people were saying removing lidar was bad because multiple sensor types are good, presuming the cameras stay either way; one is not replacing the camera with radar. I agree with this. It's usually trivially easy to corner-case defeat one sensor type, as your example shows, regardless of sensor type. They all have one weakness or another.
That's why things like military systems have many sensor types. They really don't want to miss the incoming object so they measure it many different ways. Defeating many different sensor types is just way harder and therefore more unlikely to occur naturally.
And yes, control systems can absolutely reliably combine the input of many sensors. This has been true for decades.
Frankly, I'm surprised more of these systems don't take advantage of sound. It's crazy cheap, and society has been adding sound alerts to driving for a long time (sirens, car horns, train horns, etc.).
No, that's how lidar works. Lidars have a single frequency and a very narrow bandwidth. Automotive radars have a bandwidth of 1-5 GHz. They operate around 80 GHz, which is very well reflected by water (including people) and moderately reflected by things like plastic. 80 GHz is industrially used to measure levels of plastic feedstock.
Compare TSA scanner images, which are ~300 GHz: https://www.researchgate.net/figure/a-Front-and-back-millime...
You are correct that most automotive radars like Bosch units [1] are very low detail though. Most of them don't output images or anything- they run proprietary algorithms that identify the largest detection frequencies (usually a limited number of them) and calculate the direction and distance to them. Unlike cameras and lidars they do not return raw data, so naturally when building driver assistance companies instead relied on cameras and lidar. Progress was instead driven by the manufacturers and with smaller incentives the progress is slower.
[1]: https://www.bosch-mobility.com/en/solutions/sensors/front-ra...
For a long, long time automotive radar was a pipe dream technology. Steering a phased array of antennas means delaying each antenna by 1/10,000s of a wave period. Dynamically steering means being able to adjust those timings![1] You're approaching picosecond timing, and doing that with 10s or 100s antennas. Reading that data stream is still beyond affordable technology. Sampling 100 antennas 10x per period at 16 bit precision is 160 terabytes per second, 100x more data than the best high speed cameras. Since the fourier transform is O(nlogn), that's tens of petaflops to transform. Hundreds of 5090s, fully maxed out, before running object recognition.
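Working through the throughput claim with the same assumptions (100 antennas, 10 samples per 80 GHz period, 16-bit samples):

```python
# Reproducing the data-rate arithmetic above: raw sampling of a large digital
# phased array at 80 GHz is wildly beyond affordable hardware.
F_CARRIER = 80e9        # Hz
SAMPLES_PER_PERIOD = 10
N_ANTENNAS = 100
BYTES_PER_SAMPLE = 2    # 16-bit

sample_rate = F_CARRIER * SAMPLES_PER_PERIOD              # 800 GS/s per antenna
bytes_per_sec = sample_rate * N_ANTENNAS * BYTES_PER_SAMPLE
print(f"{bytes_per_sec / 1e12:.0f} TB/s raw")             # -> 160 TB/s, as stated
```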
Obviously we cut some corners instead. Current techniques way underutilize the potential of 80 GHz. Processing power trickles down slowly and new methods are created unpredictably, but improvement is happening. IMO radar has the highest ceiling potential of any of the sensing methods, it's the cheapest, and it's the most resistant to interference from other vehicles. Lidar can't hop frequencies or do any of the things we do to multiplex radar.
[1]: In reality you don't scan left-right-up-down like that. You don't even use just an 80 GHz wave, or even just a chirp (a pulsing wave that oscillates between 77-80 GHz). You direct different beams in all different directions at the same time, and more importantly you listen from all different directions at the same time.
(Also I wouldn't say it's _irrelevant_ that they don't have lidar, as if they did it would cover some of the same weaknesses as radar.)
The analysis is useless if it doesn't account for the base rate fallacy (https://en.m.wikipedia.org/wiki/Base_rate_fallacy)
The first thing I thought before even reading the analysis was "Does the author account for it?" And indeed he makes no mention that he did.
So after reading the whole article I have no idea whether Tesla's automatic driving is any worse at detecting motorcycles than my Subaru's (which BTW also uses only visual sensors).
Antidisclaimer: I hate both Teslas and Musk. And my hate for one is not tied to the other.
> It’s not just that self-driving cars in general are dangerous for motorcycles, either: this problem is unique to Tesla. Not a single other automobile manufacturer or ADAS self-driving technology provider reported a single motorcycle fatality in the same time frame.
There are not many other cars out there (in comparison) with a self-driving mode. There are so many Teslas in the world driving around that I think you'd have to considerably multiply all the others combined to get close to that number.
As such, while 5 > 0, and that's a problem, what we don't know (and perhaps can't know) is how that adjusts for population size. I'd want to see a motorcycle-fatality rate per automated-driving mile, and even then, I'd want it adjusted for the prevalence of motorcycles in the local population: the numbers in India, Rome, London, and Southern California vary quite a bit.
This puts the burden on companies which may hesitate to put their “self driving” methods out there because it has trouble with detecting motorcyclists. There is a solid possibility that self driving isn’t being rolled out by others because they have higher regard for human life than Tesla and its exec.
ADAS is fairly common. It was in my VW and BMW, and I’m certain many other cars have it too.
To take a hypothetical extreme: If all cars but one on the road were Teslas, it would not be meaningful to point out that there have been far more fatalities with Teslas.
Even more illustrative, if 10 people on motorcycles had died from Teslas, and 1 person had died from that sole non-Tesla, then that non-Tesla would be deemed much, much more dangerous than Tesla.
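A tiny worked version of that hypothetical; every number below is invented purely to show how exposure flips the conclusion:

```python
# Hypothetical illustration of the exposure/base-rate point: raw fatality counts
# mean nothing without miles driven. All numbers here are made up for the example.
fleets = {
    "Fleet A (dominant brand)": {"fatalities": 10, "adas_miles": 2.0e9},
    "Fleet B (the lone other car)": {"fatalities": 1, "adas_miles": 5.0e6},
}
for name, d in fleets.items():
    rate = d["fatalities"] / d["adas_miles"] * 1e8  # per 100M miles
    print(f"{name}: {d['fatalities']} deaths, {rate:.1f} per 100M miles")
# Fleet A has 10x the deaths, but per mile Fleet B is ~40x worse in this made-up example.
```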
The replies to my comment seem to me to be addressing the question of what the appropriate reference class is, not the base rate fallacy.
The base rate fallacy is fundamentally about the relative rates in the population, and I don't see that data in the article.
Seems like semantics to me, I don't think we actually disagree on much.
However, in such a case, “base rate fallacy” would prevent you from blaming Tesla even if it had a 98% fatality rate. How do you square that? What happens if other companies aren’t putting self driving cars out yet because they aren’t happy with the current rate of accidents, but Tesla just doesn’t care?
You handle it the same way any new technology is introduced. Standards and regulations, and these evolve over time.
When the first motor car company started selling cars, pedestrians died. The response wasn't to ban cars altogether.
The appropriate response would be to set some rules, examine the incidents, see if any useful information can be gleaned.
And of course, once more models are out there with self driving abilities, we compare between them as well.
Here, we can get better data than what's in the article: What is the motorcycle death rate with cars with no automated driving? If, per mile, it's higher than with Teslas with automated driving, then Tesla is already ahead. The article is biased right from the get go: It compares only cars with "self-driving" (whatever that means) capabilities, and inappropriately frames the conversation.
If I'm a motorcyclist, I want to know two things:
1. If all cars were replaced with Teslas with self driving capabilities, am I safer than the status quo?
2. If all self driving cars were replaced with other cars with self driving capabilities, am I safer than the status quo?
The article fails to answer these basic questions.
As all car manufacturers point out: A prerequisite to enabling any safety mechanism is that the driver overrides when it fails. This includes blind spot detection, lane drift detection/correction, and adaptive cruise control. It is understood that when I enable it, I'm responsible for its behavior given that I can override it.
But that's all an aside. The point isn't that I continue to drive it, but that this is not something special about Teslas.
And autonomy features are a different domain altogether.
It is a bad article.
* They basically invented the number of miles travelled, which is off by a large factor compared to the official figure from Tesla
* If you take into account the fact that the standard deviation of a count is proportional to the square root of that count, a comparison resting on five fatal accidents has no statistical significance whatsoever (a quick sketch below makes this concrete)
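For what it's worth, here's a quick R sketch of how wide the uncertainty around a count of five really is. Nothing Tesla-specific is assumed; it's just the exact Poisson interval for five observed events.

# 95% exact Poisson confidence interval for an observed count of 5 events.
# It spans roughly 1.6 to 11.7, i.e. the underlying expected count could
# plausibly be anywhere from about a third of the observed number to more
# than double it -- which is why five fatalities alone prove very little.
poisson.test(5)$conf.int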
That said -- and I might have missed this if it was in the linked sources, I'm on mobile -- what is the breakdown of other (supposed) AV adoption currently? What other types of crashes are there? Are these 5+ fatalities statistically significant?
Doesn't give the number of driving hours for Tesla vs. others, though.
Or that any collision that doesn’t involve airbag deployment is not actually an accident, according to Tesla.
You were saying something about stats?
It's worth noting Waymo's rider-only miles are a stronger claim than "FSD" miles. "Full Self-Driving" is Tesla branding (and very misleading: it expects an attentive human behind the wheel, ready to take over in a split second).
How many other self-driving vehicles are on the road vs. Tesla? What percentage of traffic consists of motorcycles in the places where those other brands have deployed vs. in Florida, etc.?
Combine the two, and the regulations will be written such that they exclude Tesla and include Waymo. Not by name; the safety regulations will simply require a safety record better than Tesla's but worse than Waymo's. Likely nobody but Waymo will have that record, and now nobody else will be able to get it, because they won't have access to the public roads to attain it.
This might be the most complete regulatory lock-in monopoly we've ever seen.
The solution seems easier, if only the regulators would pick up on it.
Under the current human driven auto regime, it is the human that is operating the machine who is liable for any and all accidents.
For a self-driving car, that human driver is now a "passenger". The "operator" of the machine is the software written (or licensed) by the car maker. So the regulation that assures self-driving is up-to-snuff is:
When operating in "self driving" mode, 100% of all liability for any and all accidents rests on the auto manufacturer.
The reason the makers don't seem to care much about the safety of their self driving systems is that they are not the owners of the risk and liability their systems present. Make them own 100% of the risk and liability for "self driving" and all of a sudden they will very much want the self-driving systems to be 100% safe.
Nor is it sufficient to ensure that self-driving is merely safer than human drivers; I don't think the public wants "slightly safer than humans".
Doesn't even seem that crazy when you consider the government is already licensing them to be able to use their private data anyway. Biggest issue is someone didn't set it up this way from the start.
If a competitor resold their system to other car companies, another possible scenario might be a duopoly like Apple versus Android.
The regulations are doing really well, it’s a big victory for regulators, why not make Teslas abide by the same rules? Why not roll out such strict scrutiny gradually to all vehicles and drivers?
You are talking about regulatory decrees that are about safety. But what lawmakers change seems to be reactive to other things, like how much the community depends on cars to survive. If you cannot eliminate car dependence, you can't really achieve a more moral legal stance than "people can and will buy cars that kill other people so long as they don't kill the driver."
In fairness to the regulators they have been pretty reasonable so far.
That's not a bad thing if Tesla is significantly worse than Waymo. That's desirable.
The solution here seems like it would be for Tesla to become as safe as Waymo. If they can't achieve that, that's on them. Unfair press doesn't cause that.
I mean, I care about not dying in a car accident. If Tesla is less safe, and this leads to people taking safer Waymos instead, I can't see that as anything but a good thing. I don't want to sacrifice my life so another company can put out more dangerous vehicles.
# Exact Poisson test of 5 Tesla fatalities vs. 0 for all other makers, at each assumed mileage ratio.
tesla.mult = c(1/(5:2), 1:5)
data.frame(tesla.mult = tesla.mult,
           p.value = sapply(tesla.mult, function(m) poisson.test(c(5, 0), c(m, 1))$p.value))
tesla.mult p.value
1 0.2000000 0.0001286008
2 0.2500000 0.0003200000
3 0.3333333 0.0009765625
4 0.5000000 0.0041152263
5 1.0000000 0.0625000000
6 2.0000000 0.1769547325
7 3.0000000 0.3408203125
8 4.0000000 0.5904000000
9 5.0000000 1.0000000000
tesla.mult is how many times more total miles Teslas have driven with level-2 ADAS engaged compared to all other makers. We don't have data for what that number should be because automakers are not required to report it. I think that it is probably somewhere between 1/5 and 5. If you believe that the number is more than 1, then the result is not statistically significant.
> ADAS that are considered level 1 are: adaptive cruise control, emergency brake assist, automatic emergency brake assist, lane-keeping, and lane centering. ADAS that are considered level 2 are: highway assist, autonomous obstacle avoidance, and autonomous parking.
https://en.m.wikipedia.org/wiki/Advanced_driver-assistance_s...
I think that Level 2 requires something more than adaptive cruise control and lane-keep assist, but that several automakers have a system available that qualifies.
My intuition is that there are more non-Tesla cars sold with Level 2 ADAS, but Tesla drivers probably use the ADAS more often.
So I don’t have high confidence what tesla.mult should be. I wish that we had that data.
The article cites 5 motorcycle fatalities in this data.
Four of the five were in 2022, when Tesla FSD was still closed beta.
The remaining incident was in April 2024.
(The article also cites one additional incident in 2023 where the injury severity was "unknown", but the author speculates it may have been fatal.)
I dunno, to me this specific data suggests a technology that has improved a lot. There are far more drivers on the road using FSD today than there were in 2022, and yet fewer incidents?
But this seems like a pretty legitimate accusation, and certainly a well researched write-up at the very least.
The author kind of plays this up a bit by insinuating that there are incidents we don't know of, and they probably aren't wrong that if there are five fatalities there are going to be many more near misses and non-fatal fender bender collisions.
But for the number of millions of miles on the road covered by all vehicles, extrapolating from five incidents is doing a lot of statistical heavy lifting.
The competitors have to use pre-mapped roads, and availability is spotty at best. There is also risk: Chevy already deprecated their first-gen "FSD", leaving early adopters with gimped ability and shut out from future expansions.
Level 4 is a commercially viable product. Mapping allows verification by simulation before deployment. Tesla offers a level-2, driver-supervised system, which is not monetizable beyond being a gimmick.
What is clear is Tesla is not currently capable of self driving and he has lied year after year after year about it.
I think carmakers should be liable for their cars' capabilities in the areas where they allow them to be used.
Tesla has already said that some of its vehicles, sold with “all the hardware necessary for FSD” will never get it.
No they didn't. They said it turned out the vehicles didn't have all the hardware necessary, but that a free retrofit to add it will be forthcoming.
As far as I am aware, everyone else's offerings only work in pre-mapped areas, i.e. Chevy's system only covers half my commute.
We know this is one of the core issues of Tesla FSD: its capabilities have been hyped and over promised time and time again. We have countless examples of drivers trusting it far more than they should. And who’s to blame for that? In large part the driver, sure. But Elon has to take a lot of that blame as well. Many of those drivers would not have trusted it as much if it wasn’t for his statements and the media image he has crafted for Tesla.
I'd wager far more motorcyclists get hit by humans driving non-Teslas in non-autonomous modes. I could rephrase your comment to:
"Yeah, but if humans drive cars without safety features, and that leads to a bunch of motorcyclists getting hit, then maybe that’s exactly the problem."
... to make the (faulty[1]) argument that driving with FSD turned on is better for motorcyclists.
[1] Faulty not because it's false, but because it is a logical fallacy.
Tesla is a victim of their own success, they’ve set the bar so high people now expect it to have 0.0000% failure rate.
People arguing over base rates of motorcycle accidents as if Tesla didn't get fooled by a Looney Tunes wall. If Waymo had killed 5 motorcyclists in SF we would know. But they have operated there without incident for years.
Meanwhile, just after Tesla released Autopilot to the world, a man was decapitated because the system was deficient. Then it happened again in 2016 under eerily similar circumstances. Then we observe Teslas hitting broad objects like fire trucks and buses.
The correct response to that is not to say "Well what's the base rate for decapitations and hitting broad objects?"
No, you find out the reason this thing is happening. And the reason is known: a deficient sensor suite that is prone to missing objects clearly in its field of view.
So with the motorcycle situation, we already know what the problem is. This isn't a matter of a Tesla just interfacing with the statistical reality of getting rear-ended by a car. Because we know Teslas have a deficient sensor suite.
Important distinction: FSD didn't get fooled by a Looney Tunes wall. Legacy Autopilot did.
But other people have tried to reproduce the experiment with FSD, and it wasn't fooled.
TEST 1 (FSD): https://youtu.be/9KyIWpAevNs?feature=shared&t=112
"Show FSD is activated on video, we did that... Here we go feet aren't touching, hands aren't touching.... it's going to hit the wall!" *slams on breaks, ends up about 2 meters from the wall* "Cannot see the wall" *inches forward until the car can see the wall* "Only sees the wall when I'm barely touching it"
TEST 2 (FSD):
https://youtu.be/9KyIWpAevNs?feature=shared&t=169 "Self drive, not touching anything." *manually slams on brakes, stops a few meters short* "That was gonna hit the wall" *inches forward* "Car... does... not... see... now it does." *only inches from the wall* "That would have been too late" (ya think??)
TEST 3 (Autopilot): https://youtu.be/9KyIWpAevNs?feature=shared&t=267 "Does not see the wall... does not see the wall... does not see the wall..." *manually slams on brakes to avoid hitting the wall*
The FSD tests are not any better than the AP results from the original Looney Tunes test. So that's why I don't agree that FSD vs. AP is an important distinction. Maybe sometimes FSD is not fooled by the cartoon-wall trick. But the results show that even in ideal conditions - full light, no weather, clean course - the thing can still fail.
Research prototypes from two decades ago would not be fooled by this. The sophomores in my intro to robotics class could build a robot that would not be fooled by this. And yet Elon Musk and the geniuses at Tesla can't build a car that isn't fooled.
Sidenote: It's amazing to me that we can't look up data and see the extensive government reports on the safety and capabilities of these systems. We are literally just deploying them on public streets and relying on random youtube celebrities to conduct these evaluations, because Musk has fully captured the people who would do this kind of regulation.
Watching how fast these things accelerate and how close they have to be to a literal wall to see it is terrifying.
The issue is not quite how good the automation is in absolute terms, it's how good it is vs. how it is sold. Tesla is an outlier here, right down to the use of the term "FSD" i.e. "Full Self-Driving", when it's nothing of the sort.
Tesla could very well have the best FSD marketing without having the best software. And that's dangerous.
Data point: the "First To Gain U.S. Approval For Level 3 Automated Driving System" is ... Mercedes-Benz
https://www.forbes.com/sites/kyleedward/2023/09/28/mercedes-...
The second will be, IDK, maybe BMW? https://www.bmwblog.com/2024/06/25/bmw-approval-for-combinin...
It’s actually more a testament to Mercedes Benz’s ability to navigate regulatory slog.
VW would’ve passed emissions if the technicians hadn’t taken it out of the lab to test under on-road conditions.
Not just “the driver is only in the seat for legal reasons” bullshit (“but watch us throw a press conference to parade your telemetry data for the world if we think it will protect our reputation at your expense”).
I for one find this line of argument - that videos on YouTube are a stronger signal than regulatory approval - to be absolutely wrongheaded, delusional and laughable. YouTube proves nothing, and as a platform, was never intended to. Unlike regulatory approval.
Comparing FSD to whatever Mercedes is doing is like comparing ChatGPT to Markov chains.
If you get rear-ended at a stoplight or stop sign, it’s very likely the motorcyclist is not at fault. The motorcyclist suffers significantly more bodily injury than a car driver would in a similar collision. As a motorcyclist, you can tell that sometimes people just don’t see you because their brain is looking for something car-shaped.
When I ride, every time I stop at a stoplight or stop sign I am watching my rear-view mirror to judge whether the person behind me is going to stop, and I have an exit strategy if they don’t. I’ve had some close calls.
And I rarely find myself splitting when the light changes; I simply dip back into the lane when the light changes before traffic starts moving.
Still, I always leave the bike in gear until the car behind me has stopped, and if I can, I stop slightly diagonally with enough space to move left or right to avoid getting sandwiched.
2nd hand collision, still pretty dangerous.
- A drunk driver doing 100 in a 45 (by pressing down on the pedal) through a yellow light
- A driver who “didn’t see the motorcyclist” because he was looking at his PHONE, but who had the go pedal pressed down at 95-100% for as many as 10 seconds after hitting him, to the point where witnesses say the front wheels were spinning while up in the air
- Others with no detail; not the author's fault, but from the ones we do have, there are clearly often circumstances which would require more analysis before coming to this conclusion
The video title is "Tesla Autopilot Crashes into Motorcycle Riders - Why?"
The amazing part is the one guy who created this video covered all insightful comments here in HN in one concise video 2 years ago.
[1] https://www.youtube.com/watch?v=IQJL3htsDyQ&pp=ygUQbWFyayByb...
The problem is that we don't have a certification process before a tech is deployed on the streets: your AV must pass such-and-such tests before you are allowed to use it. We don't care how you got there; lidar, GPS/astrology, camera/AI are all OK as long as you pass.
I have had a Tesla for several years now. The visual object detection display very frequently misidentifies objects. Just the other day, it detected my garage (which the car was parked next to) as a semi truck.
I have never, with my wildly imperfect human vision, mistaken a building for a semi truck.
More to the point of the OP, sometimes objects show up and disappear repeatedly.
As I’ve said in other comments, I think EVs and autonomous driving will eventually improve the lot of humanity greatly, but there’s a lot to be desired in the current tech and not any point in trying to pretend it is better than it really is.
Transient errors happen all the time, but somehow you trust yourself more to keep driving.
What I’ve not done is seen a motorcyclist in front of me, then decided it wasn’t really there, and then hit said motorcyclist.
We have zero evidence that vision-based computers are safe enough to drive.
No, we don't.
We have evidence that visual detection (well, the whole human set of senses, but for driving most of the reliance is on visual detection), combined with human level intelligence, is sufficient for the human-level driving which is the absolute minimum bar for fully automated driving.
Which is very different than “visual detection works quite well”.
The signal we get is enough. It is the processing that is still lacking. And that is why there should be benchmarks to be passed. Cameras and microphones do provide enough information for successful driving.
I'm not sure what your point is, but what you actually said overstates the case for visual detection.
> The signal we get is enough.
Sure, we know that with processing far beyond what we currently can replicate, it can at least just meet the minimum bar.
> And that is why there should be benchmarks to be passed
Is it? I would think there should be proof and validation no matter which pieces were demonstrated adequate and which have not yet been.
But the thing is that we don't even know enough to define a set of testable benchmarks that would provide confidence, and we maybe won't until we've had several systems demonstrate that they work well enough in real conditions, from which we can generalize a minimum set of standards that can be verified prior to getting into real conditions.
So we do supervised testing in progressively-less-limited real world conditions of each system until we get to that point.
What are you talking about specifically?
But they seem to be conflating human vision with computer vision.
That said, what if we made these self-driving tools fully available to drivers? Drivers are pretty good at driving already with relatively short training. Give them the superpowers of Teslas, like the 360 view, predictive object paths, and object detection, and you have super drivers.
If you're now shifting to the idea that we need to use state-of-the-art, sophisticated camera technologies instead of the commodity stuff, you're back to paying LiDAR prices, so why not just use that?
And anyway, Teslas sold today and for a long time now are supposed to have been sold with cameras sufficient to solve full self driving (not beta). If state of the art cameras are needed, I've got bad news for all those Tesla customers.
Doesn't matter when mileage isn't what's being compared - it's whether or not others have caused the same problem - PERIOD.
There's definitely an issue with Tesla's approach, though.
Maybe the mileage on other brands is lower, because Tesla is unique in hyping as "full self-driving" a product that's not ready? It would still be a problem unique to Tesla then.
That is: until self-driving cars ride much more, obviously.
In the video Chuck Cook states he was involved in a fatal motorcycle accident in Florida in April 2022 where the motorcycle wheel fell off and then careened into his Tesla while Navigate on Autopilot was engaged. The reporting source for that NHTSA incident is "Field Report" and "Media" which lines up with his statement that a Florida Highway Patrol officer reviewed the footage and that Tesla most likely first learned of the crash from Chuck Cook's page classified as "media".
If the incident was not the crash involving Chuck Cook, then Chuck Cook's crash would be a crash that Tesla has left illegally unreported as I can see no other identifiable crash that could correspond to Chuck Cook's crash.
It is crazy to me that anyone trusts FSD given the downside.
Maybe it wouldn’t be an issue if bike infrastructure were separate from car traffic. Or if the USA had even a fraction of the foresight of the Netherlands when it came to transportation infrastructure.
They do not accelerate to turn in front of me but instead slow down and pull in behind, waiting patiently for their turn-off. There are a few areas where I need to take up a whole lane for a short time (no bike lane), and they don't get angry and honk. In general I feel safer with them around than with an unpredictable human driver.
I think self-driving cars are going to save a lot of lives. Not Tesla's tech of course, they're too far behind, but the actual on the road right now self-driving cars are fantastic.
Currently over 40K people are killed in the US every year from automobile fatalities and people think that’s just the price we pay for driving. I am not making a value judgement about that statistic.
But if even 1 person dies because of an autonomous vehicle fatality, there are going to be investigations and experience is that the entire platform is going to be paused.
Besides who is liable for an autonomous vehicle fatality? Is Tesla going to assume liability as the “driver”? Are insurance companies?
On the other hand, Tesla is not exactly known for reliability.
https://insideevs.com/news/731559/tesla-least-reliable-used-...
And its owner is not exactly known for honesty.
I believe that has been pretty much figured out, and insurance companies offer policies that cover it.
Sure, they are usually not "intelligent", but are auto-pilots in cars?
Also dangerous workplaces are carefully controlled access environments where only the people that work there are in danger.
Imagine if airplane fatalities happened even 1% as often as car fatalities. Right now the ratio is around 770:1 per passenger mile.
I don't think I agree that workplace accidents are that different. It's not primarily the workers who die who benefit from those machines that kill them - I'd say the workers _are_ someone else.
It's just that we've gotten used to that, and we feel like the number is steady and low enough that it's easily outweighed by the gigantic benefits we get from industrialization. My assumption is that it's the same way for cars and car-related deaths, but we haven't extended it to self-driving cars yet.
Individually, everybody sees the benefit, but once integration of FSD hits the commercial side and consumer prices are being reduced as a direct consequence, deaths become work-place accidents. Like a truck with a human driver killing an "innocent" pedestrian - it's accepted because we really want the trucks.
I’m not saying only the workers benefit. I’m saying there is no outrage if some coal miners die in West Virginia and definitely there is no outrage about the working conditions of immigrant farmers, factory workers at Amazon warehouses, and definitely not the supply chains overseas.
But if little Becky got killed by an autonomous car, you would have all sorts of hearings.
My point being that FSD cars are even more awesome and society will realize that once they are more numerous. They'll be safer and cheaper, and some collateral damage will be accepted just like it is today.
The American citizens won’t accept vaccines, fluoride, and believe that immigrants are eating pets. The very definition of “conservative” is not accepting change.
Won't happen soon. Tesla's previous promises turned out to be hype to prop up the stock price. You should not put faith in this one.
Looking at the camera preview screen on a Tesla with the Sun in frame is all you need to do to prove that false.
My question is this: would Teslas be improved with Lidar?
But yeah, like Sam Harris says now, perhaps I didn't know the real Musk.
People accusing me and others of "you understand this only now?" are full of shit. You didn't know, just like me. Holier than thou is distasteful.
Human vision is nothing like the cameras on a Tesla. Our eyes have far more advanced systems, including the ability to microscan, pivot, and capture polarisation, plus much more sophisticated depth perception. Frankly, current-generation cameras are in many ways far more primitive than human eyes. And that's not even considering the over 20% of our brain mass dedicated to post-processing.
All because Musk wants to pad his profits and keep the hype without paying for a lidar.
Jesus Christ I know the zeitgeist is a cultural backlash against political correctness but goddamn we're getting a little cavalier using 1930s-esque language about non minority groups here aren't we?
You couldn't have said serious injuries vs deaths? Or deaths vs maimings?
I have a slight limp and am a member of a scapegoated minority who uses slurs in-community, I'm not easily offended (I'm inherently offensive) and your lil' shoutdown is gross.
For good reason, as soon as Trump winked, all of the corporations and colleges immediately dropped the programs.
My minority-status is only relevant to further illuminate the emptiness of the argument being made. Most people can claim some degree of structural disadvantage in their lives.
The revival of eugenicist language is not a positive trend for any member of a disadvantaged demographic, regardless of how they wield their position in society.
Before October 2024, the highway stack was still running on Tesla’s older software. A major shift happened with version 12.5.6, which brought the end-to-end neural network to highways for the first time.
Then in December, version 13.2.2 pushed things even further by scaling the model specifically for HW4/AI4. It’s a major step up from earlier versions like 12.3.x from April.
I absolutely feel for anyone who’s been involved in an accident while using FSD. But at the end of the day, the system still requires driver supervision. It’s not autonomous, and the responsibility ultimately falls on the person behind the wheel.
If you’re going to evaluate how FSD performs today, you really need to be looking at version 13 or greater. Anything older just doesn’t reflect what the system is capable of now.
But they don't, and I'm not sure if there have been many successful cases of suing car manufacturers because their cars let people drive drunk(?)
Lane changes used to feel robotic. Speed had to be adjusted manually all the time just to feel comfortable.
The new system feels much more human. It has driving profiles and adapts based on traffic, which makes the experience way smoother.
Have you tried fsd? I use it almost everyday. I’m more arguing that the article is misleading about the current version of fsd. I do not doubt that the accidents happened on the older version.
FSD cannot be turned on above 85 mph. If you try to accelerate above 85 mph it disengages.
> That’s how you would strike a motorcyclist at such extreme speed, simply press the accelerator and all other inputs are apparently overridden.
If the user is pressing the accelerator pedal, it overrides the autopilot. I’m not sure how FSD could reasonably be blamed in that situation.
The article also talks about Traffic Aware Cruise Control (TACC). There is a nuance between the different systems. This is not Full Self-Driving (FSD). It is an older system that Tesla provides for free. All it does is try to maintain a set speed, and you still have to steer. If the user is pressing the accelerator pedal, it overrides the system. I am not sure how FSD could reasonably be blamed in that situation.