This is because the vision system thinks there is something obstructing its view when in reality it is usually bright sunlight -- and sometimes, absolutely nothing that I can see.
The wipers are, of course, the most harmless way this goes wrong. The more dangerous type is when it phantom-brakes at highway speeds with no warning on a clear road and a clear day. I've had multiple other scary incidents of different types (swerving back and forth at exits is a fun one), but phantom braking is the one that happens quasi-regularly. Twice when another car was right behind me.
As an engineer, this tells me volumes about what's going on in the computer vision system, and it's pretty scary. Basically, the system detects patterns that are inferred as its vision being obstructed, and so it is programmed to brush away some (non-existent) debris. Like, it thinks there could be a physical object where there is none. If this was an LLM you would call it a hallucination.
But if it's hallucinating crud on a windshield, it can also hallucinate objects on the road. And it could be doing it every so often! So maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking. And those filters are pretty damn good -- I mean, the technology is impressive -- but they can probabilistically fail, resulting in things that we've already seen, such as phantom-braking, or worse, driving through actual things.
This raises so many questions: What other things is it hallucinating? And how many hardcoded guardrails are in place against these edge cases? And what else can it hallucinate against which there are no guardrails yet?
And why not just use LIDAR that can literally see around corners in 3D?
There is none with Musk's "vision only" approach. Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan -- the car is effectively driving blind.
Driving is a dynamic activity that involves a lot more than just vision. Safe automated driving can use all the help it can get.
Both LIDAR and vision have edge cases where they fail. So you ideally want both, but then the challenge is reconciling disagreements with calibrated, probabilistic fusion. People seem to be under the mistaken impression that vision is dirty input and LIDAR is somehow clean, when in reality both are noisy inputs with different strengths and weaknesses.
I guess my point is: Yes, 100% bring in LIDAR, I believe the future is LIDAR + vision. But when you do that, early iterations can regress significantly from vision-only until the fusion is tuned and calibration is tight, because you have to resolve contradictory data. Ultimately the payoff is higher robustness in exchange for more R&D and development workload (i.e. more cost).
The same reason why Tesla needed vision-only to work (cost & timeline) is the same reason why vision+LIDAR is so challenging.
It's the ability to detect sensor disagreements at all.
With single-modality sensors, you have no way of truly detecting failures in that modality, other than heuristics like checking readings against time-series norms (i.e., expected scenarios).
If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
But you'd think that the budget configuration of the Boeing 737 MAX would have taught us that tying safety-critical systems to single sources of truth is a bad idea... (in that case, a critical modality fed by a single physical sensor)
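The "drop into a safe mode on disagreement" idea can be sketched in a few lines. This is a toy illustration, not real AV code; the function names and the 2 m tolerance are invented for the example:

```python
# Hypothetical sketch: cross-checking two independent sensor modalities.
# Names and thresholds are illustrative, not from any real AV stack.

def modalities_agree(vision_dist_m, lidar_dist_m, tolerance_m=2.0):
    """Return True if the two range estimates are within tolerance."""
    return abs(vision_dist_m - lidar_dist_m) <= tolerance_m

def choose_mode(vision_dist_m, lidar_dist_m):
    """Drop to a conservative mode whenever the modalities disagree."""
    if modalities_agree(vision_dist_m, lidar_dist_m):
        return "normal"
    return "maximum_safety"  # e.g. slow down, widen following distance

print(choose_mode(30.0, 30.5))  # close agreement -> "normal"
print(choose_mode(30.0, 12.0))  # gross disagreement -> "maximum_safety"
```

Even without any fusion, this cheap cross-check is information a single-modality system simply cannot compute.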
"A man with a watch always knows what time it is. If he gains another, he is never sure"
Most safety critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which is right?
EDIT:
> If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.
This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
What do you do?
They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
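As a toy illustration of what building the joint probability distribution buys you over a straw poll: for two independent Gaussian estimates of the same quantity, the inverse-variance-weighted combination is the optimal estimate, and its variance is always smaller than either sensor's alone. The numbers below are made up:

```python
# Illustrative sketch: fusing two noisy Gaussian estimates by
# inverse-variance weighting, rather than "voting". Two sensors are
# enough for this to improve accuracy; variances here are invented.

def fuse(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # fused variance is smaller than either input
    return mu, var

# Camera says 10.0 m (noisy), LIDAR says 10.6 m (precise):
mu, var = fuse(10.0, 4.0, 10.6, 0.04)
print(round(mu, 2), round(var, 3))
```

Note the fused mean lands near the more precise sensor, weighted by confidence rather than by majority.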
> You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
Any realistic system would see them long before your eyes do. If you are so worried, override the AI in the moment.
Lots of safety critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems.
It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved.
You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains.
> If you are so worried, override the AI in the moment.
This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind.
This might seem like an edge case, but edge cases matter when failure might kill somebody.
1. https://space.stackexchange.com/questions/9827/if-the-space-...
This is completely different from systems that cover different domains, like vision and lidar.
In many domains I see a tendency to oversimplify decision-making algorithms for the convenience of human understanding (e.g., voting rather than building a joint probability distribution in this case; supply chain and manufacturing in particular seem to love rules of thumb), rather than using the better algorithms that modern compute enables for higher performance, safety, etc.
I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind:
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them.
No, when it comes to not killing people, I'd say that safer is usually better.
Remember the core function of the system is safety, simplicity is nice to have, but explicitly not as important.
That said, beware of calling something 'complicated' just because you don't understand it, especially if you don't have training and experience in that thing. What's more relevant is whether the people building the systems think it is too complicated.
Cars can stop in quite a short distance. The only way this could happen is if the pedestrian was obscured behind an object until the car was dangerously close. A safe system will recognize potential hiding spots and slow down preemptively - good human drivers do this.
"Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds.
I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop.
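For a rough sense of the distances involved, the standard back-of-the-envelope model is reaction distance plus braking distance: v*t_react + v^2/(2*mu*g). The reaction time and friction coefficient below are typical textbook values, not measurements:

```python
# Back-of-the-envelope stopping distance on a flat, dry road (assumed).
# reaction distance + braking distance = v*t_react + v**2 / (2*mu*g)

def stopping_distance_m(speed_kmh, reaction_s=1.5, mu=0.7, g=9.81):
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v**2 / (2 * mu * g)

print(round(stopping_distance_m(50), 1))     # city speed: roughly 35 m
print(round(stopping_distance_m(110), 1))    # highway speed: well over 100 m
```

Because braking distance grows with the square of speed, doubling your speed much more than doubles the total -- which is exactly the intuition driver's school tries to instill.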
While it's true they don't stop instantaneously at highway speeds, cars shouldn't be driving highway speeds when a pedestrian suddenly being in front of you is a realistic risk.
I don't think these problems can just be assumed away.
For example, say you have a pedestrian that's partially obscured by a car or another object, and maybe they're wearing a hat or a mask or wearing a backpack or carrying a kid or something, it may look unusual enough that either the camera or the lidar isn't going to recognize it as a person reliably. However, since the camera is generally looking at color, texture, etc in 2D, and the Lidar is looking at 3D shapes, they'll tend to fail in different situations. If the car thinks there's a substantial probability of a human in the driving path, it's going to swerve or hit the brakes.
> This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.
> What do you do?
Go into your failure mode. At least you have a check to indicate a possible issue with 2 signals.
The issue is not recognising that optimising for UX at the expense of safety is the wrong call here, motivated more by optimism and a desire for autonomous cars than by reasonable system design. I.e., if the sensors disagree so often that it makes the system unusable, maybe the answer is "we're not ready for this kind of technology and we should slow down" rather than "let's figure out non-UX-breaking edge-case heuristics to maintain the illusion that autonomous driving is around the corner".
Part of this problem is not even technological -- human drivers trade off safety for UX all the time -- so the expectation for self-driving is unrealistic, and your system has to adopt the ethically unacceptable configuration in order to have any chance of competing.
Which is why -- in my mind -- it's a fool's errand in the personal-car space, but not in public transport. So go Waymo, boo Tesla.
People are underweighting the alternative single system hypothetical -- what does a Tesla do when its vision-only system erroneously thinks a pedestrian is one lane over?
This is why good redundant systems have at least 3... in your scenario, without a tie-breaker, all you can do is guess at random which one to trust.
For example jet aircraft commonly have three pitot static tubes, and you can just compare/contrast the data to look for the outlier. It works, and it works well.
If you tried to do that with e.g. LIDAR, vision, and radar with no common point of reference, solving for trust and resolving disagreements is an incredibly difficult technical challenge. Other variations (e.g. two vision + one LIDAR) do not really make it much easier either.
Tie-breaking during sensor fusion is a billion+ dollar problem, and will always be.
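For the like-sensor case (e.g. three pitot tubes), the compare-and-outvote logic really is this simple -- which is exactly why it doesn't transfer to dissimilar modalities with no common point of reference. A toy sketch, with invented names and thresholds:

```python
# Toy version of pitot-tube-style redundancy: three like sensors measuring
# the same quantity; take the median and flag any channel far from it.

def vote(readings, max_dev=5.0):
    """Return (median, indices of channels deviating more than max_dev)."""
    s = sorted(readings)
    median = s[len(s) // 2]
    suspects = [i for i, r in enumerate(readings)
                if abs(r - median) > max_dev]
    return median, suspects

# Say these are airspeed readings in knots; channel 2 has iced over:
median, suspects = vote([250.0, 251.5, 198.0])
print(median, suspects)  # the third channel is flagged as the outlier
```

With three channels of the same physical quantity, "which one is right?" has a trivial answer; with a camera, a LIDAR, and a radar measuring different things, it does not.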
Also, this is probably when Waymo calls up a human assistant in a developing-country callcentre.
But vision only hasn't worked --- not as promised, not after a decade's worth of timeline. And it probably won't any time soon either --- for valid engineering reasons.
Engineering 101 --- *needing* something to work doesn't make it possible or practical.
It was maybe a valid argument 10 years ago, but in 2025 many companies have shown sensor fusion works just fine. I mean, Waymo has clocked 100M+ miles, so it works. The AV industry has moved on to more interesting problems, while Tesla and Musk are still stuck in the past arguing about sensor choices.
The old ambition is dead.
[1] https://electrek.co/2025/05/16/tesla-robotaxi-fleet-powered-...
I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver.
If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance.
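A minimal sketch of the "another sensor is just another input" point: in an end-to-end network, each modality is reduced to a feature vector and concatenated before the shared layers. The shapes and weights here are arbitrary stand-ins, not any real architecture:

```python
import numpy as np

# Illustrative only: modality features are just concatenated inputs.
rng = np.random.default_rng(0)
camera_feat = rng.normal(size=64)   # e.g. pooled CNN features
lidar_feat = rng.normal(size=32)    # e.g. voxelized point-cloud features

x = np.concatenate([camera_feat, lidar_feat])   # combined input, shape (96,)
W = rng.normal(size=(2, x.size)) * 0.01         # tiny stand-in "head"
logits = W @ x
print(logits.shape)
```

During training, a modality that contributes nothing would simply end up with near-zero weights -- which is also how you'd discover that a sensor isn't pulling its weight.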
It's really hard to accept cost as the reason when Tesla is preparing a trillion dollar package. I suppose that can be reconciled if one considers the venture to be a vehicle (ha!) to shovel as much money as possible from investors and buyers into Elon's pockets, I imagine the prospect of being the worlds first trillionare is appealing.
Infra-red at a few different wavelengths, in addition to the visible range, seems like it'd give a superior result?
Notably, sensor confusion is also an "unsolved" problem in humans, e.g. vision and vestibular (inner ear) conflicts possibly explaining motion sickness/vertigo <https://www.nature.com/articles/s44172-025-00417-2>
The results of both tournaments: <https://carnewschina.com/2025/07/24/chinas-massive-adas-test...> Counterintuitively, vision scored best (Tesla Model X)
The videos are fascinating to watch (subtitles are available): Tournament 1 (36 cars, 6 Highway Scenarios): <https://www.youtube.com/watch?v=0xumyEf-WRI> Tournament 2 (26 cars, 9 Urban Scenarios): <https://www.youtube.com/watch?v=GcJnNbm-jUI>
Highway Scenarios: “tests...included other active vehicles nearby to increase complexity and realism”: <https://electrek.co/2025/07/26/a-chinese-real-world-self-dri...>
Urban Scenarios: “a massive, complex roundabout and another segment of road with a few unsignaled intersections and a long straight...The first four tests incorporated portions of this huge roundabout, which would be complex for human drivers, but in situations for which there is quite an obvious solution: don’t hit that car/pedestrian in front of you” <https://electrek.co/2025/07/29/another-huge-chinese-self-dri...>
But we're far from plateauing on what can be done with vision -- humans can drive quite well with essentially just sight, so we're far from exhausting what can be done with it.
So like a human driver. Problem is, automatic drivers need to be substantially better than humans to be accepted.
>The EX90's LiDAR enhances ADAS features like collision mitigation and lane-keeping, which are active and assisting drivers. However, full autonomy (Level 3) is not yet available, as the Ride Pilot feature is still under development and not activated.
All of that combined is probably closer to $1k than to $140.
And, again, that's - what - 10 years after Tesla originally made the decision to go vision only.
It wasn't a terrible idea at the time, but they should've pivoted at some point.
They could've had a massive lead in data if they pivoted as late as 3 years ago, when the total cost would probably be under $2.5k, and that could've led to a positive feedback loop, cause they'd probably have a system better than Waymo by now.
Instead, they've got a pile of garbage, and no path to improve it substantially.
They might be!
But I doubt it.
I don't know enough about Tesla's cameras, but it's not implausible to think there are LIDARs of low enough quality that you'd be better off with a good quality camera for your sensor.
Again, I doubt this is the case with BYDs cameras.
But it's still worth pointing out, I think.
My point is, BYD's LIDAR system costing $x is only one small part of the conversation.
Solid-state LIDAR is still a fairly new thing. LIDAR sensors were big, clunky, and expensive back when Tesla started their Autopilot/FSD program.
I googled a bit and found a DFR1030 solid-state LIDAR unit for 267 DKK (for one). It has a field of view of 108 degrees and an angular resolution of 0.6 degrees. It has an angle error of 3 degrees and a max distance of 300mm. It can run at 7.5-28 Hz.
Clearly fine for a floor-cleaning robot or a toy. Clearly not good enough for a car (which would need several of them).
LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this influences how they, and regulators, will think about self driving cars.
And the general public?! No way. Most are completely unaware of the foibles of LLMs.
No, they don't. You're making a straw man rather than putting forth an actual argument in support of your view.
If you feel you can't support your point, then don't try to make it.
I responded to this parent comment:
"LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything."
You take issue with my response of:
"loads of DEVs on here will claim LLMs are infallible"
You're not really making sense. I'm not straw-manning anything, as I'm directly discussing the statement made. What exactly are you presuming I'm throwing a straw man over?
It's entirely valid to say "there are loads of supposed experts that don't see this point, and you're expecting the general public to?". That's clearly my statement.
You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part.
Do pay better attention please. And your entire last sentence is way over the line. We're not on reddit.
The irony of telling someone not to be rude while being absolutely insufferable. Peak redditor behavior.
Please provide examples. Thank you!
Based on what I've read over the years: it costs too much for a consumer vehicle, it creates unwanted "bumps" in the vehicle visual design, and the great man said it wasn't needed.
Yes, those reasons are not for technology or safety. They are based on cost, marketing, and personality (of the CEO and fans of the brand).
https://opg.optica.org/oe/fulltext.cfm?uri=oe-31-2-2013&id=5...
I'm in a 2025 with HW4, but the dramatic improvement over the last couple of years (I previously had a 2018 Model 3) increased my confidence that Elon was right to focus on vision. It wasn't until late last year that I found myself using it more often than not; now I use it on almost every drive, point to point (Cupertino to SF), and it handles it.
I think people are generally sleeping on how good it is and the politicization means people are under valuing it for stupid reasons. I wouldn't consider a non Tesla because of this (unless it was a stick shift sports car, but that's for different reasons).
Their lead is so crazy far ahead that it's weird to see this reality and then see the comments on HN that are so wrong. Though I guess it's been that way for years.
The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.
If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off.
I'd really encourage people here with a bias to dismiss to ignore the comments and just go try it out for yourself in real life.
This is not a general solution, it is an SF one... at best.
Most humans also don't get in accidents or have problems with phantom braking within the timeframe that you mentioned.
Have you met any humans? Or seen people driving?
The Bay Area has massive traffic, complex interchanges, SF has tight difficult roads with heavy fog. Sometimes there’s heavy rain on 280. 17 is also non trivial.
What Tesla has done is not trivial and roads outside the bay are often easier.
People can ignore this to serve their own petty cognitive bias, but others reading their comments should go look at it for themselves.
Here outside of Los Angeles, about an hour east, they do not do well at all on their 'auto-pilot.'
Your area has the benefit of being one of the primary training areas, and thus the dataset for your area is good.
Try that here. I'll be more than happy to watch you piss yourself as the Tesla tries to take you into the HOV lane THROUGH THE BARRIERS.
To date, SpaceX has sent nothing to Mars. Not to understate the company's accomplishment, but "people on HN" are fed up exactly with statements like yours.
My point is people will still be calling him a fraud when they do get it to mars, no evidence is sufficient for the HN cynic that thinks their “above the fray” ethos makes them smart.
Tesla has had massive success despite the haters -- the Model Y literally became the best-selling car on earth, and you wouldn't know it from HN. FSD has gotten really good, good enough to use more often than not as they continue to improve it.
The best thing about capitalism is the losers here don’t matter - the winners get rich and keep going.
How is it politicization when TESLA THE COMPANY is saying Full Self Driving doesn't mean "Full" "Self" Driving?
If it is as good as you claim, why doesn't Tesla claim it's Full Self Driving?
But what is the point of using it everywhere if you still need to pay attention to the road and keep your hands on the steering wheel?
It’ll be nice when that’s not required anymore, but even today it’s way more comfortable.
Why would anyone listen to the opinion of someone who bought a Tesla in 2025?
The only people still buying them are musk fanboys.
LIDAR requires line-of-sight (LoS) and hence cannot see around corners, but RADAR probably can.
It's interesting to note that the 2nd most popular post of all time about Tesla is from 9 years ago, on its full self-driving hardware (2nd only to the controversial Cybertruck) [1].
>Elon's vision-only move was extremely "short-sighted"
Elon's vision was misguided because some of the technologists at the time, seemingly including him, truly believed that AGI was just around the corner (pun intended). Most tech people have since walked back the AGI claim, blaming the blurry definition of AGI, but for me the true killer AGI application has always been full Level 5 autonomous driving with only human-level sensory perception -- minus the LIDAR and RADAR. The goal is so complex that I truly believe it will not be achieved in the foreseeable future.
[1] All Tesla Cars Being Produced Now Have Full Self-Driving Hardware (2016 - 1090 comments):
But Tesla didn't do this.
I rented a Tesla a while back and drove from the Bay Area to Death Valley. On clear roads with no hazards whatsoever, the car hit the brakes at highway speeds. It scared the bejeesus out of me! I was completely put off by the auto drive, and it derailed my plans to buy a Tesla.
The filters introduce the problem of incorrectly deleting something that really is there.
tl;dr: you can use optics to determine if there's rain on a surface, from below, without having to use any fancy cameras or anything, just a light source and light sensor.
If you're into this sort of thing, you can buy these sensors and use them as a rain sensor, either as binary "yes its rained" or as a tipping bucket replacement: https://rainsensors.com
Careful. HN takes a dim view of puns.
Self-starting wipers use some kind of current/voltage measurement on the windshield, right -- unrelated to self-driving? That's been around longer than Tesla -- or are you just saying it's another random failure?
Check this for a reference of how well Tesla's vision-only fares against the competition, where many have LiDAR. Keep it simple wins the game. https://www.youtube.com/watch?v=0xumyEf-WRI
One analyst asked about the reliability of Tesla’s cameras when confronting sun glare, fog, or dust. Musk claimed that the company’s vision system bypasses image processing and instead uses direct photon counting to account for “noise” like glare or dust.
This... is horseshit. Photon counting is not something you can do with a regular camera, or any camera installed on a Tesla. A photon-counting camera doesn't produce imagery that is useful for vision. Even beyond that, it requires a closed environment so that you can, you know, count them in a controlled manner -- not an open outdoor atmosphere.
It's bullshit. And Elon knows it. He just thinks that you are too stupid to know it and instead think "Oh, yeah, that makes sense, what an awesome idea, why is only Tesla doing this?" and are wowed by Elon's brilliance.
But go ahead fight some weird strawman you built.
Did you even look at the video? Don't think you did.
It wasn't Elon's but Karpathy's.
The position against lidar -- that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you'll ultimately have to solve that problem anyway -- still holds, and if Tesla solves vision they'll win it all. His model is all this sort of first-principles thinking; it's why his companies pull off things like Starship. I wouldn't bet against it.
Elon is being foolish and weirdly anthropomorphic.
(For those who don't want to click through: "LIDAR is a fool's errand, and anyone relying on LIDAR is doomed.")
I am much more curious about the next ten years. If we can bring down the cost of a LIDAR unit into parity with camera systems[1], I think I know the answer. But I thought that 10 years ago and it did not happen so I wonder what is the real roadblock to make LIDAR cheap.
[1] Which it won't replace, of course. What it will change is that it makes the LIDAR a regular component, not an exceptionally expensive component.
1) make it work
2) make it right
3) make it fast (or cheap, in this case)
Elon thinks his genius intellect allows him to skip straight to #3.
Anything except the lowest end car will cost $20K or more, so $200 is one percent of that price.
It’s nothing.
True self-driving is still a baby that needs to grow, and cannot yet compete against an adult human with 30+ years of experience. As self-driving actually matures to that level, the market will grow.
Once a product starts to sell after the initial design, time is taken to reduce the development cost: try to reuse parts, or replace part A with part B. A machine from early 2018 can be a little different from the ones going out the door in late 2018. _Kaizen_ was coined for this.
My point of view is that the mass reduction in cost will come when self-driving is a cost-effective secondary feature on all Toyota vehicles. I see that as the litmus test for knowing that self-driving has reached true utility.
Also, well-designed vehicles would need a multi-sensor system to operate in self-driving mode. A human operating a car uses multi-sensor intake, and lacking a sense prevents humans from operating a vehicle: blind people need a secondary sensory input like a walking stick. Vehicles need a multi-sensor system to avoid harming, mutilating, or killing passengers and pedestrians.
A two to four ton vehicle that can accelerate like a Ferrari and go over 100 mph, fully self-driving, and 'a few hundred dollars is way too much'.
Disagree. Even as they are dialing back the claims, which may or may not affect how people use the vehicles. These things respond too quickly for flaky senses based on human sensoriums.
Supervised FSD is already safer than a human.
The revisions and updates are safety tested on roads for months before they are released. Tesla also has models that are too big to run on existing production hardware that perform better than the release versions in test cars.
Updates are not git pulls and no engineer would ever think that they were.
And those complex traffic situations are the main challenge for autonomous driving. Getting the AIs to do the right things before they get themselves into trouble is key.
Lidar is not a silver bullet. It helps a little bit, but not a whole lot. It's great when the car has to respond quickly to get it out of a situation that it shouldn't have been in to begin with. Avoiding that requires seeing and understanding and planning accordingly.
You can train a DL model to act like a LiDAR based on only camera inputs (the data collection is easy if you already have LiDAR cars driving around). If they could get this to work reliably, I'm sure the competition would do it and ditch the LiDAR, but they don't, so that tells us something.
For anyone who understands sensor fusion and the Kalman filter, read this and ask yourself if you trust Elon Musk to direct the sensor strategy on your autonomous vehicle: https://www.threads.com/@mdsnprks/post/DN_FhFikyUE
For anyone wondering: to a sensors engineer, the above post is like saying 1 + 1 = 0 -- the truth (and science) is the exact opposite of what he's saying.
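For context, the measurement update at the heart of a Kalman filter is only a few lines in one dimension: each sensor pulls the estimate toward its reading in proportion to its precision, and the fused variance always shrinks. This is a bare-bones sketch with invented noise values, not any production fusion stack:

```python
# Minimal 1-D Kalman-style measurement update.
# x, p: current state mean and variance; z, r: measurement and its variance.

def kalman_update(x, p, z, r):
    k = p / (p + r)        # Kalman gain: how much to trust this measurement
    x = x + k * (z - x)    # pull the estimate toward the measurement
    p = (1 - k) * p        # uncertainty always shrinks after an update
    return x, p

x, p = 0.0, 100.0                         # vague prior on obstacle range (m)
x, p = kalman_update(x, p, 25.0, 4.0)     # camera: noisy range estimate
x, p = kalman_update(x, p, 24.0, 0.25)    # LIDAR: precise range estimate
print(round(x, 2), round(p, 3))
```

The point is that fusion weights each sensor by its modeled noise; claiming that adding a second, more precise sensor makes the estimate worse gets this math backwards.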
If you look at the statistics on fatal car accidents, 85%+ involve collisions with stationary objects or other road users.
Nobody's suggesting getting rid of machine vision or ML - just that if you've got an ML+vision system that gets in 1 serious accident per 200,000 miles, adding LIDAR could improve that to 1 serious accident per 2,000,000 miles.
edit: no, it was ultrasonic sensors. But this was likely object detection, and now it's gone.
He's doing the exact same thing and worse to people he doesn't like.
What you say is plausible, I haven't directly seen the evidence for it but I'm not inclined to completely doubt you
But I think back to the time where everything surrounding covid was 'misinformation' and Musk (even if the broken clock is right twice a day) genuinely gave people a place to speak. Old Twitter would shut down people, so fast (even experts)
https://www.aljazeera.com/economy/2024/8/13/the-right-wing-l...
You can find this stuff searching 2 minutes on Google.
He's just a power hungry oligarch.
In any case, thanks, TIL!
> this tells me volumes about what's going on in the computer vision system
Emphasis:
> computer vision system
The cold hard truth is that LIDARs are a crutch; they're not strictly necessary. We know this because humans can drive without LIDAR. However, they are a super useful crutch: they give you super high positional accuracy (something that's not always easy to estimate in a vision-only system). Radars are also a super useful crutch because they give really good radial velocity. (Little anecdote: when we finally got the radars working properly at work, it made a massive difference to our car's ability to follow other cars, ACC, in a comfortable way.)
Yes, machine learning vision systems hallucinate, but so do humans. The trick for Tesla would be to get it good enough that it hallucinates less than humans do (they're nowhere near yet -- humans don't hallucinate very often).
It's also worth adding that last I checked the state of the art for object detection is early fusion where you chuck the LIDAR and Radar point clouds into a neural net with the camera input so it's not like you'd necessarily have the classical methods guardrails with the Lidar anyway.
Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018. Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.
The hard truth is there is no reason to limit machines to only the tools humans are biologically born with. Cars always have crutches that humans don't possess. For example, wheels.
In a true self-driving utopia, all of the cars are using multiple methods to observe the road and drive (vision, lidar, GPS, etc) AND they are all communicating with each other silently, constantly, about their intentions and status.
Why limit cars to what humans can do?
The reason this is clear is that, except for a brief period in late 2022, Teslas have included some combination of radar and ultrasonic sensors. [0]
[0] https://en.m.wikipedia.org/wiki/Tesla_Autopilot_hardware
Turns out, when there's demand for LIDAR in this form factor, people invest in R&D to drive costs down and set up manufacturing facilities to achieve economies of scale. Wow, who could have predicted this‽
Western countries might not be smart enough to keep R&D because Wall Street sees it as a cost center.
[1] https://www.tomshardware.com/tech-industry/artificial-intell...
You know what else used to be expensive? Structured light sensors. They cost $$$$ in 2009. Then Microsoft started manufacturing the Kinect for a mass market, and in 2010 the price went down to $150.
You know what's happened to LIDAR in the past decade? You guessed it, costs have come massively down because car manufacturers started buying more, and costs will continue to come down as they reach mass market adoption.
The prohibitive cost of LIDAR coming down was always just a matter of time. A "visionary" like Musk should have been able to see that. Instead he thought he could outsmart everyone by using a technology that was not suited for the job, and he made the wrong bet.
This should be expected when someone who is *not* an experienced engineer starts making engineering decisions.
FOSS is the obvious counterexample to your absurdly firm stance, but so are many artistic pursuits that use engineering techniques and principles, etc.
My intent was to exclude research efforts, which is fundamentally different from engineering, which is a practical concern and not a “get it to just work” concern.
And propellers on a plane are not strictly necessary because birds can fly without them? The history of machines shows that while nature can sometimes inspire the _what_ of a machine, it is a very bad source of inspiration for the _how_.
Crutch for what? AI does not have human intelligence yet, and let's stop pretending it does. There is no shame in that, despite what the word "crutch" implies.
If a sensor provides additional data, why not use it? Sure, humans can drive without LIDARs, but why limit the AI to human-like sensors?
Why even call it a crutch? IMO It's an advantage over human sensors.
That's because our stereoscopic vision has vastly more dynamic range, focusing speed and processing power than a computer vision system. Peripheral vision is very good at detecting movement, and central vision can process a tremendous amount of visual data without even trying.
Even a state-of-the-art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and should be present in any car.
This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only.
You’re not considering them even though they have the best adas on the market lmao suit yourself
Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B?
“3.6 roentgen, not great, not terrible.”
Probably comes down to lidar (and Ai) failure modes.
Also don’t forget that as a human you can move your head any which way, and also draw on your past experiences driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.
A system that's only based on cameras is only as good as its ability to recognize all road hazards, with no fallback if that fails. With LIDAR, the vehicle might not know what the solid object in front of it is from the cameras, but it knows that it's there and should avoid running into it.
This is a good example of why sensor fusion is good.
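That "it knows something is there even if it can't classify it" behavior can be sketched as a simple OR over independent hazard checks. The 6 m/s² deceleration figure, the label set, and the function name are illustrative assumptions, not any real stack's logic:

```python
def should_brake(camera_label, lidar_min_range_m, speed_mps):
    """Defense-in-depth check: brake if vision recognizes a hazard OR
    LIDAR reports solid geometry inside the stopping distance, even
    if it's unclassified. Stopping distance uses a crude v^2 / (2a)
    with an assumed a = 6 m/s^2."""
    stopping_m = speed_mps ** 2 / (2 * 6.0)
    vision_hazard = camera_label in {"car", "pedestrian", "debris"}
    lidar_hazard = lidar_min_range_m < stopping_m
    return vision_hazard or lidar_hazard

# Vision is confused ("unknown") but LIDAR sees a wall 15 m out at
# 20 m/s: stopping distance is ~33 m, so the car brakes anyway.
brake = should_brake("unknown", 15.0, 20.0)
```

The key property is that the unclassified-geometry path fires regardless of what the vision system hallucinated or missed, which is the whole argument for fusion as a guardrail.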
The assumption was that with similar sensors (or practically worse ones; digital cameras score worse than eyeballs on many concrete metrics), ‘AI’ could be dramatically better than humans.
At least with Tesla’s experience (and with some fudging based on things like actual fatal accident data) it isn’t clear that is actually what is possible. In fact, the systems seem to be prone to similar types of issues that human drivers are in many situations - and are incredibly, repeatedly, dumb in some situations many humans aren’t.
Waymo has gone full LiDAR/RADAR/Visual, and has had a much better track record. But their systems cost so much (or at least used to), that it isn’t clear the ‘replace every driver’ vision would ever make sense.
And that is before the downward pressure on the labor market started to happen post-COVID, which hurts the economics even more.
The current niche of Taxis kinda makes sense - centrally maintained and capitalized Taxis with outsourced labor has been a viable model for a long time, it lets them control/restrict the operating environment (important to avoid those bad edge cases!), and lets them continue to gather more and more data to identify and address the statistical outliers.
They are still targeting areas with good climates and relatively sane driving environments because even with all their models and sensors, heavy snow/rain, icy roads, etc. are still a real problem.
When the argument was that Phoenix is too pleasant, I could buy it; most places aren't Phoenix. But SF and LA are both much more like the places other humans live. It rains, but not always; it's misty, but not always. Snow I do accept as a thing (lots of places humans live get some snow), but these cities don't really have snow.
However for ice when I watch one of those "ha, most drivers can make this turn in the ice" videos I'm not thinking "I bet Waymo wouldn't be able to do this" I'm thinking "That's a terrible idea, nobody should be attempting it". There's a big difference between "Can it drive on a road with some laying snow?" and "Can it drive on ice?".
Both SF and LA climates are super cushy compared to say, Northern Michigan. Or most of the eastern seaboard. Or even Kansas, Wyoming, etc. in the winter.
In those climates, if you don’t drive in what you’re calling ‘nobody should be attempting it’ weather, you - starve to death in your house over the winter. Because many months are just like that.
Self driving has a very similar issue with the vast majority of, say, Asia. Because similarly “this is crazy, no one should be driving like this conditions” is the norm. So if it can’t keep up, it’s useless.
Eastern and far Northern Europe has a lot of kinda similar stuff going on.
Self driving cars are easy if you ignore the hard parts.
In India, I’ve had to deal with Random Camel, missing (entire) road section that was there yesterday, 5 different cars in 3 lanes (plus 3 motorcycles) all at once, many cattle (and people) wandering in the road at day and night, and the so common it’s boring ‘people randomly going the wrong way on the road’. If you aren’t comfortable bullying other drivers sometimes to make progress or avoid a dangerous situation, you’re not getting anywhere anytime soon.
All in a random mix of flooding, monsoon rain, super hot temperatures, construction zones, fog, super heavy fireworks smoke, etc. etc.
Hell, even in the US I’ve had to drive through wildfires and people setting off fireworks on the road (long story, safety reasons). The last thing I would have wanted was the car freezing or refusing.
Is that super safe? Not really. But life is not super safe. And a car that won’t help me live my life is useless to me.
Such an AI would of course be a dangerous asshole on, say, LA roads, of course. Even more than the existing locals.
I live in the middle of a city, so, no, in terrible weather just like great weather I walk to the store; no need to "starve to death" even if conditions are too treacherous to sensibly drive. Because I'm an old man who used to live far from a city, I have had situations where you can't use a car to fetch groceries: even if you don't care about safety, the car can't get up an icy hill. It loses traction, gravity takes over, and you slide back down (and maybe wreck the car).
Because, as an old man who has actually lived in all these places, ridden in Waymos, and had friends on the Waymo team in the past, I find your comments pretty ridiculous.
A lot of the large population centres in the US are in these what you're calling "super cushy" zones where there's not much snow let alone ice. More launches in cities in Florida, Texas, California will address millions more people but won't mean more ice AFAIK. So I guess for you the most interesting announcement is probably New York, since New York certainly does have real snow. 2026 isn't that long, although I can imagine that maybe a President who thinks he's entitled to choose the Mayor of New York could mess that up.
As to the "But people in some places are crazy drivers" I saw that objection from San Francisco before it was announced. "Oh they'll never try here, nobody here drives properly. Can you imagine a Waymo trying to move anywhere in the Mission?". So I don't have much time for that.
When was the last time you had full attention on the road and a reflection of light made you super confused and suddenly drive crazy? When was the last time you experienced objects behaving erratically around you, jumping in and out of place, and perhaps morphing?
We were somewhere around Barstow on the edge of the desert when the drugs began to take hold. I remember saying something like, “I feel a bit lightheaded; maybe you should drive . . .”And suddenly there was a terrible roar all around us and the sky was full of what looked like huge bats, all swooping and screeching and diving around the car, which was going about 100 miles an hour with the top down to Las Vegas. And a voice was screaming: “Holy Jesus! What are these goddamn animals?” [0]
[0] Thompson, Hunter S., "Fear and Loathing in Las Vegas"

When was the last time you saw a paper bag blown across the street and mistook it for a cat or a fox? (Did you even notice your mistake, or do you still think it was an animal?)
Do you naturally drive faster on wide streets and slower on narrow streets, because the distance to the side of the road changes your subconscious feeling of how fast you're going? Do you even know, or are you limited to your memories rather than a dashcam whose footage can be reviewed later?
etc.
Now don't get me wrong, AI today is, I think, worse than humans at safe driving; but I'm not sure how much of that is that AI is more hallucinate-y than us vs. how much of it is that human vision system failures are a thing we compensate for (or even actively make use of) in the design of our roads, and the AI just makes different mistakes.
Self-driving is probably “AI-hard” as you’d need extensive “world knowledge” and be able to reason about your environment and tolerate faulty sensors (the human eyes are super crappy with all kinds of things that obscure it, such as veins and floaters).
Also, if the Waymo UI accurately represents what it thinks is going on “out there” it is surprisingly crappy. If your conscious experience was like that when you were driving you’d think you had been drugged.
The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp
And is also emphasised by driving safety awareness videos: https://www.youtube.com/watch?v=LRFMuGBP15U
I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception?
Most of us don't realise how much we misperceive because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.
The correct move for Tesla would have been to split the difference and add LIDAR to some subset of their fleet, ideally targeted in the most difficult to debug environments.
Somewhat like Google/Waymo are doing with their Jaguars.
Don't LIDAR 100% of Teslas, but add it to >0%.
Reportedly, they no longer use this widely - but they still have some LIDAR-equipped "scout vehicles" they send into certain environments to collect extra data.
Tesla would subsidize them and offer them at the same price as non-LIDAR models, to select customers in target areas.
And yes, you answered the second part of your own question.
So maybe LIDAR isn't necessary, but if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision, I doubt it would be cheaper than LIDAR to get the same result.
One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad. This is compensated for by changing what we're looking at: when we care about details we can just look directly at them, since resolution is better right in the centre of the picture.
I agree that Tesla may have made the right hardware decision when they started with this. It was probably a bad idea to lock themselves into that path by over-promising.
Robots are supposed to make up for our limitations by doing things we can't do, not do the things we can already do, but differently. The latter only serves to replace humans, not augment them.
(and that's not even addressing that human vision is fundamentally a weird sensory mess full of strange evolutionary baggage that doesn't even make sense except for genetic legacy)
This was only plausible to people who had no experience in robotics, autonomy, and vision systems.
Everyone knew LIDAR was the enabling technology thanks to the 2007 DARPA Urban challenge.
But the ignoramus Elon Musk decided he knew better and spent the last decade-plus trashing the robotics industry. He set us back as far as safety protocols in research and development, caused the first death due to robotic cars, deployed them on public roads without the consent of the public by throwing around his massive wealth, lied consistently for a DECADE about the capabilities of these machines, and defrauded customers and shareholders while becoming richer and richer, all to finally admit defeat while still maintaining the story that Tesla's future growth lies in robotics. The nerve of this fucking guy.
It’s already better at X-rays and radiology in many cases.
Everything you are talking about is just a matter of sufficient learning data and training.
The most important thing is that Tesla/Elon absolutely had no way to know, and no reason to believe (other than as a way to rationalise a dangerously risky bet) that machine vision would be able to solve all these issues in time to make good on their promise.
Maybe they'll reach level 4 or higher automation, and will be able to claim full self driving, but like fusion power and post-singularity AI, it seems to be one of those things where the closer we get to it, the further away it is.
Others are in prison for far less.
The first time could be an honest mistake, but after a certain point we have to assume that it’s just a lie to boost the stock price.
Just like politicians, it seems there's no repercussions for CEO's lying as long as it's fleecing the peons and not the elite.
Compare Boston Dynamics and cat. They are on the absolutely different levels for their bodies and their ability to manipulate their bodies.
I have no doubt that using cameras only would absolutely work for AI cars, but at the same time I feel that this kind of AI is not there yet. And if we want autonomous cars, it might be possible, but we need to equip them with as many sensors as necessary, not set any artificial boundaries.
Check out what the Tesla park assist visualization shows now. It's vision based and shows a 3D recreation of the world around the car. You can pan around to see what's there and how far away it is. It's fun to play around with in drive thrus, garages, etc. just to see what it sees.
I guess you don't drive? You use more senses than just vision when driving a car.
You also do use your ears when driving.
Provable by one-eyed people being able to drive just fine, as could you with one eye covered.
> That something magical is happening because the eyes are close topographically to the brain?
It sounds to me like you need to study what eyes actually are. It's not about proximity or magic: they are a part of your brain, and we're only beginning to understand their complexities. Eyes are not just sensory organs, so the analogy to cameras is way off. They are able to discern edges, motion, color, and shapes, and to correct errors, before your brain is even aware.
In robotics, we only get this kind of information after the camera image has been sent through a perception pipeline, often incurring a round trip through some sort of AI and a GPU at this point.
> Sounds implausible.
Musk just spent billions of dollars and the better part of a decade trying to prove the conjecture that "cameras are sufficient", and now he's waving the white flag. So however implausible it sounds, it's now more implausible than ever that cameras alone are sufficient.
No. I live in snow country. Folks with vestibular issues are advised to pull over in snowstorms because sometimes the only indication that you have perpendicular velocity and are approaching a slide off the road or spin is that sense. My Subaru has on more than one occasion noticed a car before I did based on radar.
Vision only was a neat bet. But it will cost Tesla first to market status generally and especially in cities, where regulators should have fair scepticism about a company openly trying to do self driving on the cheap.
I'm consistently surprised by how immune to sun-blindness my car is. It regularly reads traffic lights that have the sun right next to them; I've never seen any discernible degradation due to too much light, too little light, or bad contrast of any kind.
You're just bringing up a never-ending stream of but-what-abouts, so I'm done refuting them after this. It's not a good use of my time.
> You're just bringing up a never-ending stream of but-what-abouts
By "what abouts" you of course mean "shortcomings of camera-only systems that make them unsuitable for full autonomy."
> It's not a good use of my time.
No it's not, it's a losing battle, and Musk has admitted it. Camera-only systems will not enable full self driving. Y'all got scammed.
What is up with hn today? Was there a mass stroke?
Deaf drivers (may include drivers playing loud music too) don't, unless they're somehow tasting the other vehicles.
Nature's accelerometers.
I've had mine go bad, and it wasn't fun.
Just sayin'...
I was unable to stand up.
It all came out OK, in the end, but it was touch-and-go for a while.
Not quite a Lotus Position, but I used the Epley Maneuver on her which immediately lessened her symptoms: https://en.wikipedia.org/wiki/Epley_maneuver
Even driving with mild vertigo could be difficult because you want to restrict your head movement.
Source: my dad gets Benign paroxysmal positional vertigo (BPPV)
He's mentally sharp, and has a science background, but nope!
While it doesn't often snow or ice up here (it does sometimes), it does rain a good bit from time to time. You can usually feel your car start to hydroplane and lose traction well before anything else goes wrong. It's an important thing to feel but you wouldn't know it's happening if you're going purely on vision.
You can often feel when there's something wrong with your car. Vibrations due to alignment or balance issues. Things like that.
Those are quick examples off the top of my head. I'm sure there are more.
Of course, all these things can be tracked with extra sensors, I'm not arguing humans are entirely unique in being able to sense these things. But they are important bits of feedback to operate your car safely in a wide range of conditions that you probably will encounter, and should be accounted for in the model.
As for auditory feedback, while some drivers don't have sound available to them (whether they're deaf or their music is too loud or whatever), sound is absolutely a useful input to have. You may hear emergency vehicles you cannot see. You may hear honking alerting you to something weird going on in a particular direction. You may hear issues with your car. Rumble strips are also tuned to be loud when cars run over them. You can hear big wind gusts and understand they are the source of weird forces pushing the car around, as opposed to other things making your car behave strangely. So sure, one can drive a car without sound, but it's not better without it.
But that's sort of beside the point: why would you not use additional data when the price of the sensors is baked into the feature you're selling?
I am not saying that you couldn’t do this with hardware, I am quite confident you could actually, but I am just saying that there are senses other than sight and sound at play here.
The problem is clearly a question of the fidelity of the vision and our ability to slave a decision maker and mapper to it.
Sure, for some definition of "works"...
https://www.iihs.org/research-areas/fatality-statistics/deta...
Sensory processing is not matched, sure, but IMO how a human drives is more involved than it needs to be. We only have two eyes and they both look in the same direction. We need to continuously look around to track what's around us. It demands a lot of attention from us that we may not always have to spare, especially if we're distracted.
Not on all metrics, especially not simultaneously. The dynamic range of human eyes, for example, is extremely high.
AFAIK there is also more than one front camera. Why would anyone try to do it all with one or two camera sensors like humans do it?
It's important to remember that the cameras Tesla are using are optimized for everything but picture quality. They are not just taking flagship phone camera sensors and sticking them into cars. That's why their dashcam recordings look so bad (to us) if you've ever seen them.
Try matching a cat's eye on those metrics. And it is much simpler than the human one.
But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.
Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.
Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.
LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.
Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.
No part costs less. It also doesn't break, doesn't need to be installed, doesn't need to be stocked on every dealership's shelf, and no supplier can hold up its production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also doesn't need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.
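To put a rough number on "LIDAR is a lot of data": a back-of-envelope comparison against CAN bandwidth. All figures below are illustrative assumptions, not any vendor's spec sheet:

```python
# Assumed figures for a mid-range automotive LIDAR and standard buses.
POINTS_PER_SEC = 600_000      # returns per second (assumption)
BYTES_PER_POINT = 12          # x, y, z as 4-byte floats (assumption)
CLASSIC_CAN_BPS = 1_000_000   # classic CAN tops out at 1 Mbit/s
CAN_FD_BPS = 8_000_000        # CAN FD data-phase rate (typical max)

lidar_bps = POINTS_PER_SEC * BYTES_PER_POINT * 8   # bits per second

# Under these assumptions the raw point cloud is ~57.6 Mbit/s, an order
# of magnitude over even CAN FD, which is why LIDARs generally ship data
# over automotive Ethernet rather than the CAN bus.
exceeds_can = lidar_bps > CAN_FD_BPS
```

So the "clogging the CAN bus" concern understates it: under these assumptions the raw stream doesn't fit on CAN at all, and a LIDAR effectively demands its own high-bandwidth link and the engineering that comes with it.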
That's all off the top of my head. I'm sure there's even better reasons out there.
> Lidar can be added for hundreds of dollars per car.
Surprisingly, many production vehicles have a manufacturer profit under one thousand dollars, so that LIDAR would eat a significant portion of the margin on the vehicle. But LIDAR would probably be wired more directly to the computer rather than using a packet protocol.
it has to do with the processing of information and decision-making, not data capture
Wake me up when the tech reaches Level 6: Ghost Ride the Whip [0].
He told consumers to just buy the car and it would come with an update. It didn't.
This is a scam, end of story.
7 years of it.
I think you mean "securities fraud", at gargantuan scale at that. Theranos and Nikola were nowhere near that scale.
Perhaps it's that cars are more sacred than healthcare.
There is little to suggest that Tesla is any closer to level 4 automation than Nabisco is. The Dojo supercomputer that was going to get them there? Never existed.
The persistent problem seems to be severe weather, but the gap between the weather a human shouldn't drive in and weather a robot can't drive in will only get smaller. In the end, the reason to own a self-driven vehicle may come down to how many severe weather days you have to endure in your locale.
Interesting that Waymo now operates just fine in SF fog, and is expanding to Seattle (rain) and Denver (snow and ice).
A system that requires a "higher level" handler is not full self driving.
If the vehicle has a collision, who's ultimately responsible? That person (or computer) is the driver.
If a Waymo hits a pole for example, the software has a bug. It wasn't the responsibility of a remote assistant to monitor the environment in real time and prevent the accident, so we call the computer the driver.
If we put a safety driver in the seat and run the same software that hits the same pole, it was the human who didn't meet their responsibility to prevent the accident. Therefore, they're the driver.
Which is why an autonomous car company that is responsible and prioritizes safety would never call their SAE Level 4 vehicle "full self-driving".
And that's why it's so irresponsible and dangerous for Tesla to continue using that marketing hype term for their SAE Level 2 system.
But is Level 4 enough to count as "Full Self Driving"? I'd argue it really depends on how big the geofence area is, and how rare interventions are. A car that can drive on 95% of public roads might as well be FSD from the perspective of the average driver, even if it falls short of being Level 5 (which requires zero geofencing and zero human intervention).
https://www.reddit.com/r/waymo/comments/1gsv4d7/waymo_spotte...
> and uses remote operators to make decisions in unusual situations and when it gets stuck.
This is why it's limited to certain markets and service areas: connectivity for this sort of thing matters. Your robotaxi crashing because the human backup lost 5G connectivity is going to be a real, real bad look. No one is talking about their intervention stats. If they were good, I would assume someone would publish them for marketing reasons.
Waymo navigates autonomously 100% of the time. The human backup's role is limited to selecting the best option if the car has stopped due to an obstacle it's not sure how to navigate.
Interventions are a term of art, i.e. it has a specific technical meaning in self-driving. A human taking timely action to prevent a bad outcome the system was creating, not taking action to get unstuck.
> IF they were good I would assume that someone would publish them for marketing reasons.
I think there's an interesting lens to look at it in: remote interventions are massively disruptive, the car goes into a specific mode and support calls in to check in with the passenger.
It's baked into UX judgement, it's not really something a specific number would shed more light on.
If there was a significant problem with this, it would be well-known given the scale they operate at now.
California granted Waymo the right to operate on highways and freeways in March 2024.
L4 is "full autonomy, but in a constrained environment." L5 is the holy grail: as good as or better than human in every environment a human could take a car (or, depending on who's doing the defining: every road a human could take a car on. Most people don't say L5 and mean "full Canyonero").
That's a distinction without a difference. Forest service and BLM roads are "roads" but can be completely impassable or 100% erased by nature (and I say this as a former Jeep Wrangler owner), they aren't always located where a map thinks they are, and sometimes absolutely nothing differentiates them from the surrounding nature -- for example, left turn into a desert dry wash can be a "road" and right not.
Actual "full" autonomous driving is crazy hard. Like, by definition you get into territory where some vehicles and some drivers just can't make it through, but it's still a road(/"environment"). And some people will live at the end of those roads.
Are they? Did you mean Autonomous Vehicles?
It initially seems mad that a human inside the box can outperform the "finest" efforts of a multi-zillion-dollar company. The human has all their sensors inside the box, and most of them are stymied by the non-transparent parts. Bad weather makes it worse.
However, look at the sensors and compute being deployed on cars. Its all minimums and cost focused - basically MVP, with deaths as a costed variable in an equation.
A car could have cameras with views everywhere for optical, plus LIDAR, RADAR, even a form of SONAR if it can be useful, microwave and way more. Accelerometers and all sorts too, all feeding into a model.
As a driver, I've come up with strategies such as "look left, listen right". I'm British so drive on the left and sit on the right side of my car. When turning right and I have the window wound down, I can watch the left for a gap and listen for cars to the right. I use it as a negative and never a positive - so if I see a gap on the left and I hear a car to my right, I stay put. If I see a gap to the left but hear no sound on my right, I turn my head to confirm that there is a space and do a final quick go/no go (which involves another check left and right). This strategy saves quite a lot of head swings and if done properly is safe.
I now drive an EV, one year so far: a SAIC MG4, with cameras on all four sides that I can't record from but can use. It has lane assist (lateral control, which craps out on many A-road sections but is fine on motorway-class roads) and cruise control that keeps a safe distance from other vehicles (it works well on most roads and very well on motorways; there are restrictions).
Recently I was driving and a really heavy rain shower hit as I was overtaking a lorry. I immediately dived back into lane one, behind the lorry, and put cruise on. I could just see the edge white line, so I dealt with left/right and the car sorted out forward/backward. I can easily handle both, but it's quite nice to be able to carefully delegate responsibilities.
I answered the question 'What does Waymo lack in your opinion to not be considered "full self driving"?'. And clearly it's not full self-driving if it can't drive on literally 99.99% of roads in the world. Any argument to the contrary is just ridiculous.
Germany, Italy, India all stand out as examples to me. The roads and driving culture is very different, and can be dangerous to someone who is used to driving on American suburban streets.
I really do stand by my comment, and apologize for the 'low quality' nature of it. I meant to suggest that we set the bar far higher for AI than we do for people, which is in general a good thing. But still - I would say that by this definition of 'full self driving', it wouldn't be met very well by many or most human drivers.
Of course I may have simply been lucky, but given that my driving license is valid in many countries it seems as though humanity has determined this is mostly a solved problem. When someone says "Put a Waymo on random road in the world, can it drive it?" they mean: I would expect a human to be able to drive on a random road in the world. And they likely could. Can a Waymo do the same?
I don't know the answer to that one. But if there is one thing that humans are pretty good at it is adaptation to circumstances previously unseen. I am not sure if a Waymo could do the same but it would be a very interesting experiment to find out.
American suburban streets are not representative of driving in most parts of the world. I don't think the bar of 'should be able to drive most places where humans can drive' is all that high and even your average American would adapt pretty quickly to driving in different places. Source: I know plenty of Americans and have seen them drive in lots of countries. Usually it works quite well, though, admittedly, seeing them in Germany was kind of funny.
"Am I hallucinating or did we just get passed by an old lady? And we're doing 85 Mph?"
That's experience, and you learned and survived to tell the tale. It's almost as though you are capable of learning how to deal with an unfamiliar environment, and fail safe!
I'm a Brit and have driven across most of Europe, US/CA and a few other places.
Southern Italy eg around Napoli is pretty fraught - around there I find that you need to treat your entire car as an indicator: if you can wedge your car into a traffic stream, you will be let in, mostly without horns blaring. If you sit and wait, you will go grey haired eventually.
In Germania, speed is king. I lived there in the 70s-90s as well as being a visitor recently. The autobahns are insane if you stray out of lane one, the rest of the road system is civilised.
France - mostly like driving around the UK apart from their weird right-hand side of the road thing! Le Périphérique is just as funky as the M25 and the Place de la Concorde is a right old laugh. The rest of the country that I have driven is very civilised.
Europe to the right of Italy is pretty safe too. I have to say that across the entirety of Europe, that road signage is very good. The one sign that might confuse any non-European is the white and yellow diamond (we don't have them in the UK). It means that you have priority over an implied "priority to the right". See https://driveeurope.co.uk/2013/02/27/priority-to-the-right/ for a decent explanation.
Roundabouts were invented in the US. In the UK, when you are actually on a roundabout you have right of way. However, everyone will behave as though "priorité à la droite" applies and there will often be a stand-off - it's hilarious!
In the UK, when someone flashes their headlights at you it generally means "I have seen you and will let you in". That generally surprises foreigners (I once gave a lift to a prospective employee candidate from Poland and he was absolutely aghast at how polite our roads seemed to be). Don't always assume that you will be given space but we are pretty good at "after you".
I don't agree.
My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
My last dozen ridesharing experiences only had a single driver that wasn't actively hazardous on the road. One of them was so bad that I actually flagged him on the service.
My Waymo experiences, by contrast, have all been uniformly excellent.
I suspect that Waymo is already better than the median human driver (anecdata suggests that's a really low bar)--and it just keeps getting better.
> My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.
Those two aren't really related, are they? That's one locality and one specific kind of driver. If you picked a random road, there is a pretty small chance it would be like the roads where Waymo is currently rolled out, and your ridesharing drivers likely aren't representative of the general public anyway.
Delusionally generous take. Perhaps even zealotry.
https://www.nytimes.com/2025/05/13/business/tesla-stock-sale...
https://www.afr.com/technology/life-changing-wealth-stopped-...
Please, post numbers to back this up… please…
Tesla being the only major domestic EV manufacturer, plus Musk historically not wading into politics, plus Musk/Tesla being widely popular for a time, is probably why no one has gone after him. Not sure how this changes going forward with Musk being a very polarizing figure now.
Yeah, historically, as in: before many people here were born. It's been so long since SEC and FTC did such things.
I'm curious why you think this. I would be pretty shocked if, despite Musk's disgusting personality, they weren't also bought in.
While I didn't look long for a more neutral source, Teslarati has a good list of the prompts of the shift from Musk being anti-Trump and pro-Biden, to giving up on Biden, to supporting Trump: https://www.teslarati.com/former-tesla-exec-confirms-wsj-rep...
There were apparently also other considerations not associated with Tesla for his turn (transgender child, etc), but my read on all this is that Musk saw staying out of politics didn't mean politics would stay away from him. Given that Trump II is also now somewhat anti-Musk, it's not clear to me that he succeeded in avoiding a longer-term axe for Tesla (Neuralink/Solarcity/SpaceX/Boring...) from politicians. We'll see.
Based on the rate of progress alone I would expect functional vision-only self-driving to be very close. I expect people will continue to say LIDAR is required right up until the moment that Tesla is shipping level 4/5 self-driving.
Pro tip if you get stuck in a warren of tiny little back streets in the area. Latch on to the back of a cab; they're generally on their way to a major road to get their fare where they're going and they usually know a good way to get to one. I've pulled this trick multiple times around city hall, Government Center, the old state house, etc.
So close yet so far, which is ironically the problem vision based self-driving has. No concrete information just a guess based on the simplest surface data.
Like does it get naively caught in stopped traffic for turns it could lane change out or does it fucking send it?
The problem you’re describing — phantom braking, random wiper sweeps — is exactly what happens when the perception system’s “eyes” (cameras) feed imperfect data into a “brain” (compute + AI) that has no independent cross-check from another modality. Cameras are amazing at recognizing texture and color but they’re passive sensors, easily fooled by lighting, contrast, weather, or optical illusions. LiDAR adds active depth sensing, which directly measures distance and object geometry rather than inferring it.
But LiDAR alone isn’t the endgame either. The real magic happens in sensor fusion — combining LiDAR, radar, cameras, GNSS, and ultrasonic so each sensor covers the others’ blind spots, and then fusing data at the perception level. This reduces false positives, filters out improbable hazards before they trigger braking, and keeps the system robust in edge cases.
And there’s another piece that rarely gets mentioned in these debates: connected infrastructure. If the vehicle can also receive data from roadside units, traffic signals, and other connected objects (V2X), it doesn’t have to rely solely on its onboard sensors. You’re effectively extending the vehicle’s situational awareness beyond its physical line of sight.
Vision-only autonomy is like trying to navigate with one sense while ignoring the others. LiDAR + fusion + connectivity is like having multiple senses and a heads-up from the world around you.
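To make the fusion point concrete, here is a toy late-fusion sketch. Everything in it -- the `Detection` type, the thresholds, the two-vote rule -- is invented for illustration and is not how any real autonomy stack is implemented; it just shows why a single hallucinating modality stops being able to trigger braking on its own.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """One modality's report of a possible obstacle (hypothetical schema)."""
    distance_m: float   # estimated range to the object
    confidence: float   # 0..1 detection confidence

def fused_brake_decision(camera: Optional[Detection],
                         lidar: Optional[Detection],
                         radar: Optional[Detection]) -> bool:
    """Toy late-fusion vote: brake only if at least two independent
    modalities agree an object is inside the braking envelope."""
    BRAKE_RANGE_M = 60.0  # assumed braking envelope for the example
    votes = 0
    for det in (camera, lidar, radar):
        if det is not None and det.confidence > 0.5 and det.distance_m < BRAKE_RANGE_M:
            votes += 1
    return votes >= 2

# A camera hallucination alone (high confidence, nothing on lidar/radar)
# no longer triggers phantom braking:
print(fused_brake_decision(Detection(40.0, 0.95), None, None))                 # False
# But corroboration from a second modality does:
print(fused_brake_decision(Detection(40.0, 0.95), Detection(41.0, 0.8), None)) # True
```

Real systems fuse far earlier and more richly than a majority vote, but the structural point holds: cross-checking modalities converts "one sensor is sure" into "the system is sure", which is exactly the guardrail a camera-only stack lacks.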
Let the automated trucks figure it out if it’s an actual problem worth solving or we can just use trains or let truck driving be a decent middle class job.
For as long as we can’t understand AI systems as well as we understand normal code, first principles thinking is out of reach.
It may be possible to get FSD another way but Elon’s edge is gone here.
I agree that the team deserves most of the success. I think that's the case in general. At best, a CEO puts down good framing/structure, that's it. ICs do the actual innovative work.
Tesla Headlines Sentiment Analysis - Electrek.co. Bottom line: strongly negative sentiment. Based on analysis of Tesla headlines and articles from Electrek over the past few months, the sentiment is overwhelmingly negative (approximately 85% negative, 10% neutral, 5% positive). The coverage reveals a company in decline across multiple fronts.
There’s an increasing number of drivers that can barely drive on the freeways. When they hit our area they cannot even stay on their side of the road, slow down for blind curves (when they’re on the wrong side of the road!), maintain 50% the normal speed of other drivers, etc. I won’t order uber or lyft anymore because I inevitably get one of these people as my driver (and then watch them struggle on straight stretches of freeway).
Imagine how much worse this will get when they start exclusively using lane keeping on easy roads. It’ll go from “oh my god I have to work the round wheel thingy and the foot levers at the same time!” to “I’ve never steered this car at speeds above 11”.
I’d much rather self driving focused on driving safely on challenging roads so that these people don’t immediately flip their cars (not an exaggeration; this is a regular occurrence!) when the driver assistance disables itself on our residential street.
I don’t think addressing this use case is particularly hard (basically no pedestrians, there’s a double yellow line, the computer should be able to compute stopping distance and visibility distance around blind curves, typical speeds are 25mph, suicidal deer aren’t going to be the computer’s fault anyway), but there’s not much money in it. However, if you can’t drive our road, you certainly cannot handle unexpected stuff in the city.
Vision based systems would be more than adequate. Lidar or (god forbid) ultrasonic chirps would easily lead to superhuman safety and speeds.
I’m skeptical of transponder or network based systems. What happens during a natural disaster? Do the 10% of cars that lack drivers or steering wheels just stop and block the evacuation routes? That’d kill a lot of people in very graphic / high profile ways.
Tesla is kind of a joke in the FSD community these days. People working on this problem a lot longer than Musk's folk have been saying for years that their approach is fundamentally ignoring decades of research on the topic. Sounds like Tesla finally got the memo. I mostly feel sorry for their engineers (both the ones who bought the hype and thought they'd discover the secret sauce that a quarter-century-plus of full-time academic research couldn't find and the old salts who knew this was doomed but soldiered on anyway... but only so sorry, since I'm sure the checks kept clearing).
It's two steps from selling snake-oil, basically. Not that L4 or L5 are impossible, but people who knew the problem domain looked at how they were approaching it hardware-wise and went "... uhuh."
They literally did this with Summon. "Have your car come to you while dealing with a fussy child" - buried far further down the page in light grey, "pay full attention to the vehicle at all times" (you know, other than your "fussy child").
There were aspirations that the bottom up approach would work with enough data, but as I learned about the kind of long tail cases that we solved with radar/camera fusion, camera-only seemed categorically less safe.
easy edge case: a self-driving system cannot become inoperable due to sunlight or fog.
a more Hacker News-worthy consideration: calculate the angular pixel resolution required to accurately range and classify an object 100 meters away (roughly the distance needed to safely stop if you're traveling 80 mph). Now add a second camera for stereo and calculate the camera-to-camera extrinsic sensitivity you'd need to stay within to keep error sufficiently low in all temperature/road condition scenarios.
The answer is: screw that, I should just add a long range radar.
there are just so many considerations that show you need a multi-modality solution, and using human biology as a whataboutism doesn't translate to currently available technology.
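Actually running those numbers makes the point vivid. The parameters below (object size, pixels needed for classification, focal length, baseline) are assumed for illustration, not any real vehicle's specs; the stereo depth-error formula dZ = Z² · d_disparity / (f · B) is the standard one.

```python
import math

# Assumed (not Tesla's actual) scenario parameters
Z = 100.0        # range to object, m (~stopping distance from 80 mph)
obj_w = 0.5      # width of a small road obstacle, m
px_needed = 10   # pixels across the object for reliable classification

# Angular resolution each pixel must cover to put 10 px on the object
ang_per_px = math.atan2(obj_w, Z) / px_needed        # radians (~0.5 mrad)
print(f"required resolution: {math.degrees(ang_per_px) * 3600:.0f} arcsec/px")

# Horizontal pixel count needed to hold that resolution over a 90 deg FOV
fov = math.radians(90)
px_across_fov = fov / ang_per_px
print(f"pixels across a 90 deg FOV: {px_across_fov:.0f}")  # ~3100

# Stereo depth error: dZ = Z^2 * d_disparity / (f * B)
f_px = 3000.0    # focal length in pixels (assumed)
B = 0.3          # camera baseline, m (assumed)
disparity = f_px * B / Z            # px of disparity at 100 m (~9 px)
dZ_per_px = Z**2 / (f_px * B)       # m of range error per px of disparity error
print(f"disparity at 100 m: {disparity:.1f} px")
print(f"range error per 1 px disparity error: {dZ_per_px:.1f} m")

# A yaw miscalibration of ~1/f_px radians already costs a full pixel
yaw_tol = 1.0 / f_px
print(f"yaw error for 1 px of disparity error: {math.degrees(yaw_tol):.4f} deg")
```

Under these assumptions, a single pixel of disparity error (reachable with a few hundredths of a degree of camera misalignment, well within thermal flex of a car body) translates to roughly 11 m of range error at 100 m, which is why active ranging starts to look attractive.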
Many Lidar visualization packages will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line Lidar you can often read a typical US speed limit sign at ~50 meters in this view.
It was his idea, his decision to build the architecture and he led the entire vision team during this.
Yet, he remains free from any of this fallout and still widely considered an ML god
At this point, Tesla looks less like a disruptive startup and more like a large-cap company struggling to find its next act. Musk still runs it like a scrappy startup, but you can’t operate a trillion-dollar business with the same playbook. He’d probably be better off going back to building something new from scratch and letting someone else run Tesla like the large company it already is.
https://www.truecar.com/compare/bmw-3-series-vs-tesla-model-...
It is hard to interpret the smugness above in a positive light. It is unhelpful to you and to everyone here.
If you want to compare an electric car against combustion-engine vehicle, go ahead, but that isn’t a key decision point for what we’re talking about.
The TrueCar web page table does not account for a $7,500 federal tax credit for EVs. I recognize it ends soon — September 30 — if only to head off a potential zinger comment (which would be irrelevant to the overall point).
All in all, it is notable that ~2 minutes asking a modern large language model for various comparisons is more helpful than this conversation with another human (presumably). If we're going to advocate for the importance of humanity, seems to me like we should start demonstrating that we can at least act like we deserve it. I view HN primarily as a place to learn and help others, not a place for snarky comments.
A better modern comparison showing less expensive EVs would mention the Nissan Leaf or Chevy Equinox or others. The history is interesting and worth digging into. To mention one aspect: the Leaf had a ~7 year head start but the Tesla caught up in sales by ~2018 and became the best-selling EV — even at a higher price point. So this undermines any claim that Tesla wasn’t doing something right from the POV of customer perception.
I don’t need to “be right” in this particular comment — I welcome corrections — I’m more interested in error correction and learning.
https://www.edmunds.com/electric-car/articles/cheapest-elect...
The model 3 is 1.5x more expensive than the cheapest car on the list, and it’s not obviously better than other things in its price range.
Here are some brands that have delivered more affordable EVs than Tesla: Kia, Hyundai, Chevy, Cooper, Nissan.
Note that all of these cost about 2x more than international competitors.
On top of that, Ford’s upcoming platform is targeting $30K midsize pickup trucks. Presumably, most other manufacturers have similar things in their pipelines.
Tesla is already behind most of its competitors, and does not seem to have anything new in the pipeline, so the gap is likely to expand.
They’ve clearly failed to provide affordable EVs. They’ve been beaten to market by a half dozen companies in the North American market, and that’s with trade barriers blocking foreign companies that are providing cars for less than half these prices.
Behind? In what way? Please don't leave your fundamental criteria and assumptions unstated. You are prioritizing affordability above everything else, even range.
If you value your audience's time, please write more directly and clearly. [1] Don't try to sneak something past us. Instead, just come out and say it. Then you'll realize you need to make the case for why your criteria and assumptions are worth emphasizing. That would lead to interesting discussions like:
- What kinds of range do drivers want and why?
- What kind of trips do they do?
This is what we should be doing here -- unpacking our arguments, learning from each other, synthesizing. I'm so done with low-quality comments from people who have little excuse: if you can write code or survive the tech industry, you probably have sufficient logical reasoning power and an understanding of constructive discussions. Or am I expecting too much?
This concludes my lecture.
[1] By taking an extra ~X minutes to do so; it will save N * X minutes of the audience's time, where N is the audience size. Once you get in the habit of it, X will be relatively small. If you aren't willing to put in this effort, your comment ends up being a net negative towards the goal of helping people quickly and efficiently think about various pieces of the arguments.
The Mini Cooper EV is a joke of a car. 114 miles is ridiculous for $30k. Likewise, the 'base' Nissan Leaf at 150 mi for $30k isn't much better.
The crossover/small-SUV segment is a little more competitive, but still you're comparing vehicles with quite dissimilar (n.b., worse) specs for lower prices.
If all you care about is a car that is electric that can be driven, then sure, there's cheaper cars. That doesn't mean they're better or reasonable for most consumers.
I’ve never driven one, but if it’s anything like my i3, it’s probably by far the sportiest thing on that list. Apparently it’s “a hoot to drive”, but the suspension is a bit stiff:
https://www.topgear.com/car-reviews/mini/cooper-electric
$30K for a quirky sporty commuter car seems completely reasonable to me.
(Also, the range is rated significantly higher in Europe for some reason. It probably outperforms EPA in the real world.)
Also, I looked up mini’s current offerings in the US. They don’t sell that car. The cheapest car they sell is the “Countryman SE ALL4” which starts at 47k and gets a meager 212mi of range.
Also, battery wear out on them is basically unheard of. Even if it were a thing, they have an extremely long transferable battery warranty (try finding an estimated dealer price for a swap). If you do somehow kill the battery, third party replacement ones can have higher range.
I honestly don’t know why you think 113 miles is worthless for a secondary car. Wyoming has the highest number of miles driven per year in the US at 24K. That’s 65 miles a day, so the 113 mile car will be more than adequate for most trips for the average Wyoming driver, assuming they drive daily.
The US average is 14K miles.
Sure that’s more than the “average” commute even in high usage areas, but people don’t generally have super consistent average commutes. You’ll have a shorter average commute and then a longer drive with errands and such.
Also, having a car that needs to be essentially fully charged every day means you have to always, always remember to plug it in. Not as convenient as a car you need to charge every couple days.
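The mileage arithmetic a few comments up is easy to sanity-check. Using the figures cited there (Wyoming at ~24,000 mi/year as the highest-mileage US state, US average ~14,000 mi/year), a quick back-of-envelope:

```python
# Daily driving implied by annual mileage, using the comment's figures
wyoming_annual_mi = 24_000   # highest-mileage US state, per the comment
us_avg_annual_mi = 14_000    # US average, per the comment

wyoming_daily = wyoming_annual_mi / 365
us_avg_daily = us_avg_annual_mi / 365

print(f"Wyoming average: {wyoming_daily:.0f} mi/day")   # ~66
print(f"US average:      {us_avg_daily:.0f} mi/day")    # ~38

# Both sit comfortably under a 113-mile rated range, though averages
# hide the occasional long errand day that motivates a range buffer.
```

Of course, as the comment notes, averages conceal variance: the day that breaks a 113-mile car is the outlier trip, not the mean commute.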
They are still profitable, have very little debt and a ton of money into the bank.
Every company has hits and misses. Bezos started before Musk and still hasn't gotten his rockets into orbit.
They have the best selling model in the world (their Model Y). But their total sales of all models are way behind many other car companies.
These car companies sell more cars each year than Tesla (ordered by total sales): Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, and Geely.
Toyota and Volkswagen each sell more cars in a year than Tesla has sold over its lifetime, and Hyundai-Kia's annual sales are about the same as Tesla's lifetime sales.
By revenue rather than units, these companies sell more per year: Volkswagen, Toyota, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, and SAIC Motor. (Edit: I accidentally left out Hyundai-Kia)
"His track record is unimpressive"... I can see why you say that, I mean, took Tesla from almost nothing to a trillion dollar company. Started the most prolific rocket and satellite company in history (but hey, it's only rocket science right?), provides internet to places that it never even had the possibility of getting to, and providing untold millions the chance to get on the internet.
Started a company that is giving the paralyzed the ability to use a computer controlling their brain, and is working to restore sight to the blind.
Totally unimpressive. There are so many people who have done these things /s
That's contradicted here:
> The two co-founders funded the company until early 2004, when Elon Musk led the company's $6.5 million Series A financing
https://www.forbes.com/sites/quora/2014/12/29/how-much-equit...
Your claim was
> They contributed no money
Where did you get this from?
Tesla also licensed the design from tZero before the series-A I believe. And Eberhard was sort of keeping tZero afloat:
FSD has been a complete lie since the beginning. Any reasonable person who followed the saga (and the name "FSD") can tell you that. It was Mobileye in 2015-2016, which worked quite well for what it is, followed by the unfulfilled "FSD next year" promise every year since.
Fool me once, shame on you; fool me twice, shame on me.
Yes, right now car sales make up 78% of Tesla's revenue. But cars have 17% margins. The energy-storage division, currently at 10% of revenue, has more like 30% margins. And the car sales are falling as the battery sales ramp up.
The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries (and things like military UAV batteries) under large enterprise B2B contracts. Which is why Tesla is pushing the "car narrative" less and less over time, seeming to fade into B2C irrelevancy — all their marketing and sales is gradually pivoting to B2B outreach.
> The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries
This seems like revisionist history. They called their company Tesla Motors, not Tesla Energy, after all.
This is a blog post from the founder and CEO about their first energy play. It seems clear that their first energy product was an unintended byproduct of the Roadster, they worried about it being a distraction from their core car business, but they decided to go ahead with it because they saw it as a way to strengthen their car business.
https://web.archive.org/web/20090814225814/http://www.teslam...
But, to support your wider point, there's some reporting that the initial grid BESS Megapack batteries had a test setup in the car park at Tesla and Elon was unaware they existed until they got mentioned to him in a meeting and someone pointed out the window to explain.
He immediately wanted to shut that project down to focus on cars.
Are we still doing this in 2025?
Uber is not a taxi company it’s a transportation company! Just wait until they roll out buses!
Juicero is not a fruit squeezing company it’s an end to end technology powered nourishment platform!
And so on. Save it for the VC PowerPoints.
Tesla is a car company. Maybe some day it’ll be defined by some other lines of business too. Maybe one day they’ll even surpass Yamaha.
I know because I bought it in March 2019 on a Model 3. (I got it because I thought it would help my elderly parents who mostly used the car.)
7500 euros completely down the drain. It still can’t even read highway speed signs. A five-year-old would be a safer driver than Tesla’s joke FSD.
They do have the audacity to send me NPS surveys on the car’s “Teslaversary.” Maybe they could guess by now that it’s a big fat zero.
However, I’m not sure that’s necessary. They lost the Tesla Roof class action suit, so it’s clearly possible to sue them.
Like with this. No, Tesla hasn't communicated any as such. Everyone knows FSD is late. But Robotaxi shows it is very meaningfully progressing towards true autonomy. And for example crushed the competition (not literally) in a recent very high-effort test in avoiding crashes on a highway with obstacles that were hard to read for almost all the other systems: https://www.youtube.com/watch?v=0xumyEf-WRI
What? They literally just moved the in car supervisor from the passenger seat to the driver seat. That's not a vote of confidence.
And I don't think you can glean anything. There are less than 20 Robotaxis in Austin, that spend their time giving rides to influencers so they can make YT videos where even they have scary moments.
They didn't "move" anything. They decided to have that for a new expansion, driving on highways. The system is progressing at a good rate, considering it compares well to Waymo even though Waymo has been there for years.
>where even they have scary moments.
More FUD. There are hours-long streams. And as I said, it compares well to Waymo. That alone should make their boardroom uneasy.
Sure, it wouldn't replace any other sensing tech, but if my car has UWB and another car has UWB, they can telegraph where they are and what their intentions are a lot faster and in a "cleaner" manner than using a camera to watch the rear indicator for illumination
Like if a company comes out with a new transportation technology and calls it "teleportation", but in fact is just a glorified trebuchet, they shouldn't be allowed to use a generic term with a well-understood meaning fraudulently. But no, they'll just call it "Teleportation™" with a patented definition of their glorified trebuchet, and apparently that's fine and dandy.
I am still bitter about the hoverboard.
Other people, most importantly your local driving laws, use driving as a technical term to refer to tasks done by the entity that's ultimately responsible for the safety of the entire system. The human remains the driver in this definition, even if they've engaged FSD. They are not in a Waymo. If you're interested in specific technical verbiage, you should look at SAE J3016 (the infamous "levels" standard), which many vehicle codes incorporate.
One of the critical differences between your informal definition and the formal one is whether you can stop paying attention to the road and remain safe. With your definition, it's possible to have a system where you're not "driving", but you still have a responsibility to react instantaneously to dangerous road events after hours of inaction. Very few humans can reliably do that. It's not a great way to communicate the responsibilities people have in a safety-critical task they do every day.
I don't understand why that is. They literally do nothing. The car drives itself. Parks itself. Does everything itself. The fact you have to engage with the wheel every now and then is because of regulation not because the tech isn't there imo. Really to me there is zero difference between the waymo and tesla experience save for regulatory decisions that prevent the tesla from being truly hands free eyes shut.
Tesla has chosen to not (yet) assume that liability, and leave that liability to the driver and requires a driver in the drivers seat. But someone in the drivers seat can override the steering wheel accidentally and cause a collision, so they likely will require the drivers seat to be empty to assume liability (or disable all controls, which is only possible on a steer by wire vehicle, and the only such vehicle in the world is Cybertruck).
Tesla has not asked for regulatory approval for level 4 or 5. When they do, it'll be interesting to see how governments react.
Still, my point is all this has nothing to do with the tech. It is all regulatory/legal checkers.
Because being a passenger in a driverless vehicle is a much better user experience than being a driver. You can be on a zoom call, sleep, watch a movie or TV show or scroll TikTok, get some work done on your computer, wear a VR headset and be in a different world, etc etc. Tesla would make a lot more money, and could charge a lot more for FSD.
They aren't doing that yet because they aren't ready yet. It's why they still have humans in the robotaxi service.
There are no doubts in my mind that they will do it probably next year. The latest version of FSD on the new cars is very, very impressive.
> you still have a responsibility to react instantaneously to dangerous road events after hours of inaction.
There are no regulatory barriers impeding Tesla outside a small handful of states (e.g. California). The fact that you still have to supervise it is an intentional aspect of the system design to shift responsibility away from Tesla.

In 2016, Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...
It was a lie then and remains a lie now.
Sure. Does Tesla take responsibility and accept liability for the vehicle's driving?
Am I the only one that noticed most of the targets are in nominal dollars, not inflation adjusted? Trump’s already prosecuting Fed leadership because they’re refusing to print money for him. Elon’s worked with him enough to understand where our monetary policy is headed.
My memory was more that you'd be able to get into (the driver's seat of) your Tesla in downtown Los Angeles, tell it you want to go to the Paris hotel in Vegas, and expect generally not to have to do anything to get there. But not guaranteed nothing.
Full Speed USB is 12Mbps, nobody wants a Full Speed USB data transfer.
Full Self Driving requires supervision. Clearly, even Tesla understands the implication of their name, or they wouldn't have renamed it Full Self Driving Supervised... They should probably have been calling it Supervised Self Driving since the beginning.
I get that there are many who rush to defend Musk/Tesla. I'm not one of them.
I was just caught off guard by the headline. To me, changing “Full Self-Driving” to “Full Self-Driving (Supervised)” doesn't merit the headline "Tesla changes meaning of ‘Full Self-Driving’, gives up on promise of autonomy".
Again, to me "Full Self-Driving" never meant you would retro-fit your Tesla to remove the steering wheel, nor even set it for someplace and go to sleep. To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
As others have pointed out, Tesla/Musk sometimes claimed more than that, but the vast majority of their statements re: FSD hew closer to what I said above. At least I think so -- no one yet has posted something where claims of more than the above are explicit and in the majority.
Autopilot in a plane generally maintains heading and altitude. It certainly can do that with or without a pilot in the cockpit, and you hear about incidents from time to time where the pilot is incapacitated and the autopilot keeps the heading and altitude until the fuel runs out. Keeping heading and altitude is insufficient to operate a plane, of course. Tesla's choice of the word Autopilot was also problematic, because the larger market of drivers doesn't necessarily understand the limitations of aviation autopilot, and many people thought the system was more capable than it actually is. An aviation-style autopilot wouldn't be much help on the road anyway: maintaining heading that way isn't useful when roads aren't completely straight, and maintaining speed is sometimes useful but has been called cruise control for decades. (Some flight automation systems can do waypoints, and autoland is a thing, but afaik it's not all put together where you put the whole thing in at once and chill, nor would that be a good idea.)
> To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.
I mean, that's sort of what the product is, although there's real safety concerns about ability for humans to context switch and intervene properly. I see how that's supervised self-driving, but not how it's full self-driving.
If I paid 90% of your invoice and said paid in full, that doesn't make it paid in full.
https://www.reuters.com/technology/tesla-video-promoting-sel...
To be clear, this is obviously a reframing from the implications Musk has made. But I still don't see adding "supervised" to the description as that big a shift for most of the use cases that have been presented in the past.
The result is that many drivers appear unaware of the benefits of defensive driving. Take all that into account, and safe 'full self-driving' may be tricky to achieve?
It's pathetic. The Austin Robotaxi demo had a "safety monitor" in the front passenger seat, with an emergency stop button. But there were failures where the safety monitor had to stop the vehicle, get out, walk around the car, get into the driver's seat, and drive manually. So now the "safety monitor" sits in the driver's seat.[1] It's just Uber now.
Do you have to tip the "safety monitor"?
And for this, Musk wants the biggest pay package in history?
[1] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...
"In a visible sign of its shifting posture from daredevil innovation to cautious compliance, Tesla this week relocated its robotaxi safety monitors, employees who supervise the autonomous software’s performance and can take over the vehicle’s operation at any moment, from the passenger seat to the driver’s seat."
And this one in Electrek.[2]
The state of Texas recently enacted regulations for self-driving cars that require more reporting. Tesla's Robotaxi with a driver is not a self-driving car under Texas law.
Musk claims Tesla will remove the safety driver by the end of the year. Is there a prediction market on that?
[1] https://gizmodo.com/tesla-robotaxi-2000653821
[2] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...
[1] https://seekingalpha.com/article/4818639-tesla-robotaxi-ambi...
None of this is independent reporting. They're reading the same social media posts and repackaging them into articles. None of these authors are based in Austin, where Robotaxi operates.
[1] https://www.statesman.com/business/technology/article/tesla-...
In the long run some of those promises might materialise. But who cares! Portfolio managers and retail investors want some juicy returns -- share price volatility is welcomed.
Electric car + active battery management were what I cared about at the time of purchase. Also, I am biased against GM and Ford due to experiences with their cars in the 80s and 90s.
I doubt I'm the only one.
(In retrospect, the glass roof was not practical in Canada and I will look elsewhere in the future)
Besides, it's hot in summer and cold in winter. I just see no benefit; it's another made-for-California feature.
Is life absurd?
Is hope a solution to absurdity?
You have been voted down, but this is proven. He has lied about his education. He never even enrolled at Stanford, and his undergraduate degree was basically a general-studies business degree.
Musk has lied, time and time again, about his education. He has never worked as an engineer. People have commented that he barely understands how to run simple Python scripts.
Tesla is pivoting its messaging toward what the car can do today. You can believe that FSD will deliver L4 autonomy to owners or not -- I'm not wading into that -- but this updated website copy does not change the promises they made to prior owners, and Tesla has not walked back those promises.
The most obvious tell of this is the unsupervised program in operation right now in Austin.
As an aside, it's wild how different the perspective is between the masses and the people who experience the bleeding edge here. "The future is here, it's just not evenly distributed," indeed.
Lol, it has been strategic manipulation all the way through. Right out of an Industrial Organisation textbook.
> Tesla has changed the meaning of “Full Self-Driving”, also known as “FSD”, to give up on its original promise of delivering unsupervised autonomy.
They have not given up on unsupervised autonomy. They are operating unsupervised autonomy in Austin TX as I type this!
Setting aside calling a driver in the driver's seat "unsupervised"... that's exactly the point. People paid for this, and Tesla is revoking its promise to deliver it, instead refocusing on attempting to operate it themselves.
I'd have no objection to this if they offered buy-backs on the vehicles in the field, but that seems unlikely.
Or are people upset about the current state of autonomous vehicles like Waymo (which has been working for Years!) and the limited launch of Robotaxi?
At any rate, I don't think they are revoking their prior promises. I expect them to deliver L4 autonomy to owners as previously promised. With that said, I'm glad they are no longer making that promise to new customers and are focusing on what the car does today, given how wrong their timelines have been. I agree it's shitty if they don't deliver that, and that they should offer buybacks if they find themselves in that position.
Nope, they gave up on that and moved them to the driver's seat.
FWIW, Tesla disputes this claim: https://x.com/robotaxi/status/1963436732575072723
That's not a fact, it's a conclusion drawn from all the other facts in the article.
Did you find the facts that support this conclusion to be false?