A robotaxi with a "low enough to be acceptable" frequency of the above failure mode is still likely to have enough occasional "full send" failure modes to make for YouTube fodder when deployed at scale, even if they're rare compared to human drivers, the other failure type, or some other standard.
Why do we hold calculators to such high bars? Humans make calculation mistakes all the time.
Why do we hold banking software to such high bars? People forget where they put their change all the time. Etc etc.
I have seen this twice in my life: one person who freaked out because they had stopped on the tracks and then turned onto them, and a second where I still have no idea how they got there.
I think there are two really big issues with the roll out of self-driving cars that are going to be hard for us to overcome:
1. Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
2. They're going to make mistakes that are extremely obvious in hindsight or from a third party perspective, that most humans will say no human would have ever done. It is likely that a human has and would have made similar and worse mistakes, and makes them at a higher rate, and we will have to accept these as a reality in a complex world.
Idea: "Waymo or Human," a site like Scrandle where we watch dashcam clips of insane driving or good driving in a challenging situation and guess if it's a human or self-driving car.
People still complain about that one cat that got run over. As if the Waymo jumped the curb and chased it down.
Personally, I won't be using one of these cars because I want to contribute to other humans' paychecks, and I would much rather use public transit than add more and more cars to more and more roads/lanes.
All of the negative publicity around the autonomous cars is justified IMO because, even if these cars are "safer" than a human, they are still clearly not as safe as they need to be when considering liability, local laws, basic driving etiquette and the cost to other humans' incomes.
Good luck rearchitecting the entire way of life of the vast majority of Americans, not to mention somehow tearing out and replacing the entirety of our transportation infrastructure. I'm generally of the persuasion that we should reduce our reliance on cars and I intentionally live in a dense city with half-decent transit but this fever dream that highly individualistic Americans are going to get on board with shared transit is just that, a fever dream.
It would be good for us, but that doesn't mean it is inevitable or even possible at this time. Acknowledging that is important because it means you invest in alternatives that may actually get adopted.
> All of the negative publicity around the autonomous cars is justified IMO because, even if these cars are "safer" than a human, they are still clearly not as safe as they need to be when considering liability, local laws and the cost to other humans' incomes.
So now we come to the other half of your argument. Waymos are safer and it isn't even close. If I am an insurance company and you are asking me to cover a human or a Waymo I'm taking the Waymo 10/10 times. Humans are actually pretty bad at driving and we're getting worse as we're more distracted, not better. The simple math of the liability is going to move us towards self-driving cars more rapidly, not slow it down.
The only other argument I see buried in here is "cost to other humans' incomes." Whether you mean gig economy workers, taxi drivers, or transit operators, I have a dozen arguments here, but the simplest is: maybe you should prioritize the 40k lives lost every year to motor vehicle accidents over income. We'll find other places for productivity and income. You don't get the dead people back.
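To put rough numbers on the insurer's side of that argument, here's a minimal expected-cost sketch in Python. Every figure in it (crash rates, miles, payouts) is a made-up placeholder for illustration, not real actuarial data; the only point is that what an insurer prices is rate times severity.

```python
# Back-of-the-envelope expected liability per vehicle-year.
# All numbers below are hypothetical placeholders, not real actuarial data.

def expected_liability(crashes_per_million_miles: float,
                       miles_per_year: float,
                       avg_payout_per_crash: float) -> float:
    """Expected annual liability cost for one vehicle: rate x severity."""
    crashes_per_year = crashes_per_million_miles * miles_per_year / 1_000_000
    return crashes_per_year * avg_payout_per_crash

# Hypothetical: human drivers crash more often per mile.
human = expected_liability(crashes_per_million_miles=4.0,
                           miles_per_year=12_000,
                           avg_payout_per_crash=25_000)

# Hypothetical: the robotaxi crashes far less often, but each incident
# settles higher (deep-pocket defendant).
robotaxi = expected_liability(crashes_per_million_miles=1.0,
                              miles_per_year=12_000,
                              avg_payout_per_crash=75_000)

print(f"human:    ${human:,.0f}/year")    # $1,200/year with these inputs
print(f"robotaxi: ${robotaxi:,.0f}/year") # $900/year with these inputs
```

With these made-up inputs the robotaxi is cheaper to insure even though each of its incidents settles for more, because the drop in rate dominates. The comparison flips if per-incident payouts get large enough, which is exactly the nine-figure punitive-damages worry raised further down the thread.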
Many cities could do better to have more robust public transit, but the reality is America is vast and people commute long distances regularly. The cost of deploying such vast amounts of public transit would be prohibitively expensive.
I used to believe this, but I'm not sure it is actually true for a large percentage of Americans. There is some unmet demand that would be satisfied, but beyond that, most Americans value their individualism and control (even if it is controlling where a driver takes them via an app) too much unless they were raised around good transit. That means that even if we build good transit, it will probably take more than a generation before people use it fully and effectively.
Functionally, they're no different than bus lanes or a wide shoulder. Humans drive on them all the time, because there's no traffic on them and they can get to where they're going faster. They shouldn't, it's illegal, and they can get ticketed for it, but they do it anyway. If you load up google street view in Phoenix/Tempe/Gilbert you can see a few people driving on them.
EDIT: anecdotally, at least for this type of ground-level light rail, I've seen people drive on similar streetcar tracks (not shared with cars) in Toronto more than once.
The difference, of course, is that when a human does it we just say "what an idiot!" But when a machine does it, some people say "well, obviously machines can't drive and can never drive as well as a human can," which is silly.
I do think it's fair to argue that this is probably an oversight that can be corrected now that it has been revealed, though.
So, in the interests of avoiding needless regulation that would make us less safe, I think it's important to point out that these comparisons are unfair & examples are typically extremely rare edge cases.
However, I think that driving on tram tracks is unacceptably bad: it is something a good driver would simply never do outside of really strange circumstances (like complete vision loss due to heavy storm weather). This shouldn't be used as a lone example to bar autonomous vehicles, but it should be properly recognized as unacceptable to repeat.
(Sure, within the wide boundaries of mental health, it is not impossible.)
On a non-separated rail like in the video, where you can just turn off at any time, I can see a lot of people continuing to do it just to spite their spouse for screeching about the obvious.
Or, on the other side of the coin, I can see a lot of people saying nothing because it's probably fine and they'd rather not have the argument.
Folks in this thread are trying to compare Waymo to human driving as some sort of expectation-setting threshold. If humans can’t be perfect, why should we expect machines to be?
We don’t expect humans to be perfect. When a human breaks the law we punish them. When they are sued civilly and found liable, we take their money/property.
There’s also a sense of self-preservation that guides human decision making that doesn’t guide computers.
Until we account for the agency that comes along with accountability, and the self-preservation mechanisms that keep humans from driving someone else onto a light rail track, we are making a false equivalence in saying that somehow we can’t expect machines to be as good as humans. We should expect exactly that if we’re giving them human agency but not human accountability, and while they still lack the sense of preservation of self or others.
I, as a somewhat normal driver, am not personally at much risk if some other driver decides to drive on the rails. That won't be true if I'm in a Waymo and there's nothing I can do about its bugs.
And I don't blame people who are skeptical that Waymo will be properly punished. In fact, do you suppose they were punished here?
Some truth to this, but a machine failure can be patched for all cars. There's no effective way to patch a problem with all human drivers.
If Waymo et al can successfully limit payouts to that, they'll be fine.
OTOH, if every death or serious injury comes with a 9 figure+ punitive damage award, they'll be in trouble.
I think states that nudge but don't completely require insurance have it right (I live near one of them). Even though most people have insurance, the plausible threat of having to actually litigate against someone and attempt to collect seems to put huge downward pressure on everything, saving even the insured a ton of money and more than offsetting the "risk" of sharing a state with uninsured people. Having laughably low minimums is the next best thing.
There's also a lot more campaign donation money for huge train projects from engineering and construction firms.
Oof, psychopath much?
Does this car "feel"?
When these shit suckers peddling these slop slingers want you to anthropomorphize them, it's a positive signal to stop doing that.
> or at all
That's called anthropomorphizing, as noted in my gp, and it is a different phenomenon from empathy.
> Anthropomorphism (from the Greek words "ánthrōpos" (ἄνθρωπος), meaning "human," and "morphē" (μορφή), meaning "form" or "shape") is the attribution of human form, character, or attributes to non-human entities [0]
"There is a medical condition known as delusional companion syndrome where people can have these feelings of empathy to a much more extreme extent and can be convinced that the objects do have these emotions — but it is much less common than the average anthropomorphizing, Shepard said." [1]
[0] https://en.wikipedia.org/wiki/Anthropomorphism
[1] https://www.cnn.com/2024/09/07/health/empathize-inanimate-ob...
It is an internal (to the person experiencing it) phenomenon. Feeling empathy does not require the object of the empathy to be intelligent, have emotions, or even thoughts. And it does not require the person experiencing it to believe that the object has those attributes; it does not require anthropomorphizing.
People feel empathy towards all sorts of people, things, and groups.
There, fixed it.