https://www.iihs.org/news/detail/high-visibility-clothing-ma...
For one thing, I'd be worried that retroreflective tape could be so bright in the dark that it blows out the cameras.
By the way, there's a medium-term adaptation mechanism in eyes that's really cool in its simplicity: they measure light intensity via the photo-decay of a chemical substance that is produced slowly. If there is a lot of light, the substance decays shortly after production. If there is little light, it accumulates for about half a minute, massively increasing sensitivity. The quantum efficiency (roughly, the inverse of how many photons it takes to produce a signal) of a dark-adapted eye is about 0.3: https://www.nature.com/articles/ncomms12172#MOESM482
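As a toy sketch of that mechanism (my own illustration; the rate constants are made up, not from the paper):

    # Toy model of medium-term light adaptation: a sensitizing substance S
    # is produced at a constant rate and photo-destroyed at a rate
    # proportional to light intensity I, so dS/dt = production - k*I*S.
    # Steady state is production/(k*I), i.e. sensitivity ~ 1/brightness.
    # All constants are invented for illustration.
    def simulate(intensity, seconds, s0=0.0, production=1.0, k=0.1, dt=0.1):
        s = s0
        for _ in range(int(seconds / dt)):
            s += (production - k * intensity * s) * dt
        return s

    bright = simulate(intensity=100.0, seconds=60)         # decays fast: low S
    dark = simulate(intensity=0.1, seconds=60, s0=bright)  # accumulates: high S
    print(f"sensitivity proxy in bright light: {bright:.2f}")
    print(f"after a minute in the dark:        {dark:.2f}")

The substance level, and hence the sensitivity proxy, ends up hundreds of times higher after the stretch in the dark.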
Many cars have multiple cameras and could (can?) run them at different exposures. Or run at a very high frame rate, cycle each frame through a different exposure setting, and compute an HDR video on the fly.
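Exposure fusion is cheap enough that something like this seems plausible per frame; a minimal sketch with OpenCV (the device index and exposure values are placeholders, and a real automotive pipeline would differ):

    # Sketch: bracket exposures frame by frame and fuse them into one
    # HDR-ish frame. Device index and exposure values are placeholders.
    import cv2

    cap = cv2.VideoCapture(0)
    merge = cv2.createMergeMertens()  # exposure fusion; no tone mapping step

    exposures = [-6, -3, 0]  # bracketed settings; units are driver-specific
    while True:
        bracket = []
        for ev in exposures:
            cap.set(cv2.CAP_PROP_EXPOSURE, ev)
            ok, frame = cap.read()
            if ok:
                bracket.append(frame)
        if len(bracket) == len(exposures):
            fused = merge.process(bracket)  # float32 image in [0, 1]
            cv2.imshow("hdr", (fused * 255).clip(0, 255).astype("uint8"))
        if cv2.waitKey(1) == 27:  # Esc quits
            break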
https://my.clevelandclinic.org/health/body/photoreceptors-ro...
I know my Sony camera has pixels that, collectively, can be attached or not attached to a capacitor that reduces the sensitivity
https://www.photonstophotos.net/Aptina/DR-Pix_WhitePaper.pdf
I wonder if you could have a matrix of high and low sensitivity pixels that would give a camera more headroom.
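Some HDR sensor designs do roughly this. A hedged sketch of reconstructing a checkerboard of high- and low-sensitivity pixels (the 16x ratio and the 12-bit clip point are assumptions):

    # Sketch: merge a checkerboard mosaic of high- and low-sensitivity
    # pixels into one linear HDR image. Ratio and clip point are assumed.
    import numpy as np

    GAIN_RATIO = 16.0  # high-sensitivity pixels see 16x the signal (assumed)
    CLIP = 4095        # 12-bit saturation (assumed)

    def merge_mosaic(raw):
        """raw: 2-D array; even checkerboard sites are high-sensitivity."""
        h, w = raw.shape
        yy, xx = np.mgrid[0:h, 0:w]
        high = (yy + xx) % 2 == 0

        # Bring everything onto the low-sensitivity scale.
        lin = np.where(high, raw / GAIN_RATIO, raw).astype(np.float64)

        # Where high-sensitivity pixels clipped, fill in from the four
        # low-sensitivity neighbours (a real pipeline would demosaic).
        clipped = high & (raw >= CLIP)
        p = np.pad(lin, 1, mode="edge")
        neigh = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4
        lin[clipped] = neigh[clipped]
        return lin  # ~16x more headroom than high-sensitivity pixels alone

    demo = np.random.randint(0, 4096, size=(4, 6))
    print(merge_mosaic(demo))

The tradeoff is resolution: each sensitivity class only covers half the pixel sites.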
I've caught (and fixed) this issue before at my own employers.
The gold standard for standardized AEB testing is Euro NCAP. You can see the testing protocol [1] explicitly specifies [2] a fixed-size adult human target with black hair, a black shirt, and blue pants, with precise visible, infrared, and radar cross-sections. I lack sufficient knowledge to comment on whether those characteristics are representative, but I will assume that they are.
While precise test characteristics are valuable for test reproduction and comparative analysis, they make it very easy for manufacturers to overfit and make their systems seem safer than they actually are in generalized circumstances, whether accidentally or intentionally.
[1] https://cdn.euroncap.com/media/58226/euro-ncap-aeb-vru-test-...
[2] https://www.acea.auto/files/Articulated_Pedestrian_Target_Sp...
But similar enough to turnout gear worn by many North American fire departments.
The idea that they can't deal with stationary obstacles just makes it all worse, because obstacles happen constantly.
The concept that biological systems have made 3D vision, navigation, and object avoidance work without LIDAR is certainly attractive.
But there is a LOT more to it than just a photosensor and a bunch of calculations. The sensors themselves have many properties unmatched by cameras, including wider dynamic range, processing in the retina and optic nerve itself, and more. And the intelligence attached to every biological eye is built upon a body that moves in 3D space, so it has a LOT of alternate sensory input to fuse into an internal 3D model and processing space. We are nowhere near being able to replicate that.
The more appropriate analogy would be the wheel or powered fixed wing aircraft. Yes, we're finally starting to be able to build walking robots and wing-flapping aircraft, and those may ultimately be the best solution for many things. But, in the meantime, the 'artificial' solution of wheels and fixed airfoils gets us much further.
Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.
Why though? How could it possibly be better than camera plus other sensors?
For example, several drivers of Tesla vehicles have been beheaded when a semi-truck turned or crossed in front of them and the car, on Autopilot, evidently identified the white side of the trailer as sky and drove right under it, shearing off the roof and the occupant's head. LIDAR would have identified a large flat object whose range was decreasing at roughly the speed of the vehicle, and presumably the self-driving system would have taken different action.
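That's the appeal of a direct range measurement: the check is almost trivial and doesn't depend on what the obstacle looks like. A sketch (the threshold is a made-up value, not any vendor's logic):

    # Sketch: flag a closing object from successive range measurements.
    # The TTC threshold is invented for illustration.
    def should_brake(range_m, prev_range_m, dt_s, min_ttc_s=2.0):
        closing_speed = (prev_range_m - range_m) / dt_s  # m/s; >0 means closing
        if closing_speed <= 0:
            return False
        time_to_collision_s = range_m / closing_speed
        return time_to_collision_s < min_ttc_s

    # Trailer broadside ahead, ego car closing at ~25 m/s, lidar at 10 Hz:
    print(should_brake(range_m=45.0, prev_range_m=47.5, dt_s=0.1))  # True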
edit: I think perhaps you didn't quite mean it that way, but it sounded like you were saying "eventually, camera-only will be superior to any other possible system, including camera + other sensors", which just seems nonsensical.
To your point, there are MANY situations where that will never be true, where the extra 3D info will absolutely make a difference.
Going back to the wheel/leg and fixed/flapping wing analogies: legs and flapping wings will likely always be superior for rough terrain and tight spaces. However, legged vehicles will never be as fast as wheels can go on smoother roads, and similarly, flapping wings will never be superior to fixed wings plus power for long-haul or heavy air transport.
So, you're right — it's Horses For Courses — different solutions will work best in different situations.
Like echolocation in bats and dolphins... Excellent point!
In fact, humans do have some echolocation capability [0,1]. That should tell us that LIDAR (or, more generally, an emitter-receiver range-finding capability) may always be a core piece of the solution.
[0] https://en.wikipedia.org/wiki/Human_echolocation
[1] https://www.smithsonianmag.com/innovation/how-does-human-ech...
Having the same sensors as a human but being more attentive would be a step up. That said, I think camera-only is not good enough for now.
Peppering a few webcam-quality cameras around a car and plugging them into an Intel Atom processor probably won't be better than our eyes and brain, even if the cameras don't blink or get tired. It's only going to get better though.
And, vision can't work in/penetrate heavy snow or fog, which is transparent to radar.
Vision is an indirect measurement; lidar/radar is a direct measurement. I'm curious whether there are any other safety-critical systems that use such massively indirect measurements.
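The distinction is concrete: lidar range falls straight out of time of flight, while stereo vision has to infer depth from pixel disparity, with error growing roughly quadratically with distance. A sketch of both (the focal length and baseline are example values):

    # Direct vs indirect range measurement, side by side.
    C = 299_792_458.0  # speed of light, m/s

    def lidar_range_m(round_trip_s):
        """Direct: range falls straight out of the echo time."""
        return C * round_trip_s / 2.0

    def stereo_depth_m(disparity_px, focal_px=1000.0, baseline_m=0.3):
        """Indirect: depth inferred from disparity between two cameras.
        focal_px and baseline_m are example values; a 1 px matching error
        shifts the estimate by roughly depth^2 / (focal * baseline)."""
        return focal_px * baseline_m / disparity_px

    print(lidar_range_m(300e-9))  # 45.0 m from a 300 ns echo
    print(stereo_depth_m(6.667))  # ~45.0 m, if the pixel match is right
    print(stereo_depth_m(5.667))  # the same scene with a 1 px error: ~52.9 m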
Does a truly featureless wall/road with no visible edges actually exist in the wild? I'd expect cameras with high enough resolution, spacing, and FOV to handle any real-world examples, but maybe I'm wrong.
At this point it does not feel safe to trust self-driving cars or assistive systems designed, built, and tested primarily in California or the southern US, for one simple reason: they do not get the range of adverse weather conditions that drivers in the rest of the world have to deal with and adapt safely to on a regular basis. It's easy to make a self-driving car that "works" on California-style freeways, which are almost never under construction because they don't wear out as fast. In other places, like eastern Ontario, we sometimes have to deal with temperature shifts from -30C to +10C in 24 hours, salt our roads like crazy in the winter, and have a much wider range of typical weather conditions. These all take a significant toll on road infrastructure, and mean that what are rare corner cases in California become regular events elsewhere. We have 2 seasons where I live: construction season and winter. Based on several published reports of self-driving cars hitting parked emergency vehicles or getting confused by lanes in construction zones, I simply do not trust that the current widely available "self-driving" vehicles are provably safe outside of the near-ideal conditions present in California.
At least Waymo seems to be quite hesitant about rolling out to cities that have less favourable weather.
What I would like to see is for regulatory bodies with a safety first approach to accidents (similar to how the FAA investigates and regulates commercial aircraft) be involved in setting the criteria for the design, testing and regulation of self driving cars and driver assistance systems. Reading reports and watching shows about the root cause analysis of airplane crashes is fascinating, and it shows just how hard it is to learn how to make large and complex real world systems safe. It has taken plenty of deaths to get us to the point where commercial flights are safer than the trip to the airport, and it will take many more deaths before self driving cars are appreciably better than humans. Some of the most important lessons from aviation are about the interaction between pilot(s), crew and automation, and how those systems fail.
Test cases / data for self driving cars should be shared and made public. If we're trusting our lives to a piece of software, we should be able to see how well it does across standard test cases that the industry has encountered and developed, and be able to help add more. Capitalism does many things well, but making things safe for humans is not one of them.
...only if you count the field of view you get from moving your eyeballs. You wouldn't say a PTZ camera has "360 FOV" just because it can rotate around. The "576 megapixel" figure is also questionable. Peak resolution only exists in the fovea. Everywhere else is blurry and much lower resolution. You don't notice this because your eyes do it automatically, but the actual information you can receive at any given time is far less than 576 megapixels.
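Back-of-envelope on where the 576 figure comes from and why it overstates things (my arithmetic, using commonly cited approximate numbers: ~0.3 arcmin peak acuity, ~120 degree field, fovea spanning ~2 degrees):

    # The 576 MP figure assumes foveal acuity across the whole field.
    # All numbers are common approximations, not precise measurements.
    ACUITY_ARCMIN = 0.3  # peak (foveal) resolution

    def megapixels(field_deg):
        per_axis = field_deg * 60 / ACUITY_ARCMIN  # "pixels" along one axis
        return per_axis ** 2 / 1e6

    print(f"peak acuity over a 120 deg field:  {megapixels(120):.0f} MP")  # 576
    print(f"peak acuity over the ~2 deg fovea: {megapixels(2):.2f} MP")    # 0.16

So the headline number only holds if the entire field were foveal; the region that actually resolves at that level is a tiny fraction of it.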
I worry that hitting a pedestrian at night is the most likely way I'd seriously hurt somebody, and I want to encourage automakers to prioritize the safety of pedestrians and other road users, so Subarus will be high on my list the next time I'm shopping for a car.
Current latest gen Subaru has a forward radar.
The 2025 models have 3 forward cameras, no radar.
They don't mess with shipped cars, so they don't care about legacy compatibility, which makes it hard to keep track of changes.
https://www.hmpgloballearningnetwork.com/site/emsworld/news/...
I'm not clear from that how many trials were run for each test condition, but the percentage is average speed reduction, not a probability of a binary hit/no-hit outcome. edit: the paper PDF says up to three trials each.
No. It was all three vehicles. The table is average speed reduction.
Reflective strips had a lower average speed reduction than black clothing in every case except for the Subaru at 0 and 20 lux and the Honda at 0 lux.
The Subaru, though, was in an entirely different class. It worked so well. The thing would drive me around curves on a somewhat winding country road. Comfortably brought me to a stop behind a stopped car. Etc. I bought the Subaru.
According to an engineer I was in contact with, the Subaru EyeSight system was (at the time, and maybe still is) their crown-jewel system.
Even the automatic headlights are by far the worst I have ever used (and that includes a 1996 Ford Taurus). They only reliably stay off for a few hours around noon on overcast days, when the illumination is bright and diffuse. Otherwise they toggle on and off while I am driving through shadows (including self-shadowing when I turn away from the sun).
It’s kind of terrible, because it teaches me bad habits: I learn to depend on the car, and then when I drive a conventional car I’m more vulnerable, since I’ve been trained to let the car take over.
And if I calmly but consistently hold left to keep it left a bit, the PID loop ramps up trying to center, and if I let go of the wheel it wants to swing into the right lane.
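That behaviour reads like classic integral windup. A toy PI simulation of the effect (gains, timing, and the disturbance are all invented, purely to illustrate; the derivative term is omitted):

    # Toy lane-centering loop with integral windup: the driver holds a
    # constant leftward offset, the integral term ramps up to fight it,
    # and when the driver lets go the stored correction swings the car
    # right past center. All constants are invented.
    KP, KI, DT = 0.5, 0.2, 0.05

    position, integral = 0.0, 0.0
    for step in range(400):
        push = -0.5 if step < 200 else 0.0  # driver holds left, then lets go
        error = 0.0 - position              # controller wants the center
        integral += error * DT
        steering = KP * error + KI * integral
        position += steering * DT + push * DT
        if step % 100 == 99:
            print(f"t={step * DT:5.1f}s  position={position:+.3f}  "
                  f"integral={integral:+.3f}")

Real lane-keeping controllers clamp or bleed off the integrator for exactly this reason.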
Especially in situations where a lot of emergency vehicles are parked with their lights on, away from city lighting. It's often very disorienting, and for me it reduces visibility of the surroundings when passing by. Those traffic situations require additional caution because there could be people and debris on the road, yet the lights might reduce passing drivers' ability to properly see them.
It's probably also a problem for car safety systems.
Maybe the emergency lights and reflective strips got too good, to the point where they start causing harm. Emergency lights could easily adjust automatically to ambient lighting conditions.
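The adjustment really would be simple; a hedged sketch of the mapping (the lux breakpoints and output levels are arbitrary guesses, not any standard):

    # Sketch: scale emergency-light output with ambient illumination.
    # Breakpoints and levels are arbitrary guesses, not a standard.
    import math

    def light_level(ambient_lux):
        """Map ambient lux to a relative output level in [0.1, 1.0]."""
        if ambient_lux >= 10_000:  # daylight: full power to stay conspicuous
            return 1.0
        if ambient_lux <= 1:       # rural darkness: dim way down
            return 0.1
        frac = math.log10(ambient_lux) / math.log10(10_000)  # log interpolation
        return 0.1 + 0.9 * frac

    print(light_level(50_000))  # noon: 1.0
    print(light_level(100))     # dusk: ~0.55
    print(light_level(0.5))     # dark country road: 0.1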
(Mazda/Honda definitely need to get better; the data shows it's possible, not arguing with that fact)
Ambulances in the Australian Capital Territory use steady burn amber perimeter lights when stopped on roadsides. Makes the outline of the vehicle more conspicuous, tends not to encourage rubbernecking and doesn't dazzle people.
https://www.carpro.com/blog/almost-all-new-vehicles-have-aut...
Why only those three?
It's a bit like the yogurt that comes with an economy-class meal. Evaluations made for the cup on one flight might not apply to a flight on a different day, or to its return flight. Shouldn't it be the brand on the cup, not the one on the headrest, that gets named in reports?
No exec who killed spending on better/more sensors for more profit will be punished, certainly not with jail.
No coder or their manager who missed a bug will be punished or go to jail.
So why would they even worry about it? One less thing.
Nor should they.
These are not self-driving features; they're an assist to make up for shitty drivers and shitty pedestrians. These features basically only exist for the occasions when two dumbasses meet: an inattentive driver meets a pedestrian who doesn't check for oncoming traffic before crossing the road.
Jailing developers because it's not perfect sounds like an excellent way to prevent new safety technology from ever being developed again.
Considering we rarely jail drivers for killing people, the idea of jailing developers for failing to prevent drivers from killing people sounds nuts.
Perhaps if Tesla wants to be treated like a regular car company they should get a regular car company CEO?
edit: the beliefs I'm referring to: guests claiming space is fake, and then a guy who literally manages rockets coming on