59 points by Kye 3 days ago | 19 comments
  • thorncorona 3 days ago
    If you want the unsummarized source and not the chatgpt summarized version:

    https://www.iihs.org/news/detail/high-visibility-clothing-ma...

    • pfedak 3 days ago
      the chart in the streetsblog article puts some values in the wrong boxes, too. pathetic
      • neom 3 days ago
        wow, not just some, it tells a totally different story, that's awful. (edit: I emailed the author. Further edit, author reply: "I've corrected my chart. I did not use a tool to extract the data. I actually think the chart you are looking at (on the left) was updated from the one I originally received and I may have been working off the earlier one.")
  • PaulHoule 3 days ago
    I'm fascinated with weird cameras and noticed that the #1 requirement of automotive cameras is the ability to deal with extreme variations in brightness both between frames and within a frame.

    For one thing I'd be worried that retroreflective tape could be crazy bright in the dark and could blow out the cameras.

    • ahartmetz 3 days ago
      The instantaneous "HDR" capability of biological eyes is really quite amazing. About 5 orders of magnitude for human eyes, about 2-3 for most cameras.

      By the way, there's a medium-term adaptation mechanism in eyes that's really cool in its simplicity: they measure light intensity by photo-decay of a chemical substance that is produced slowly. If there is a lot of light, the substance decays shortly after production. If there is little light, it accumulates for about half a minute, massively increasing sensitivity. The quantum efficiency (the inverse of how many photons it takes to produce a signal) of a dark-adapted eye is about 0.3: https://www.nature.com/articles/ncomms12172#MOESM482
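      Back-of-the-envelope, those orders of magnitude translate to photographic stops by multiplying by log2(10); a tiny sketch (my numbers, not from the linked paper):

```python
import math

def orders_to_stops(orders_of_magnitude):
    """Convert base-10 orders of magnitude of dynamic range to photographic stops (base-2)."""
    return orders_of_magnitude * math.log2(10)

print(f"eye (~5 orders): {orders_to_stops(5):.1f} stops")                              # ~16.6 stops
print(f"camera (~2-3 orders): {orders_to_stops(2):.1f} to {orders_to_stops(3):.1f} stops")  # ~6.6 to ~10 stops
```

      So ~5 orders for the eye is roughly 16-17 stops, versus roughly 7-10 stops for a single camera exposure.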

      • andix 3 days ago
        Is the 2-3 just for single frames, or does it already include all the tricks cameras can do to get more dynamic range?

        Many cars have multiple cameras and could (can?) run them at different exposures. Or run at a very high frame rate and take every frame with multiple exposure settings, and calculate an HDR video on the fly.
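        The multi-exposure idea can be sketched in a few lines. This is a naive exposure-fusion toy (my own illustration, assuming a linear sensor response; real automotive HDR pipelines are far more involved):

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Naive HDR merge: divide each frame by its exposure time to estimate scene
    radiance, then average, weighting each pixel by how well-exposed it is
    (values near 0 or saturation get little weight). Assumes a linear sensor."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    radiance_sum = np.zeros_like(frames[0])
    weight_sum = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        # triangle weight: peaks at mid-gray, falls to zero at the clipped ends
        w = 1.0 - np.abs(frame / 255.0 - 0.5) * 2.0
        radiance_sum += w * frame / t
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-9)

# synthetic scene: one dark region, one bright region
scene = np.array([0.5, 200.0])            # "true" radiance
short = np.clip(scene * 0.1, 0, 255)      # short exposure keeps the highlight
long_ = np.clip(scene * 10.0, 0, 255)     # long exposure lifts the shadow (bright pixel clips)
hdr = merge_exposures([short, long_], [0.1, 10.0])
print(hdr)  # recovers ~[0.5, 200.0] despite clipping
```

        Pixels that clip in one exposure get near-zero weight there and are recovered from the other frame, which is exactly the trick needed for headlights-plus-dark-pedestrian scenes.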

    • AlotOfReading 3 days ago
      The processing pipeline is just as important as the camera hardware here. It's difficult to build an appropriate system by gluing together off the shelf software and many people writing automotive requirements aren't even aware of the failure modes. When it goes to the tier ones, they'll just throw things together until it meets the requirements and nothing more.

      I've caught (and fixed) this issue before at my own employers.

  • Veserv 3 days ago
    Very likely a case of tuning to the standard safety tests.

    The gold standard for standardized AEB testing is in the Euro NCAP. You can see the testing protocol [1] explicitly specifies [2] a fixed size human adult with black hair, black shirt, blue pants with a precise visible, infrared, and radar cross-section. I lack sufficient knowledge to comment on whether those characteristics are representative, but I will assume that they are.

    While precise test characteristics are valuable for test reproduction and comparative analysis, they make it very easy for manufacturers to overfit and make their systems seem safer than they actually are in generalized circumstances, whether accidentally or intentionally.

    [1] https://cdn.euroncap.com/media/58226/euro-ncap-aeb-vru-test-...

    [2] https://www.acea.auto/files/Articulated_Pedestrian_Target_Sp...

    • jonas21 3 days ago
      Someone should sell a shirt and pants made to those specifications.
  • bigfatkitten 3 days ago
    > with a reflective strips in a configuration similar to those worn by roadway workers (though their safety gear is generally bright orange or yellow rather than black).

    But similar enough to turnout gear worn by many North American fire departments.

    • tlavoie 2 days ago
      I wonder too if the reflective markers are also messing with self-driving vehicles that hit stopped fire trucks. It's been a few years, but this article on Teslas hitting fire vehicles was sobering. https://www.wired.com/story/tesla-autopilot-why-crash-radar/

      The idea that they can't deal with stationary obstacles just makes it all worse, because obstacles happen constantly.

  • throwaway48476 3 days ago
    If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar. Camera only approaches will not stand the test of time.
    • toss1 3 days ago
      Yup.

      The concept that biological systems have made 3D vision, navigation, and object avoidance work without LIDAR is certainly attractive.

      But there is a LOT more to it than just a photosensor and a bunch of calculations. The sensors themselves have many properties unmatched by cameras, including wider dynamic range, processing in the retina and optic nerve itself, and more, and the intelligence attached to every biological eye also is built upon a body that moves in 3D space, so has a LOT of alternate sensory input to fuse into an internal 3D model and processing space. We are nowhere near being able to replicate that.

      The more appropriate analogy would be the wheel or powered fixed wing aircraft. Yes, we're finally starting to be able to build walking robots and wing-flapping aircraft, and those may ultimately be the best solution for many things. But, in the meantime, the 'artificial' solution of wheels and fixed airfoils gets us much further.

      Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.

      • cmiller1 3 days ago
        > Ultimately, camera-only vision systems will likely be the best solution, but until then, integrating LIDAR will get us much further.

        Why though? How could it possibly be better than camera plus other sensors?

        • toss1 3 days ago
          Because LIDAR specifically gives you the range or distance to each object. While in theory this should be possible with multiple cameras and stereoscopic vision/analysis, it obviously is not as simple in practice as it seems in theory. The additional depth info is also critical in identifying objects.

          For example, several drivers of Tesla vehicles have been decapitated when a semi-truck turned/crossed in front of them and the car on Autopilot evidently identified the white side of the trailer as sky and drove right under it, shearing off the roof. LIDAR would have identified a large flat object closing at approximately the speed of the vehicle, and presumably the self-driving system would have taken different action.
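          For intuition, the stereo relation is Z = f·B/d (focal length times baseline over disparity). A toy sketch with made-up numbers, which also shows the failure mode on featureless surfaces where no disparity can be measured:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        # a textureless surface (e.g. the white side of a trailer) may yield no match at all
        raise ValueError("no disparity: unmatched point or point at infinity")
    return focal_px * baseline_m / disparity_px

# made-up geometry: 1400 px focal length, 35 cm camera spacing, 14 px disparity
print(stereo_depth_m(1400, 0.35, 14))  # ~35 m
```

          Note that a white trailer against a white sky gives the matcher nothing to lock onto, so the disparity (and hence the depth) is simply undefined, while a LIDAR return does not care about texture.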

          • hn_acc1 3 days ago
            I think you misunderstood the question you responded to. The question was in response to (edit: your own conclusion) "ultimately, camera-only will be the best system" - and was wondering why the same camera system PLUS more sensors wouldn't be better?

            edit: I think perhaps you didn't quite mean it that way, but it sounded like you were saying "eventually, camera-only will be superior to any other possible system, including camera + other sensors", which just seems nonsensical.

            • toss1 3 days ago
              Ah, got it now, and it's an excellent point. I was mostly thinking the idea of cameras(+intelligence)-only may be best in the end, from the perspective of smallest effective resources and the assumption that biomimicry is best.

              To your point, there are MANY situations where that will never be true, where the extra 3D info will absolutely make a difference.

              Going back to the wheel/leg and fixed/flapping wing analogues; legs and flapping wings will likely always be superior for rough terrain and tight spaces. However, legged vehicles will never be as fast as wheels can go on smoother roads, and similarly, flapping wings will never be superior to fixed-wings+power for long haul or heavy air transport.

              So, you're right — it's Horses For Courses — different solutions will work best in different situations.

      • nomel 2 days ago
        Whenever I walk up to a chainlink fence, and my vision places it at the wrong z distance, I'm reminded that 3d from vision is a consequence of our biological limitation of not having evolved emitters.
    • thebruce87m 3 days ago
      > If the goal is to be safer than a human driver then it will require better than human sensors, such as lidar.

      Having the same sensors as a human but being more attentive would be a step up. That said, I think camera-only is not good enough for now.

      • bigfatkitten 3 days ago
        The sensors they have now aren't even as good as a human. The cameras have nowhere near the dynamic range of the average human eyeball, which isn't even particularly spectacular as far as eyes go.
        • thebruce87m 3 days ago
          I agree. My point was from “all other thing being equal” that extra attentiveness on its own is a plus. I work in computer vision AI and am aware of current limitations.
      • hn_acc1 3 days ago
        So, actual androids?
    • standeven 3 days ago
      I think vision-only approaches can work, but our eyes and brain are amazing and it would take some serious hardware. Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency. Plus there are 6 billion neurons in the visual cortex alone to process the images, which are then fed to another 80 billion neurons that can interpret and react to the data.

      Peppering a few webcam-quality cameras around a car and plugging it into an Intel Atom processor probably won't be better than our eyes and brain, even if the cameras don't blink or get tired. It's only going to get better though.

      • nomel 2 days ago
        There are corner cases where vision cannot work because it's not mathematically possible, like a featureless wall (or featureless ground, in the case of the recent Mars crash).

        And, vision can't work in/penetrate heavy snow or fog, which is transparent to radar.

        Vision is an indirect measurement; lidar/radar is a direct measurement. I'm curious whether there are any other safety-critical systems that rely on such massively indirect measurements?
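        To illustrate the "direct" part: a lidar range needs nothing but a clock, since range = c·t/2 for the round trip. A toy illustration (the echo time is made up):

```python
C_M_PER_S = 299_792_458.0  # speed of light

def lidar_range_m(round_trip_s):
    """Range from a time-of-flight echo: the pulse travels out and back, so R = c * t / 2."""
    return C_M_PER_S * round_trip_s / 2.0

print(lidar_range_m(200e-9))  # a 200 ns echo corresponds to ~30 m
```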

        • standeven a day ago
          Good point regarding snow and fog, but I’m assuming operation would slow or stop in those conditions and that would be acceptable.

          Does a truly featureless wall/road with no visible edges actually exist in the wild? I’d expect cameras with high enough resolution, spacing, and FOV would handle any real world examples but maybe I’m wrong.

          • bcrl 16 minutes ago
            For those of us who live in areas with lots of snowy winter conditions, you can't just hand wave that requirement away (as we in industry do so often while developing software)!!!! Winter driving conditions are some of the most important times for driver assistance systems to help drivers out, especially young drivers. Tunnel vision while driving in snow is a very real thing that drivers encounter often enough where I live, and there have been plenty of times when I had to drive home on roads with barely visible boundaries; you figure out where the road is based on the ditch and the sound your tires make when they hit the edge of the road combined with a copious reduction in speed to give you time to recover.

            At this point it does not feel safe to trust self driving cars or assistive systems designed, built and tested primarily in California or the southern US for one simple reason: they do not get the range of adverse weather conditions that drivers in the rest of the world have to deal with and adapt safely to on a regular basis. It's easy to make a self driving car that "works" on California style freeways which are almost never under construction because they don't wear out as fast. In other places like eastern Ontario we sometimes have to deal with temperature shifts from -30C to +10C in 24 hours, salt our roads like crazy in the winter, and have a much wider range of typical weather conditions. These all take a significant toll on road infrastructure, and mean that what are rare corner cases in California become regular events elsewhere. We have 2 seasons where I live: construction season and winter. Based on several published reports of self driving cars hitting parked emergency vehicles or lane confusion in construction zones, I simply do not trust that the current widely available "self driving" vehicles are provably safe outside of the near ideal conditions present in California.

            At least Waymo seems to be quite hesitant about rolling out to cities that have less favourable weather.

            What I would like to see is for regulatory bodies with a safety first approach to accidents (similar to how the FAA investigates and regulates commercial aircraft) be involved in setting the criteria for the design, testing and regulation of self driving cars and driver assistance systems. Reading reports and watching shows about the root cause analysis of airplane crashes is fascinating, and it shows just how hard it is to learn how to make large and complex real world systems safe. It has taken plenty of deaths to get us to the point where commercial flights are safer than the trip to the airport, and it will take many more deaths before self driving cars are appreciably better than humans. Some of the most important lessons from aviation are about the interaction between pilot(s), crew and automation, and how those systems fail.

            Test cases / data for self driving cars should be shared and made public. If we're trusting our lives to a piece of software, we should be able to see how well it does across standard test cases that the industry has encountered and developed, and be able to help add more. Capitalism does many things well, but making things safe for humans is not one of them.

      • gruez 3 days ago
        >Our eyes have a 200-degree FOV, providing a 576 megapixel landscape, with 13ms of latency.

        ...only if you count the field of view you get from moving your eyeballs. You wouldn't say a PTZ camera has "360 FOV" just because it can rotate around. The "576 megapixel" figure is also questionable. Peak resolution only exists in the fovea. Everywhere else is blurry and much lower resolution. You don't notice this because your eyes do it automatically, but the actual information you can receive at any given time is far less than "576 megapixels".

        • standeven 3 days ago
          The quoted latency and neuron counts can also be questioned, but my point stands: it's hard to compete with the human eye and brain with current (affordable) camera and processing hardware.
    • ajross 3 days ago
      I don't see how that follows. To first approximation zero human-at-fault accidents are due to "sensor failure". I mean, sure, somewhere out there a pedestrian was killed while walking in a white-out blizzard. But far, far more were hit by drivers looking at their phones with perfectly good eyes.
      • cozzyd 3 days ago
        Accidents due to sun in eyes or someone illegally driving with poor vision surely happen.
        • ajross 3 days ago
          Again though, with vanishing frequency in comparison to dumb mistakes and irresponsible decision-making. "Sensor failure" just isn't a major limit on traffic safety.
          • cozzyd 3 days ago
              Indeed, it seems like 9,000 accidents a year in the US can be attributed to sun glare, which is a tiny proportion of crashes.
            • bigfatkitten 3 days ago
              And that is usually based on what the driver said (often as an excuse for some other form of negligence), rather than being a provable cause.
  • ntonozzi 3 days ago
    Wow, it's amazing how much better the Subaru's automatic braking system is.

    I worry that hitting a pedestrian at night is the most likely way I'd seriously hurt somebody, and I want to encourage automakers to prioritize the safety of pedestrians and other road users, so Subarus will be high on my list the next time I'm shopping for a car.

    • ezfe 3 days ago
      Subaru is just casually shipping better vision-only TACC than any other car company (I include tesla in this comparison, when just activating TACC) and nobody is paying attention to the fact that front radar is just not needed.
      • numpad0 3 days ago
        Subaru is going back and forth between vision-only and radar assisted, and also going through suppliers and project structures for EyeSight-branded systems. Current camera unit is supplied by Veoneer in Sweden, slightly older ones were outsourced to Hitachi Astemo, before that were mostly internal R&D and so on.

        Current latest gen Subaru has a forward radar.

        • ezfe 2 days ago
          Subaru doesn't have forward radar except in the Solterra, which uses Toyota's system.

          The 2025 models have 3 forward cameras, no radar.

          • numpad0 2 days ago
            They definitely have 4 corner radars and rear ultrasound. They indeed seem to have removed front long distance radar on Veoneer camera setups.

            They don't mess with shipped cars, so they don't care about legacy compatibility; that makes it hard to keep track of changes.

            • ezfe 10 hours ago
              Sorry, yes they do have short range parking sensors on the front and back. I am referring to TACC/automatic braking sensors.
      • nytesky 3 days ago
        So all mainstream cars are vision only? No ranging like lidar?
        • bigfatkitten 3 days ago
          Just about everyone ships radar, not only for collision avoidance but adaptive cruise control.
          • ezfe 2 days ago
            Right, except for Subaru
            • nomel 2 days ago
              How can it perform in fog/snow?
        • Aloisius 3 days ago
          Honda and Mazda use radar as well.
        • ezfe 2 days ago
          Most cars use radar, Subaru does not
  • aidenn0 3 days ago
    If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse than normal clothing (the Mazda), for the Honda reflective strips didn't always help but don't seem to have hurt (judging by the body text they did on the order of 12 tests, so 9% vs 0% is 1/12 vs 0/12).
    • pfedak 3 days ago
      you're reading the table correctly but it's been reproduced incorrectly and had its title removed from the original source https://www.iihs.org/news/detail/high-visibility-clothing-ma...

      i'm not clear from that how many trials were run for each test condition, but the percentage is average speed reduction, not a chance for binary hit/not hit. edit: the paper pdf says up to three trials each.

      • aidenn0 3 days ago
        Wow, so much was lost when they mis-transcribed that table; thanks for the link.
    • Aloisius 3 days ago
      > If I'm reading the table correctly, there was only one vehicle for which reflective strips were worse

      No. It was all three vehicles. The table is average speed reduction.

      Reflective strips had a lower average speed reduction than black clothing in every case except for the Subaru at 0 and 20 lux and the Honda at 0 lux.

  • Waterluvian 3 days ago
    I test drove those very 3 models (2020 years) when buying and I found that everything “autonomous” about the Honda and Mazda felt just plain bad. I raised it with the Mazda guy who insisted the features were probably just not turned on, but when he checked they were on.

    The Subaru though was an entirely different class. It worked so well. The thing would drive me around curves on a somewhat windy country road. Comfortably brought me to a stop behind a stopped car. Etc. I bought the Subaru.

    According to an engineer I was in contact with, (at the time, maybe still true) the Subaru EyeSight system was their crown jewel system.

    • aidenn0 3 days ago
      I have a similar experience with my Hyundai Kona Electric.

      Even the automatic headlights are by far the worst I have ever used (and that includes a 1996 Ford Taurus). They only reliably stay off for a few hours around noon on overcast days, where the illumination is bright and diffuse. Otherwise they toggle on and off while I am driving through shadows (including self-shadowing when I turn away from the sun).

    • nytesky 3 days ago
      My Toyota is like your Subaru: it gently slows behind traffic, nearly stops for stop signs, and guides me within the lane markers.

      It's kind of terrible, because it teaches me bad habits: I depend on the car, and then when I drive a conventional one I'm more vulnerable, as I've been trained to let the car take over.

      • Waterluvian 3 days ago
        My only complaint is that mine wants to stay in the very middle of the lane on the highway when I’d rather it bias to the left, especially when passing transport trucks.

        And if I calmly but consistently hold left to keep it left a bit, the PID loop ramps up trying to center, and if I let go of the wheel it wants to swing into the right lane.

  • bentcorner 3 days ago
    The Honda and Mazda both use a single camera to visually detect pedestrians while the Subaru uses two cameras - perhaps this is the difference?
    • JumpCrisscross 3 days ago
        My Subaru also has radar. It's noticed things ahead of me in whiteout conditions that my eyes couldn't yet discern.
      • jeffbee 3 days ago
        Honda Sensing also includes a radar.
        • numpad0 3 days ago
          Actually the third gen Honda SENSING outsourced to Valeo is vision only. Older Bosch and NIDEC units were vision+radar.
          • jeffbee 3 days ago
            Interesting. I wonder if that made any difference here. My older Honda has the radar, but also the phantom braking.
  • andix 3 days ago
    Another thing that bothers me personally about emergency vehicles at night is the very bright emergency lights (blue in Europe).

    Especially in situations where a lot of emergency vehicles are parked with their lights on outside city lighting. It's often very disorienting, and for me it reduces visibility of the surroundings when passing by. Those traffic situations require additional caution, because there could be people and debris on the road, but the glare might reduce passing drivers' ability to properly see them.

    It's probably also a problem for car safety systems.

    Maybe the emergency lights and reflective strips got too good, to a point where they start causing harm. Emergency lights could easily adjust automatically to the ambient lighting conditions.

    (Mazda/Honda definitely need to get better; the data shows it's possible. Not arguing with that fact.)

    • dtgriscom 3 days ago
      Each morning I drive past a middle school as it starts its day, and there's a police car and officer guiding traffic. Sometimes the officer leaves the full flashing blue lights going, and it makes it really hard to see what's around it (e.g. the officer and/or students). Most of the time they leave it on non-flashing blue, which makes it a lot easier to see the environment.
    • bigfatkitten 3 days ago
      Some agencies have taken a smarter approach to this.

      Ambulances in the Australian Capital Territory use steady burn amber perimeter lights when stopped on roadsides. Makes the outline of the vehicle more conspicuous, tends not to encourage rubbernecking and doesn't dazzle people.

  • thesz 3 days ago
    Why are there no European, American and/or Chinese vehicles to compare to?

    https://www.carpro.com/blog/almost-all-new-vehicles-have-aut...

    Why only those three?

  • numpad0 3 days ago
    Does it make sense to always refer to these systems by car manufacturer? Lots of these "camera" units are self-contained computers that directly generate steering and braking commands, and they are constantly switched between lowest bidders.

    It's a bit like the yogurt that comes with an economy-class meal. Evaluations made for the cup on one flight might not apply to a flight on a different day, or to the return flight. Shouldn't it be the brand on the cup, not the one on the headrest, that gets named in reports?

    • GoToRO 2 days ago
      Just to add that it's the same for every single device: airbag, central locking, engine, transmission and so on. They are all bought from suppliers, and you often get the same device in cars from competing manufacturers. An expensive vehicle has up to 100 embedded devices, each with its own function.
      • numpad0 2 days ago
        Those are more deeply integrated. The camera unit tends to be closer to store-bought than other peripherals, and its theory of operation hasn't converged.
    • largbae 3 days ago
      As a consumer I can't choose which camera model, but I can choose which car manufacturer. So I should choose whichever car manufacturer chooses the best camera models, all else being equal.
  • hnburnsy 3 days ago
    To me, IIHS has done more harm than good. They continue to raise their ratings bar to help keep insurance outlays low (better crash protection), but at the expense of heavier and more expensive cars, and there has not really been a decrease in passenger deaths per mile (the 2023 rate per mile is equal to 2018's).
  • ck2 3 days ago
    There's no penalty if they get it wrong and kill someone.

    No exec that killed the spending on better/more sensors for more profit will be punished, certainly not jail.

    No coder or their manager that missed any bug will be punished or go to jail.

    So why would they even worry about it? One less thing.

    • Sohcahtoa82 3 days ago
      > No exec that killed the spending on better/more sensors for more profit will be punished, certainly not jail.

      Nor should they.

      These are not self-driving features, they're an assist to make up for shitty drivers and shitty pedestrians. These features basically only exist for the occasions when two dumbasses meet: An inattentive driver meets a pedestrian that doesn't check for oncoming traffic before crossing the road.

    • Aloisius 3 days ago
      The driver is responsible for their vehicle, not the technology designed to try to prevent them from running people over.

      Jailing developers because it's not perfect sounds like an excellent way to prevent new safety technology from ever being developed again.

      Considering we rarely jail drivers for killing people, the idea of jailing developers for failing to prevent drivers from killing people sounds nuts.

  • nytesky 3 days ago
    Would the cars have seen them if they wore light strips? I suspect the model is interpreting the reflective strips as the reflective striping of lane markings on the road.
  • hindsightbias 3 days ago
    This is why everyone needs to invest in my Safety Conewear clothing line.
  • schiffern 3 days ago
    [flagged]
    • xnx 3 days ago
      Isn't the difference that the Mazda and Honda system is a backup for a driver who is expected to be paying attention, while Tesla pretends that it can drive itself?
    • bobsomers 3 days ago
      Well, to be blunt when your CEO openly abuses ketamine and smokes blunts with Joe Rogan it invites a certain level of additional scrutiny.

      Perhaps if Tesla wants to be treated like a regular car company they should get a regular car company CEO?

      • ge96 3 days ago
        So funny how JRE went from his past/beliefs/usual audience to having someone like EM or MZ go on there. Ah well, I don't listen to him anymore, but yeah. Briefly I got into him; he had some fun guests on there/smaller names.

        edit: by beliefs I'm referring to guests claiming space is fake, and then a guy who literally manages rockets coming on

    • itishappy 3 days ago
      Agree in theory, but this particular publication looks different. It has had zero mentions of "Tesla" in recent (past 3+ months) headlines.
    • stefan_ 3 days ago
      You got yourself convinced of something based on the media report on IIHS testing that didn't even include Tesla. Tesla was not mentioned and was not evaluated. It's all in your head.
      • Terr_ 3 days ago
        But if it did happen then I could be outraged about it! :p
  • nunez 3 days ago
    Meanwhile Tesla Vision would have seen those dummies in near complete darkness and would still get absolutely roasted.