* they were positioned for stereopsis like the human visual system
* had 6 degrees of motion freedom like the human visual system
* were hyper-adaptive to lighting conditions like the human visual system
* had a significantly higher density of pixels per degree of arc in the focus region like the human visual system
* and were backed by a system capable of intuiting object inertia like the human visual system.
Tesla does none of those.
Reductio ad absurdum.
EVs are safer, so why haven't we banned ICE vehicles yet?
How about screening every food item in the supermarket for chemicals and pathogens? Surely that would minimize excess deaths?
Lots of things could be done "as best we can"; ask yourself why they haven't been.
On a different note, TIL that Tesla uses the raw camera sensor data to build an occupancy network, instead of running object detection on images, which makes it feel to me like Tesla isn't really doing "vision."
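To make the distinction concrete, here is a minimal sketch of the two output formats; the variable names and the tiny grid are purely illustrative assumptions, not Tesla's actual data structures:

```python
# Hypothetical sketch contrasting the two output formats.
# Names, shapes, and values are illustrative, not Tesla's actual API.

# Object detection: a per-image list of labeled 2D boxes.
detections = [
    {"label": "car", "box": (120, 80, 340, 210), "score": 0.92},
]

# Occupancy network: a 3D voxel grid of occupancy probabilities around
# the vehicle, with no object labels required.
X, Y, Z = 4, 4, 2  # tiny grid for illustration; real grids are much larger
occupancy = [[[0.0 for _ in range(Z)] for _ in range(Y)] for _ in range(X)]
occupancy[2][1][0] = 0.95  # "something solid here", regardless of what it is

# A planner can treat any voxel above a threshold as non-drivable space,
# without ever deciding whether it is a car, a cone, or debris:
blocked = [(x, y, z)
           for x in range(X) for y in range(Y) for z in range(Z)
           if occupancy[x][y][z] > 0.5]
print(blocked)  # [(2, 1, 0)]
```

The design point is that an occupancy grid answers "is this space free?" directly, sidestepping the failure mode where a detector misses an object class it was never trained on.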
https://insideevs.com/news/773202/volvo-ex90-software-issues...
-- Volvo is upgrading the central computer on all 2025 EX90s for free.
-- The company has spent over a year trying to squash software bugs in the EX90, but owners are still reporting serious issues and glitches.
-- One owner told InsideEVs that her EX90 has been a "dumpster fire inside a train wreck."

> One owner told InsideEVs that her EX90 has been a "dumpster fire inside a train wreck."
Quite the anecdote, surely Tesla has never had an owner say that!
Except that 1) that isn't possible with the cameras they're using, and 2) even if it were, it wouldn't be possible in an "open" environment, like "outside a vehicle."
Sadly that did not work for Tesla; FSD has been promised for "next year"... for the last 10 years...
"Tesla (TSLA) has to replace computer in ~4 million cars or compensate their owners" - https://electrek.co/2025/04/14/tesla-tsla-replace-computer-4...
Maybe focus on these:
https://news.ycombinator.com/item?id=46291500