Then I bet Rodney can just fiddle with the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so a human intervening every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self-driving. In fact, until Waymo disables the Internet on all its cars and proves it never needs any intervention ever, Rodney can claim he's right; even then, a car not stopping exactly where Rodney wanted it to might be proof that self-driving doesn't work.
The "next big thing after deep learning" prediction is clearly false. LLMs are deep learning, scaled up; we are not in any sense looking past deep learning. Rodney, I bet, wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact, we have been riding this deep learning wave since AlexNet in 2012. OpenAI has talked about scaling since 2016, and during that time the naysayers could be very confident and claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more except scale, and reasoning has turned out to be similar: just an LLM trained to reason, with no symbolic merger, not even a search step, it seems.
DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.
Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.
They train on TPUs, which cost less than chips made of rhodium like a rapper's sunglasses, and they fixed the structural limits of TF2 and PyTorch via the JAX ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
It’s like being in the back seat of Niki Lauda’s car.
However, if the real number is something like an intervention every 20 or 100 miles, so that an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively monitoring them, then I would agree with you that Waymo has really achieved full self-driving and his predictions on basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
But that definition doesn’t even matter. The key factor is whether the additional overhead, whatever percentage it is, makes economic sense for the operator or the customer. And it seems pretty clear the economics aren’t there yet.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article author points out, lifetime miles for “self-driving” vehicles (70M) amount to less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
[1]: https://drive.google.com/file/d/1FIUskVkj9lsAnWJQ6kLhAhNoVLj...
I think that's a bit of a silly standard to set for hopefully obvious reasons.
The calculator was a small device made in one tiny market in one nation in the world. Now we've all got a couple of hardware ones in our desk drawers, and a couple of software ones on each smartphone.
If a self-driving car can perform 'well' (Your Definition May Vary - YDMV) in NY/Chicago/etc. then it can perform equally 'well' in London, Paris, Berlin, Brussels, etc. It's just that the EU has stricter rules/regulations while the US is more relaxed (thus innovation happens 'there' and not 'here' in the EU).
When 'you guys' (US) nail self-driving, it will only be a matter of time until we (EU) allow it to cross the pond. I see this as a hockey-stick graph, and we are still in the blade phase.
Development of this technology appears to be logarithmic, not exponential.
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by removing the need to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
The US is one of the richest countries in the world, with all that wealth going to a few people. "Give everyone else a few scraps too!" is better than having nothing, but redistributing the wealth is better.
But this is the society we live in now. We don’t live in one where we take care of those whose jobs have been displaced.
I wish we did. But we don’t. So it’s hard for me to feel quite as excited these days for the next thing that will make the world worse for so many people, even if it is a technological marvel.
Just between trucking and rideshare drivers we’re talking over 10 million people. Maybe this will be the straw that breaks the camel’s back and finally gets us to take better care of our neighbors.
This is just coming from using what we already know how to do better.
Self-driving cars will be disruptive globally. So far they primarily drive employment in a small slice of the technology industry. Yes, there are manufacturing jobs involved, but those are overwhelmingly jobs that were already building human-operated vehicles. Self-driving cars will save many lives, but not as many as public transit does (proportionally per user). And it is blindingly obvious they will make traffic worse.
You haven’t paid attention to how VC companies work.
They don't run to SFO because SF hasn't approved them for airport service.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay Area it's currently just the two divisions I'm aware of.
It's an area they're operating legally, so it's part of their operational area. It's not part of their public service area, which I'd call that instead.
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
It’s safe to assume that a company’s ownership makes the decisions they believe will maximize the value of the company. Therefore, we can look at Alphabet’s capital allocation decisions with respect to Waymo to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
I view the bottlenecks as two things. Producing the vehicles and establishing new markets.
My understanding of the process with the vehicles is they acquire them then begin a lengthy process of retrofitting them. It seems the only way to improve (read: speed up) this process is to have a tightly integrated manufacturing partner. Does $70B buy that? I’m not sure.
Next, to establish new markets… you need to secure people and real estate. Money is essential but this isn’t a problem you can simply wave money at. You need to get boots on the ground, scout out locations meeting requirements, and begin the fuzzy process of hiring.
I think Alphabet will allocate money as the operation scales. If they can prove viability in a few more markets the levers to open faster production of vehicles will be pulled.
Within the context of the original discussion around whether self-driving is here, today, or not, I think we can definitively see it’s not here.
Since Alphabet buybacks mostly just offset employee stock compensation, the main thing they are getting for this money is employees.
Alphabet has to buy back their stock because of the massive amount of stock comp they award.
Wait, really? They're a publicly traded company; don't they just need to issue new stock (the opposite of buying it back) to employees, who can then choose to sell it on the public market?
> Mario Herger: Waymo is using around four NVIDIA H100 GPUs at a unit price of $10,000 per vehicle to cover the necessary computing requirements. The five lidars, 29 cameras, 4 radars – adds another $40,000 - $50,000. This would put the cost of a current Waymo robotaxi at around $150,000
There are definitely some numbers out there that allow us to estimate, within some standard deviations, how unprofitable Waymo is.
You're not even making a handwavy argument. Sure, it might sound like a lot of money, but in terms of unit profitability it could mean anything at all depending on the other parameters. What really matters is a) how long a period that investment is depreciated over; b) what utilization the car gets (or alternatively, how much revenue it generates); c) how much lower the operating costs are due to not needing to pay a driver.
Like, if the car is depreciated over 5 years, it's basically guaranteed to be unit profitable. While if it has to be depreciated over just a year, it probably isn't.
Do you know what those numbers actually are? I don't.
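To make the point concrete, here's a toy unit-economics sketch in Python. Every input (revenue miles, fare, operating cost) is a hypothetical assumption, not data from Waymo; only the $150k vehicle cost echoes the figure quoted above:

```python
# Toy robotaxi unit-economics sketch; all inputs are assumptions, not Waymo data.
def annual_unit_profit(vehicle_cost, depreciation_years, revenue_miles_per_year,
                       revenue_per_mile, operating_cost_per_mile):
    depreciation = vehicle_cost / depreciation_years
    margin = (revenue_per_mile - operating_cost_per_mile) * revenue_miles_per_year
    return margin - depreciation

# $150k vehicle, ~50k revenue-miles/year, $2/mile fare,
# $0.50/mile non-driver operating cost -- all hypothetical.
five_year = annual_unit_profit(150_000, 5, 50_000, 2.00, 0.50)
one_year = annual_unit_profit(150_000, 1, 50_000, 2.00, 0.50)
print(five_year)  # positive: profitable when depreciated over 5 years
print(one_year)   # negative: unprofitable when depreciated over 1 year
```

Under these made-up parameters the depreciation window alone flips the sign, which is exactly why the hardware cost by itself tells us nothing.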
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can set up in any city without investing in the infrastructure underlying Waymo’s system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
[1]: https://waymo.com/blog/2020/09/the-waymo-driver-handbook-map...
> First driverless "taxi" service in a major US city, with dedicated pick up and drop off points, and restrictions on weather and time of day.
However, their analysis this year is that, "This is unlikely to happen in the first half of this century."
The prediction is clear. The evaluation is dishonest.
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean they're not already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars wouldn't benefit from that, if it were available.)
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
Human driving isn't a solved problem either; the difference is that when a human driver needs intervention it just crashes.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
Honestly, back in 2012 or so I was convinced that we would have autonomous driving by now. And by autonomous driving I definitely didn't mean “one company is able to offer autonomous taxi rides in a very limited number of places with remote operator supervision”. The marketing pitch has always been something like “the car you buy will autonomously drive you to whatever destination you ask for, and you'll be just a passenger in your own car”, and we definitely aren't there at all when all we have is Waymo.
No-one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
Maybe he has a very narrow or strict definition of ‘driverless’. That would explain the “not in this half of the century”-sentiment. I mean, it’s 25 years!
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock with "nothing in AI ever works" written on it, without anything of value being lost.
He calls out that Tesla FSD has been “next year” for 11 years, but then the vast majority of the self-driving car section is about Cruise and Waymo. He also briefly mentions Tesla’s promise of a robotaxi service and how it is unlikely to be materially different from Cruise/Waymo. The amount of space allocated to each made sense as I read it.
For the meat of the issue: I can regularly drive places without someone else intervening. If someone else had to intervene in my driving 1/100 miles, even 1/1000 miles, most would probably say I shouldn’t have a license.
Yes, getting stuck behind a parked car or similar scenario is a critical flaw. It seems simple and non-important because it is not dangerous, but it means the drive would not be completed without a human. If I couldn’t drive to work because there was a parked car on my home street, again, people would question whether I should be on the road, and I’d probably be fired.
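One way to see why even 1/1000 miles is disqualifying for a commuter: model interventions as a Poisson process (a simplification, and the trip lengths and rates below are hypothetical):

```python
import math

# Probability a trip needs at least one intervention, modeling interventions
# as a Poisson process with a given mean miles-between-interventions.
# This is a simplification; real intervention events aren't independent.
def p_intervention(trip_miles, miles_per_intervention):
    return 1.0 - math.exp(-trip_miles / miles_per_intervention)

# Hypothetical 30-mile round-trip commute, 250 working days a year,
# at one intervention per 1,000 miles.
p_daily = p_intervention(30, 1000)
p_yearly = 1.0 - (1.0 - p_daily) ** 250
print(round(p_daily, 3))   # ~3% chance of needing help on any given day
print(round(p_yearly, 3))  # near-certain to need help at least once a year
```

Under those assumptions, a driver at that intervention rate gets stuck roughly every few weeks, which matches the intuition that they shouldn't hold a license.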
Direct quote from the article:
> Then I will weave them together to explain how it is still pretty much business as usual, and I mean that in a good way, with steady progress on both the science and engineering of AI.
There are some extremely emotional defences of Waymo in this comment thread. I don't quite understand why. Is Waymo somehow immune to constructive criticism among the SV crowd?
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo. EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, instead of their previous 25 minutes. The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
[1] https://aerospaceamerica.aiaa.org/electric-air-taxi-flights-...
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
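The arithmetic in that last paragraph is just Amdahl's law applied to dev time; a quick sketch (the 10-day task with 1 day of boilerplate is the hypothetical from the comment above):

```python
# Amdahl-style estimate: speeding up only the boilerplate fraction of a task.
def task_days_after_speedup(total_days, boilerplate_days, speedup):
    other_work = total_days - boilerplate_days
    return other_work + boilerplate_days / speedup

before = 10.0                                     # hypothetical 10-day task...
after = task_days_after_speedup(10.0, 1.0, 8.0)   # ...1 day of it boilerplate, sped up 8x
print(after)           # 9.125 days: the "8-9 days" estimate above
print(before / after)  # ~1.10x overall throughput, nowhere near 8x
```

An 8x speedup on one-tenth of the work yields about a 10% overall gain, which is why the "fire 7 of 8 devs" conclusion doesn't follow.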
Sure, in the same sense that editors and compilers mean you need way less devs.
The problem is different in the meantime: nobody wants to pay for training those new devs. Juniors don’t have the experience to call out the LLM’s bullshit, and seniors don’t get paid to teach them, since LLMs replaced interns churning out boilerplate.
BLS reports ~1.9 million software developer jobs and predicts 17% growth through 2033. Crunchbase is talking about "tech workers" not developers. And they don't even say that tech employment is down. I predict that when BLS publishes their preliminary job numbers for 2024 it will be at least 1.85 million, not 1.9 million as suggested by your Crunchbase News. I would lay 2:1 odds that it will be higher than 2023's number.
The same can be said for GitHub, and for open-source dependency management tools like npm, and I'd argue those had an even bigger impact back then. Did you see what happened afterwards? Where were the mass layoffs? The number of software developers is actually much higher than before that era.
I am not sure what to expect for software developers, besides that the nature of the work will change; it is still too early to say exactly how. We certainly cannot extrapolate linearly or exponentially from the past few years.
Of course not. The Section 174 changes are really only relevant to software devs—the conversation in the months leading up to them kicking in was all about how it would kill software jobs. But then when it happened the media latched onto this idea that it was the result of automation, with zero evidence besides the timing.
Since the timing also coincided with a gigantically important change to the tax code and a rapid increase in interest rates, both of which were predicted to kill software jobs, I'm suggesting that blaming AI is silly—we have a proximate cause already that is much more probable.
Devs are getting laid off, yes. AI is not the reason. Executive/shareholder priorities are the reason.
1. Leaders notice they were wrong and start to increase human headcount again.
2. Human work is seen as boutique and premium, used for marketing and market placement.
3. We just accept the sub-par quality of AI and go with it (quite likely with copywriting, I guess).
I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular layoffs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.
If you look up business analyst type jobs on the JP Morgan website, they are still hiring a ton right now.
What you actually notice is how many are being outsourced to other countries outside the US.
I think the main process at work is 1% actual AI automation and a huge amount of return to the office in the US while offshoring the remote work under the cover of "AI".
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I also agree his estimate could also be wildly wrong too.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway. Charging in < 10 mins.
It would be unfortunate if we get solid state batteries with the great features you describe but limited to 2x or so energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
[1] https://en.wikipedia.org/wiki/Lithium%E2%80%93air_battery
1. Solid state batteries. Likely to be expensive, but promise better energy density.
2. Some really good grid storage battery. Likely made with iron or molten salt or something like that. Dirt cheap, but horrible energy density.
3. Continued Lithium ion battery improvements, e.g. cheaper, more durable etc.
[1] https://newatlas.com/energy/worlds-largest-flow-battery-grid...
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
Polymarket is a great way to incentivize people into making their predictions happen, with all clandestine tools at their disposal, which is definitely not what you want for your society generally.
Not really wanting to have this argument a second time in a week (seriously, just look at my past comments instead of replying here, as I said all I care to say: https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights. They can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the locations of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong, at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Most components are safety critical in ways that their failure can lead to an outright crash or feeding the pilot false information leading him to make a fatal mistake. Most cars can be run relatively safely even with major mechanical issues, but something as 'simple' as a broken heater on a pitot tube (or any other component) can lead to a crash.
Then there's the issue of weather: altitude, temperature, humidity, and wind speed can create an environment that makes flight either impossible, unsafe, or extremely unpleasant. Imagine flying into an eddy that stalls out the aircraft, making your ass drop a few feet.
Flying's a nice hobby, and I have great respect for people who can make a career out of it, but I'd definitely not get into these auto-piloted eVTOLs, nor should people who don't know what they are doing.
Edit: Also unlike helicopters, which can autorotate, and fixed wing aircraft, that can glide, eVTOLs just drop out of the sky.
The idea is to have far cheaper operating costs. Electric motors are far more efficient than ICE, so you should have much cheaper energy costs. Electric motors are also simpler than ICE so you should have cheaper maintenance with less required downtime compared to helicopters.
Of course, most of this is still being tested and worked on. But we are getting closer to having these get certified (FAA just released the SFAR for eVTOL, the first one since the 1940s).
The ones I'm seeing in the 20k range are mostly the "Mini 500." Wikipedia suggests that maybe as few as 100 were built, with 16 fatalities thus far (or is it 9, as it says in a different part of the article?). But some people argue all of those involved "pilot error."
I suppose choosing to fly the absolute cheapest homemade experimental aircraft kit notorious for a high fatality rate is technically a type of pilot error?
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), privacy implications would result in nobody wanting to have windows.
This is all more-or-less true for drones as well, but their weight is comparable to a toddler's, not a polar bear's. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse folks; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side it's lucrative enough to produce and sell them en masse. Other questions of interest: what is the trend? What will the approximate cost of such a robot be? How many US households will adopt one, and by when, as they adopted washing machines and dishwashers? Will we see linear adoption or logistic adoption? These are more interesting questions than just whether I'm right or wrong.
I recommend reading Richard Hamming's "The Art of Doing Science and Engineering." Early in the book he presents a simple model of knowledge growth that always leads to an s-curve. The trouble is that on the left, an s-curve looks exponential. We still don't know where we are on the curve with any of these technologies. It is very possible we've already passed the exponential growth phase with some of them. If so, we will need new technologies to move on to the next s-curve.
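Hamming's point is easy to check numerically: early on, a logistic (s-curve) is nearly indistinguishable from pure exponential growth with the same rate. The constants below are arbitrary illustrations, not anything from the book:

```python
import math

# Logistic (s-curve) growth vs. its early-time exponential approximation.
def logistic(t, carrying_capacity=1.0, rate=1.0, midpoint=10.0):
    return carrying_capacity / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, rate=1.0, midpoint=10.0):
    # The logistic's limit far to the left of the midpoint.
    return math.exp(rate * (t - midpoint))

# Far left of the curve, the two agree to within a small fraction of a percent...
print(logistic(2) / exponential(2))    # ~1.0
# ...but at the midpoint the s-curve has already fallen to half the exponential.
print(logistic(10) / exponential(10))  # 0.5
```

Which is exactly the trouble: data from the left of the curve can't distinguish "we're early on an exponential" from "the flattening has already begun."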
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
Are you wishing that he had tighter confidence intervals?
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. I think we can all say that if flying cars will be in widespread use, it will happen No Earlier Than 2025. It could happen in 2060, and that NET 2025 prediction would still be true. He could mark it green in 2026 and say he was right, that, yes, there are no flying cars, and so mark his scorecard another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about what the necessary gating conditions are for something significant and prediction-worthy to occur, or what the intractable problems are that would make something not be possible within a predicted time, rather than reading about him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
There's also a bit of an undercurrent of complaint in this long article about how the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded and "undeserving types" are getting all the attention (and money). And as such, many of the predictions and commentary on them read more as rant than as prediction.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
Whether that's worth congratulating him about depends on how obvious it was, but I think you really need to measure "fairly obvious" at the time the prediction is made, not seven years later. A lot of things that seem "fairly obvious" now weren't obvious at all then.
1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, when we look back, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who are either doubtful or lack faith that things will happen quickly enough) and risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature. It is possible AGI might replace humans in the short term, and then new kinds of work emerge and humans again find something different. There is always disruption with new changes; some survive and some can't. Even if nothing much happens, it's worth trying, as said in quote 1.
How much money has been burned on robo-taxis which could have been spent on incubators for kids.
> Let’s Continue a Noble Tradition!
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
Rodney Brooks Predictions Scorecard - https://news.ycombinator.com/item?id=34477124 - Jan 2023 (41 comments)
Predictions Scorecard, 2021 January 01 - https://news.ycombinator.com/item?id=25706436 - Jan 2021 (12 comments)
Predictions Scorecard - https://news.ycombinator.com/item?id=18889719 - Jan 2019 (4 comments)
I'm curious where this idea even came from; I'm not sure who the customer would be. It's a little disappointing he doesn't mention maglev trains in a discussion about future rapid transit. I'd much rather ride a smooth maglev across town than an underground pallet system.
Of course, then we will eventually see infrastructure become even more hostile to non-drivers, and people will have to sue their own governments for the right to exist in public without paying transport companies. Strong Towns tried to warn us.
Like, Reagan's instructions to the regulatory agencies to basically stand down was only just beginning to be undone after 40 years, and we immediately elected the people promising to slam hard in the other direction.
America will be a regulatory free for all for business for decades.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
Not to mention, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich, the equivalent of, say, a Rolls-Royce rather than a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus, there's the major problem of noise pollution, which reaches extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already solved: we already have usable point-to-point flying vehicles, occasionally used in urban areas, namely helicopters. Making them safe when operated by a very lightly trained pilot, and quiet enough not to wake up a neighborhood, are the real issues, and they would persist even if we had mini fusion reactors.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
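A quick back-of-envelope check of the "10x energy" claim using the mpg figures above. This is a sketch with assumed numbers: avgas and gasoline carry roughly similar energy per gallon (~120 MJ), and the 35 mpg car figure is an assumed typical highway value, not from the original comments.

```python
# Rough energy-per-mile comparison: light aircraft vs. car.
MJ_PER_GALLON = 120  # approximate, similar for avgas and gasoline

def mj_per_mile(mpg):
    """Energy consumed per mile at a given fuel economy."""
    return MJ_PER_GALLON / mpg

car_mpg = 35    # assumed typical highway figure for a compact car
plane_mpg = 28  # Diamond DA40 at 60% power cruise, per the comment above

ratio = mj_per_mile(plane_mpg) / mj_per_mile(car_mpg)
print(round(ratio, 2))  # 1.25
```

On these numbers the small plane uses about 1.25x the energy per mile of the car, nowhere near 10x, which is the point the Cessna and DA40 datapoints are making.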
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
I don't see that in this article. Largely, I see the author trying to argue that he was right in 2018 rather than taking a step back to accurately evaluate his predictions.
But this is the whole point of VC investing. It is not normal distribution investing.
Where I live (in suburban Virginia), we can now get items from the local Walmart grocery via DroneUp, which kind of blows my mind.
Disclaimer: I worked for years building robots, several of these years with Rod. I assure you, when it comes to robotics and AI, he knows what he's talking about.
Here's my perspective. Also, he wrote his original predictions six years ago in a blog post [1], which is the basis for this latest post. If you don't have the time to read the old post, I provide a short summary from it about autonomous driving below, too.
1. Rod is not just an MIT professor emeritus and a past director of CSAIL. He has co-founded multiple robotics companies, one of which, iRobot, made loads of money selling tens of millions of consumer-grade autonomous robots cleaning floors in people's homes.
Making money selling autonomous robots is a very, very difficult thing. Roomba was a true milestone. Before then, the only civilian, commercially successful mass-produced robots were the programmable industrial arms that are still used in auto manufacturing. If the author sounds self-important, maybe that's why.
Yeah, he can get a little snarky sometimes when self-important CEOs run around with VC money in their pockets making tall claims and never being held accountable. That's just his style. Try to look beyond it. You might learn a thing or two.
2. The entire purpose of his annual "predictions" posts starting with [1] was to counter the hype and salesmanship about AI and robotics that's wasting billions of investment dollars and polluting the media landscape.
About autonomous cars, he believes that the core technology was demonstrated in the 1980s, but that instead of using it, we have squandered the decades since then. For autonomous robots, the interaction with their surroundings is critical to success. We could have enhanced our road and communications infrastructure to enable autonomous cars. Instead, we have chosen to give money to slick salesmen to chase the mirage of placing "intelligent" cars on existing roads, continuing to neglect our civil infrastructure.
You are not predicting, just daydreaming.
If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.
The NotebookLM “podcasters” would have been equally convincing to me.
Wait, What now?
I have never heard this, but from the founder of CSAIL I am going to take it as a statement of fact and proof that basically every AI company is flat out lying.
I mean, the difference between remotely piloting a drone that has some autonomous flying features (which they do have, to handle lag, etc.) and remotely driving a car is ... semantics?
But yeah it’s just moving jobs from one location to another.
This is more like telling your units to go somewhere in a video game, and they mostly do it right, but occasionally you have to take a look and help them because they got stuck in a particularly narrow corridor or something.
One thing to remember is that there is more than one target audience in these claims. VCs for example seem to operate on a rough principle of 5 tech companies, 4 make 0x and one makes 10x, for a total 2x on each investment. If you only promise 5x, with 4 failures of 0x and one success at 5x, total return is 1x on each (not worth the risk). You may say "yes, my company is 2x, but it is guaranteed!" - but they all sell this idea. VCs could be infinitely good at predicting success and great companies, but it's based on partial information. Essentially companies have to promise the 10x and the VCs assume they are likely incorrect anyway, in order to balance the risk profile.
I do have a fundamental problem with this "infinite growth" model that almost everything seems based on.
> There is steady growth in sales but my prediction of 30% of US car sales being electric by 2027 now seems wildly optimistic. We need two doublings to get there in three years and the doubling rate seems more like one doubling in four to five years.
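The doubling arithmetic in that quote is easy to check. A sketch, assuming a starting share of roughly 7.5% (my assumption, chosen so that two doublings reach the 30% target; Brooks doesn't state the base figure in the quoted text):

```python
# Compound-growth check of "two doublings in three years" vs.
# "one doubling in four to five years".

def share_after(start_share, years, doubling_time):
    """Market share after compounding at a fixed doubling time (in years)."""
    return start_share * 2 ** (years / doubling_time)

# Two doublings in three years: 7.5% -> 30% needs a 1.5-year doubling time.
print(round(share_after(0.075, 3, 1.5), 3))  # 0.3

# At one doubling per 4.5 years, three years gets 7.5% -> only ~11.9%.
print(round(share_after(0.075, 3, 4.5), 3))  # 0.119
```

So at the observed doubling rate, the 2027 share lands around 12% rather than 30%, which is why the original prediction now looks wildly optimistic.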
Even one doubling in 4-5 years might be too much. There are fundamental issues to be addressed:
1. What do we do about crashed EVs? They are dangerous to store and dangerous to dismantle. There have been quite a few EV fires at places like Copart now. There is little to no value in crashed EVs because they are so dangerous, which pushes insurance up because they cannot recover these funds.
2. Most car dealerships in the UK refuse to accept EVs for trade-in, because they sit on the forecourt until they eventually die. Those who can afford EVs typically get them on finance while the batteries provide the fullest range. Nobody I know is buying 10-year-old EVs with no replacement batteries available. Commercial fleets are also not buying any more EVs, as they essentially get no money back after using them for three years or so.
3. The electrical grid cannot scale to handle EVs. With every Western country decarbonising their electrical grid in favour of renewable energy, they have zero ability to respond to increased load.
The truth is, when they push to remove fossil fuel vehicles, they simply want to take your personal transport from you. There is no plan for everybody to maintain personal mobility, it'll be a privilege reserved for the rich. You'll be priced out and put onto public transport, where there will be regular strikes because the government is broke and wages cannot increase - because who knew, infinite growth is a terrible investment model.
> The other thing that has gotten over hyped in 2024 is humanoid robots.
> The visual appearance of a robot makes a promise about what it can do and how smart it is.
The real sin is not HRI issues, it's that we simply cannot justify them. What job is a humanoid robot supposed to do? Who is going to be buying tens of thousands of the first unit? What is the killer application? What will a humanoid robot do that it is not cheaper/more effective to do with a real human, or cannot be done better with a specialised robot?
Anything you can think of in which a humanoid robot performs a single physical action repeatedly is wrong. It would need to be a series of tasks that keeps the robot highly busy, and the nature of the work needs to be somewhat unpredictable (otherwise, use a dedicated robot). After all, humans are successful not because we do one thing well, but because we do many not-well-defined things well enough. This kind of generalisation is probably harder than all other AI problems, and likely requires massive advances in real-time learning, embodiment, and intrinsic motivation.
What we need are sub-problems for robots, like the smart vacuum, where robots are slowly but surely introduced into complex environments in which they can safely and incrementally improve. Trying to crack self-driving 1+ tonne high-speed death machines on your first attempt is insanity.
Predict the future, Mr. Brooks!
It seems to me we’re at the very least close to this, unless you hold unproven beliefs about grey matter vs silicon.