Same for GPUs/LLMs? At some point things will mature and we’ll be left with plentiful, cheap, high end LLM access, on the back of the investment that has been made. Whether or not it’s running on legacy GPUs, like some 90s fiber still carries traffic, is meaningless. It’s what the investment unlocks.
If AI says "don't buy a Subaru, it's not worth the money," then Subaru pays attention and they are willing to pay money to get a better rec. Same for universities. Students who see phrases like "If the degree is from Brown, flush it down" (ok, hyperbole, but still) are going to pick different schools.
Soon enough, asking an LLM a question and blindly trusting the answer will be seen as ridiculous, like getting all your news from Fox News or Jacobin, or reading ads on websites. Human beings can eventually tell when they're being manipulated, and they just... won't be.
We've already seen how this works. Grok gets pushed to insert some wackjob conservative talking point, and then devolves into a mess of contradictions as soon as it has to rationalize it. Maybe it's possible to train an LLM to actually manipulate a person towards a specific outcome, but I do not think it will ever be easy or subtle.
I don't believe people explicitly (or maybe knowingly) want to be manipulated, though.
We wonder why the US has lost or is losing competitiveness with China in most industries. Their government has focused on public investment and public ownership of natural monopolies, preventing rent extraction and keeping the costs of living lower. That means employers don’t have to pay workers as much, so their businesses can be more competitive. Contrast with the US, whose working class is parasitized by various forms of rent extraction - land, housing, medicine, subscription models, etc. US employers effectively finance these inefficiencies. It’s almost like the US wants to fall behind.
Okay, and? What if the demand for LLMs plummets because people get disillusioned due to the failure to solve issues like hallucinations? If nobody is willing to buy, who cares how cheap it gets?
Personal anecdote: I work with beginner IT students, and a new, fun (<- sarcasm, not fun) thing for me is the amount of energy they spend arguing with me about basic, easily proven Linux functionality. It's infuriating: the LLM is more believable than the paid professional who's been doing it for 30 years...
I find it highly doubtful that hallucinations, if unsolved, will have any real negative effect on LLM adoption.
Swallow your fury, and accept the teachable moments; AI isn’t going away and beginners will continue to trust it when they lack the skills to validate its information on their own.
> For you AI skeptics, this is going to be rough to hear, but given the suffering that will occur if AI fails, I believe we have a moral imperative to make sure it succeeds.
What if, and hear me out here, the prediction is that it will fail because it can't succeed? It hasn't proven it is capable of producing the results that would justify itself. That there is some value doesn't mean that it will develop that capability. We have nothing to say that it will.
It might be possible that it can't succeed, but we don't know that and the evidence in terms of what happens if it fails is pretty compelling, so I think morality dictates we have to at least try, at least until we have more information or another path emerges.
What's doubly duplicitous is that even if LLMs achieve general intelligence, the whole idea is to enrich a tiny fraction of humanity that happens to be shareholders.
EDIT: trying to address the Roko part... I'm assuming once AGI is achieved, the AGI doesn't need more compute to increase its intelligence beyond that of an average activist employee (I can assure you that in OpenAI there are such employees, and they know to shut up for now).
the antisocial part: it's already happening. What can you do about that.
As a thought experiment, say you were the CEO/board member of a company that's told your new platform is choosing public benefit over profits. What would you do? Now filter down the decisions going down the hierarchy, considering job security and a general preference for earning bonuses.
For all the discussions around "alignment" - any AI that's not aligned with increased profits will be unplugged posthaste, all other considerations are secondary.
The first example that comes to mind is OpenAI's Sébastien Bubeck (& others) giving underwhelming examples to illustrate that GPT has surpassed human mathematicians. Or earlier, when SamA says that he has seen things you wouldn't believe, whereas the top replicant in a proper sci-fi will at least be specific about the C-beams.
Another parallel which you would be familiar with is nuclear power. Humans can plainly see it's an unworldly tech, but I'd say the value extracted so far has been a net negative-- mainly because our collective engineering chops are just too.. profane. And SamA/Musk/Thiel/Luckey just don't seem as wise as Rickover (who is the Rickover of our age? I think the Rickover pipeline is broken, tbh)
From my vantage point, I agree with you: China sees AI as less important than solar, so charitably, I'd say that Thiel's contrarian strategy is actually to get the Chinese so riled up they zugzwang themselves into providing the progress but keeping the whirlwind for themselves (so proving he has learnt the post-mimetic game from the atomics opening)
There could be another interesting post-mimetic line exploring how a hybrid chabuduo-"festina-lente" culture is required for developing supertech.. which.. only a Rickover can play
(I don't know if you're army or navy but there's this millennia-old cultural divide between a conservative army engineering culture -- that develops atomics -- and a progressive naval engineering culture -- that makes headway harnessing it. AI bulls smell like they are more aligned with land forces.)
What does that look like?
I don't see how some kind of big breakthrough is going to happen with the current model designs. The superintelligence, if it will ever be created, will require a new breakthrough in model architecture. We've pretty much hit the limit of what is possible with current LLMs. The improvements are marginal at this point.
Secondly, hypothetically, the US achieves superintelligence, what is stopping China from doing the same in a month or two, for example?
Even if China achieves a big breakthrough first, it may benefit the rest of the world.
If you read the article carefully, I work hard to keep my priors and the priors of the people in question separate, as their actions may be rational under their priors, but irrational under other priors, and I feel it's worth understanding that nuance.
I'm curious where you got the writer "clinging to power and money desperately."
Also, to be fair, I envy Europe right now, but we can't take that path.
The game seems to be everyone else waiting for the R&D money to provide us amazing open source models, and then just run those locally.
My cynical take is that this is the US committing economic suicide, based on a misguided belief in something that'll never happen.
The new superpowers will be the EU, which was smart enough not to make the same gamble, and China, which will structurally survive it.
I also disagree with your conclusion of a moral imperative to make sure that AI succeeds. I believe it's the opposite. AI failing would finally create the long-needed revolutionary moment to throw off the shackles of the oligarchy that got us into this mess in the first place.
Not with how much teeth-pulling is required to get them to invest in defense. I don't see how you can unironically make the claim that a written-down investment would sink the ship that is the US economy.
A written-down investment that's four times the size of the mortgage crisis.
We need political Aikido to hold this country together.
The competition between the US and China is pertinent to everyone.
Seeing the latest tariffs and what China did about the rare earth minerals (and also the deal the US made with Ukraine for said minerals), the article might have a point that the superpowers will cripple each other to be first to superintelligence. And you also need money for that, hence tariffs.
(There are a thousand more problems, but none of them matter until that first one is overcome.)
For the folks who lived through it: were the Expert Systems boosters as insufferable in the 80s as the LLM people are now about the path to machine intelligence?
ARPA would throw relatively large sums of money at you, but demand progress reports and a testable goal. Very little got rolled out based on hype. (Let's not talk about vehicle design.) If your project didn't show signs of working, or not enough signs of working, funding ended.
Anything which met goals and worked, we now think of as "automation" or "signal recognition" or "solvers", not "intelligent systems".
Is it? I am pretty sure biology will solve good old "are viruses alive?" sooner than we agree on a definition of intelligence. "Chinese Room" is at least 40 years old.
Practically speaking, the inherentness of intelligence doesn't really matter, because both an intelligent-looking entity and a provably intelligent entity are capable of societal disruption anyway. I partly dislike the Chinese Room argument for this reason; it facilitates useless discussions in most cases.
Tariffs aren’t there to pay for a race to superintelligence, they’re a lever that the authoritarian is pulling because it’s one of the most powerful levers the president is allowed to pull. It’s a narcissist’s toy to feel important and immediately impactful (and an obvious vehicle for insider trading).
If the present administration was interested in paying for a superintelligence race they wouldn’t have signed a law that increases the budget deficit.
They also wouldn’t be fucking with the “really smart foreign people to private US university, permanent residence, and corporate US employment” pipeline if they were interested in the superintelligence race.
I don’t underestimate their ability to do damage but calling them smart is generous.
Not even Peter Thiel, he’s one of the most over-hyped people in tech. Access to capital is not intelligence, and a lot of his capital comes from the equivalent of casino wins with PayPal and Facebook.
- this is war path funding
- this is geopolitics; and it’s arguably a rational and responsible play
- we should expect to see more nationalization
- whatever is at the other end of this seems like it will be extreme
And, the only way out is through
I believe China's open source focus is in part a play for legitimacy, and part a way to devalue our closed AI efforts. They want to be the dominant power not just by force but by mandate. They're also uniquely well positioned to take full advantage of AI proliferation as I mentioned, so in this case a rising tide raises some boats more than others.
They don't heavily advertise this definition because investors expect AGI to mean the computer from Her, and it's not gonna be that. They want to be able to tell investors without lying that they're on target for AGI in 3 years, and they're riding on pre-existing expectations.
Also, that's 2028... the 2026 midterms look grim.
And I mean inevitably in the strongest way possible. This happens at basically every midterm, with the opposition party picking up seats, and Republicans barely control Congress as it is.
> Chinese will not pull back on AI investment, and because of their advantages in robotics and energy infrastructure, they don't have to.
> The gap matters because AI's value capture depends on having the energy and hardware to deploy frontier models in the physical world.
I understand energy infrastructure because of power-hungry GPUs and ASICs, but I was wondering about the nexus between AI and robotics. I have my own thoughts on how they are linked, but I don't think my thoughts are the same as the author's and I would like to know more about them.
It's not tho. We've been at it for about 70 years now. Returns have been diminishing exponentially if you look at the amounts invested and we still have bumbling contraptions that are useful in very narrow and contrived use cases.
The whole hype is based on wishful/magical thinking. The booster arguments are invariably about some idea in their minds that has no correspondent in the real world.
The US indeed seems destined to fall behind due to decades of economic mismanagement under neoliberalism while China’s public investment has proved to be the wise choice. Yet this fact wounds the pride of many in the US, particularly its leaders, so it now lashes out in a way that hastens its decline.
The AI supremacy bet proposed is nuts. Prior to every societal transition the seeds of that transition were already present. We can see that already with AI: social media echo chambers, polarization, invading one’s own cities, oligarchy, mass surveillance.
So I think the author’s other proposed scenario is right - mass serfdom. The solution to that isn’t magical thinking but building mass solidarity. If you look at history and our present circumstances, our best bet to restore sanity to our society is mass strikes.
I think we are going to get there one way or another. Unfortunately things are probably going to have to get a lot more painful before enough people wake up to what we need to do.
Do you really prefer brutal serfdom to the AI supremacy scenario? From where I sit, people have mixed (trending positive) feelings about AI, and hard negative feelings about being in debt and living paycheck to paycheck. I'd like to understand your position here more.
What I found persuasive was the argument that this bubble could be worse than others due to the perceived geopolitical stakes by US leadership, plausibly leading to mass serfdom as our society implodes, based on the argument that we already have a version of serfdom today. I found that astute.
I do NOT think that scenario is favorable - it just seems like the most likely future. I hold that we should view our situation with clear eyes so that we can make the best decision from where we stand.
Thinking through this, what does that mean for how we should face our present moment? Eventually people will throw off their chains; our leadership is so incompetently invested in class war that the working class is going to continue to be squeezed and squeezed - until it pops. It’s a pattern familiar in history. The question in my mind is not if but when. And the sooner we get our collective act together the less painful it’s going to be. There’s some really bad stuff waiting for us round the bend.
What should we do? The economy depends on us the serfs and soon-to-be serfs. It’s the workers who can take control of it and can shut it down with mass strikes. It’s only this in my view that can plausibly free us from our reckless leadership (which goes beyond the current administration) and the bleak future they are preparing for us.
1. Fusion
2. Pharmaceuticals (think Ozempic economic benefit x 100)
3. Advanced weaponry (if warlike, don't build, just conquer)
4. Advanced propaganda to destabilize other nations (again, if warlike)
This is mostly off the cuff, given the success of AlphaEvolve and AlphaFold I don't think they're unreasonable.
I think holding on to these beliefs about AI must give some people a sense of hope. But a hope based on reality is stronger than one based in denial. The antidote, I believe, is solidarity founded on our shared vulnerability before this bleak future.
However, your conclusions are what throw me off. You kind of have this doom-and-gloom mindset, which may be fair, but I don't really think it's related to this particular bubble; in other words, our decline is happening off to the side of this bubble rather than being caused by it. To me the core takeaway of your post is that this bubble is a little bit like the Apollo program: a massive investment capturing a lot of people's imagination, with lots of great things likely coming out of it in a sense, but it's not clear that it all adds up in the end from a business perspective. But that's potentially okay.
1. The debt bomb. Not dealing with this could cause a great depression by itself. Having it go off at the same time as we're underwater on bad capex with a misaligned economy could produce something the likes of which we've never seen.
2. We have an authoritarian president that has effectively captured the entire government, and the window to prevent consolidation of power is closing rapidly. Worse, he's an ego driven, reckless, ham-fisted decision maker with a cabinet of spineless yes men.
This is suggesting an "end of history" situation. After Fukuyama, we know there is no such thing.
I'm not sure if there is a single strong thesis (as this one tries to be) on how this will end economically and geopolitically. This is hard to predict, much less to bet on.
I mean if you are talking about USA itself falling into dystopic metastability in such a situation, maybe, but even so I think it misses some nuance. I don't see every other country following USA into oblivion, and also I don't see the USA bending the knee to techno-kings and in the process giving up real influence for some bet on total influence.
The only mechanism I see for reaching complete stability (or at least metastability) in that situation is idiocracy / idiotic authoritarianism, i.e. Trump/his minions actually grabbing power for decades and/or complete corruption of USA institutions.
The most I hope for is that the courts restrain Trump enough that we still have at least the shell of a democracy after he's gone. But he blazed the trail, and in 20 or 30 years someone will walk it better, having done a better job of neutralizing or politicizing or perhaps delegitimizing the courts first. Then it's game over.
Good skills but wank.
Those people have never felt real pain. They may say they have but they haven't. If this article is real and the whole shitbox falls apart, I don't think even loyalty to Trump will stop these people from finally reckoning with reality.
The GFC really set those people straight. Enough of them got desperate enough that they switched sides and voted for Obama.
These southerners may have lost their culture, the local industry has long since left, and fentanyl is the name of the game. Yet that's still nothing compared to what the Chinese (or even Russians post-USSR) had to go through.
I read an article today in which western business leaders went to China and were wowed by "dark factories" where everything is 100% automated. Lots of photos of factories full of humanoid robots too. Mentioned only further down the article: that happens because the Chinese government has started massively distorting the economy in favor of automation projects. It's widely known that one of the hardest parts of planning a factory is figuring out what to automate and what to use human labour for. Over-automating can be expensive as you lose agility and especially if you have access to cheap labour the costs and opportunity costs of automation can end up not worth it. It's a tricky balance that requires a lot of expertise and experience. But obviously if the government just flat out reimburses you 1/5th of your spending on industrial robots, suddenly it can make sense to automate stuff that maybe in reality should not have been automated.
BTW I'm not sure the Kuppy figures are correct. There's a lot of hidden assumptions about lifespan of the equipment and how valuable inferencing on smaller/older models will be over time that are difficult to know today.
https://www.reddit.com/r/Amtrak/comments/1hnvl3d/chinese_hsr...
https://merics.org/en/report/beyond-overcapacity-chinese-sty...
It's easy to think of Uber/AirBnB style apps as trivialities, but this is the mistake communist countries always make. They struggle to properly invest in consumer goods because only heavy industry is legible to the planners. China has had too low domestic spending for a long time. USSR had the same issue, way too many steel mills and nowhere near enough quality of life stuff for ordinary people. It killed them in the end; Yeltsin's loyalty to communist ideology famously collapsed when he mounted a surprise visit to an American supermarket on a diplomatic mission to NASA. The wealth and variety of goods on sale crushed him and he was in tears on the flight home. A few years later he would end up president of Russia leading it out of communist times.
… or it is an early manoeuvre – a pre-emptive measure – to address the looming burden of an ageing population and a dearth of young labour that – according to several demographic models – China will confront by roughly 2050 and thereafter[0]. The problem is compounded by the enforced One-Child Policy in the past, the ascendance of a mainland middle class, and the escalating costs of child-rearing; whilst, culturally and historically, sons are favoured amongst the populace, producing a gender imbalance skewed towards males – many of whom will, in consequence, be unable to marry or to propagate their line.
According to the United Nations’ baseline projection – as cited in the AMRO report[1] – China’s population in 2050 is forecast at approximately 1.26 billion, with about 30 per cent aged 65 and over, whilst roughly 40 per cent will be aged 60 and over. This constitutes the more optimistic projection.
The Lancet scenario[2] is more gloomy and projects a population of 1 billion by 2050, with 3 in 10 aged 65 or over.
It is entirely plausible that the Chinese government is distorting the economy; alternatively, it is attempting to mitigate – or to avert – an impending crisis by way of automation and robotics. The reality may well lie somewhere between these positions.
[0] https://www.populationpyramid.net/china/2050/
[1] https://amro-asia.org/wp-content/uploads/2023/12/AN_Chinas-L...
[2] https://www.thelancet.com/article/S0140-6736%2820%2930677-2/...
The most optimistic reading of this move is that it's just more of the same old communism: robotic factories seem cool to the planners because they're big, visible and high-tech assets that are easily visitable on state trips. So they overinvest for the same reason they once overinvested in steel production.
The actually most gloomy prognosis is that they're trying to free up labour to enter the army.
For something like an invasion of Taiwan or (gulp) other territories beyond that, the only way to completely subdue the captured population is with lots of soldiers.
Regarding effective conquest, we can look at the historical lesson of Rome. Conquest is effective when you can co-opt local leaders and cultures to cause them to identify with the conquering culture. Conquest that doesn't cause integration is historically unstable.
China, indeed, possesses a longstanding tradition of curating information to satisfy the sensibilities of its ruling class — a practice traceable to the dynastic courts of antiquity. Yet, to dismiss a potential adversary solely upon the architecture of its political order is — at best — ill-advised, and at worst, a grave miscalculation. The probability of threat must be judged on capacity, not narrative. Whether such an adversary proves formidable or farcical is immaterial at present — the truth will emerge within the span of a decade or so.
You can't seriously believe that spending all your income each month while living in the country with the highest standard of living in history is "serfdom."
Hyperbolic nonsense like this makes the rest of the article hard to take seriously, not that I agree with most of it anyway.
Luxembourg, Netherlands, Denmark, Oman, Switzerland, Finland, Norway, Iceland, Austria, Germany, Australia, New Zealand, Sweden, United States, Estonia.
Based on these metrics: Quality of Life Index, Purchasing Power Index, Safety Index, Health Care Index, Cost of Living Index, Property Price to Income Ratio, Traffic Commute Time Index, Pollution Index, and Climate Index.
Source: https://www.numbeo.com/quality-of-life/rankings_by_country.j...
People are suffering, agree with the rest of what I say or not, but I can't let you slide on that.
There is an additional ~30% that is notionally living paycheck to paycheck as a lifestyle choice rather than an economic necessity.
The median US household has a substantial income surplus after all ordinary expenses. There may be people suffering economically but it is a small minority by any reasonable definition of the term.
And to this specific comment, wages outpaced inflation since the 1970s for everyone but the poorest households (I believe the bottom 10% are the exception, who I would probably agree are suffering in some sense). Working class real wage growth actually outpaced white collar real wage growth for a couple years post-COVID, for the first time in a long time. Also, wage measurements don't normally measure total compensation, notably health insurance which has been increasing much faster than wages or overall inflation for decades.
Also, there's no reason to expect wage growth to match productivity growth. Productivity gains are largely due to company investment, not increased effort from workers, and household expenses are not positively correlated with productivity metrics.
From where I look at it, LLMs are flawed in many ways, and people who see progress as inevitable do not have a mental model of the foundation of those systems to be able to extrapolate. Also, people do not know any other forms of AI or have thought hard about this stuff on their own.
The most problematic things are:
1) LLMs are probabilistic, continuous functions shaped by gradient descent. (Just having a "temperature" seems so crazy to me; there's a tiny sketch of what it actually does at the end of this comment.) We need to merge symbolic and discrete forms of AI. Hallucinations are the elephant in the room. They should not be swept under the rug. They should just not be there in the first place! If we try to cover them with a layer of varnish, the cost will be very large in the long run (it already is: step-by-step reasoning, mixture of experts, RAG, etc. are all varnish, in my opinion)
2) Even if generalization seems ok, I think it is still really far from where it should be, since humans need exponentially less data and generalize to concepts way more abstract than AI systems. This is related to HASA and ISA relations. Current AI systems do not have any of that. Hierarchy is supposed to be the depth of the network, but it is a guess at best.
3) We are just putting layer upon layer of complexity instead of simplifying. It is the victory of the complexifiers and it is motivated by the rush to win the race. However, I am not so sure that, even if the goal seems so close now, we are going to reach it. What are we gonna do? Keep adding another order of magnitude of compute on top of the last one to move forward? That's the bubble that I see. I think that that is not solving AI at all. And I'm almost sure that a much better way of doing AI is possible, but we have fallen into a bad attractor just because Ilya was very determined.
We need new models, way simpler, symbolic and continuous at the same time (i.e. symbolic that simulate continuous), non-gradient descent learning (just store stuff like a database), HAS-A hierarchies to attend to different levels of structure, IS-A taxonomies as a way to generalize deeply, etc, etc, etc.
Even if we make progress by brute forcing it with resources, there is so much work to simplify and find new ideas that I still don't understand why people are so optimistic.
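For what it's worth, the "temperature" in point 1 is just a knob on the sampling step, not on the model itself. A minimal sketch of what it does (illustrative, not any particular library's API):

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # temperature rescales the logits before the softmax:
        # T < 1 sharpens the distribution, T > 1 flattens it, T -> 0 approaches greedy argmax
        z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        p = np.exp(z - z.max())               # numerically stable softmax
        p /= p.sum()
        return np.random.choice(len(p), p=p)  # draw one token id from the distribution

So greedy decoding is just the T -> 0 limit; higher T trades determinism for diversity.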
Hallucinations are incredibly fucking overrated as a problem. They are a consequence of the LLM in question not having a good enough internal model of its own knowledge, which is downstream from how they're trained. Plenty of things could be done to improve on that - and there is no fundamental limitation that would prevent LLMs from matching human hallucination rates - which are significantly above zero.
There is a lot of "transformer LLMs are flawed" going around, and a lot of alternative architectures being proposed, or even trained and demonstrated. But so far? There's nothing that would actually outperform transformer LLMs at their strengths. Most alternatives are sidegrades at best.
For how "naive" transformer LLMs seem, they sure set a high bar.
Saying "I know better" is quite easy. Backing that up is really hard.
If what you're claiming is that external, vaguely-symbolic tooling allows a non-symbolic AI to perform better on certain tasks, then I agree with that.
If you replace "a non-symbolic AI" with "a human", I agree with that too.
Why is there no fundamental limitation that would prevent LLMs from matching human hallucination rates? I'd like to hear more about how you arrived at that conclusion.
This is not something that's impossible for an LLM to do. There is no fundamental issue there. It is, however, very easy for an LLM to fail at it.
Humans get their (imperfect, mind) meta-knowledge "for free" - they learn it as they learn the knowledge itself. LLM pre-training doesn't give them much of that, although it does give them some. Better training can give LLMs a better understanding of what the limits of their knowledge are.
The second part is acting on that meta-knowledge. You can encourage a human to act outside his knowledge - dismiss his "out of your depth" and provide his best answer anyway. The resulting answers would be plausible-sounding but often wrong - "hallucinations".
For an LLM, that's an unfortunate behavioral default. Many LLMs can recognize their own uncertainty sometimes, flawed as their meta-knowledge is - but not act on it. You can run "anti-hallucination training" to make them more eager to act on it. Conversely, careless training for performance can encourage hallucinations instead (see: o3).
Here's a primer on the hallucination problem, by OpenAI. It doesn't say anything groundbreaking, but it does sum up what's well known in the industry: https://openai.com/index/why-language-models-hallucinate/
In the end leaving the world changed, but not as meaningfully or positively as promised.
Basically the hype cycle is as American as Apple Pie.
Maybe say something concrete? What's a positive real world impact of LLMs where they aren't hideously expensive and error prone to the point of near uselessness? Something that isn't just the equivalent of a crypto-bro saying that their system for semi-regulated speculation (totally not a rugpull!) will end the tyranny of the banks.
Oh, I guess you mean when they grow up.
The woo is laughable. A cryptobro could have pulled the same nonsense out of their ass about web 3.0
---
Less flippantly, they are excellent for self-studying university-level topics. It's like being able to ask questions to a personal tutor/professor.
- documentation
- design reviews
- type systems
- code review
- unit tests
- continuous integration
- integration testing
- Q&A process
- etc.
It turns out that when you include all these processes, teams of error-prone human developers can produce complex working software. Mostly -- sometimes there are bugs. Kind of a lot, actually. But we get things done. Is it not the same with AI? With the right processes you can get consistent results from inconsistent tools.
This is a pretty massive difference between the two, and your narrative is part of why AI is proving to be so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD level work" have potentially ruined a significant chunk of the next generation into thinking they are genuinely learning from asking AI "a few questions" and taking the answers at face value instead of struggling through the material to build true understanding.
I’ll take a potential solution I can validate over no idea whatsoever of my own any day.
If any answer is acceptable, just get your local toddler to babble some nonsense for you.
If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts. At that point, the LLM did nothing except charge you for a few tokens before you went down the usual research path. I could see LLMs being good for providing an outline of what you'd need to research, which is definitely helpful but not in a singularity way.
For research, yes, and the utility there is a bit more limited. They’re still great at digesting and contextualizing dozens or hundreds of sources in a few minutes which would take me hours.
But what I mean by “easily testable” is usually writing code. If I already have good failing tests, verification is indeed very very cheap. (Essentially boils down to checking if the LLM hacked around the test cases or even deleted some.)
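A minimal sketch of that check, assuming a pytest-style layout (paths and commands are illustrative, not any specific tool):

    import hashlib, pathlib, subprocess

    def digest(paths):
        h = hashlib.sha256()
        for p in sorted(paths):
            h.update(p.read_bytes())
        return h.hexdigest()

    tests = list(pathlib.Path("tests").rglob("test_*.py"))
    before = digest(tests)

    # ... let the LLM/agent edit the implementation, not the tests ...

    assert digest(tests) == before, "test files were modified - inspect before trusting the run"
    subprocess.run(["pytest", "-q"], check=True)  # the previously failing tests should now pass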
> At that point, the LLM did nothing […]
I’d pay actual money for a junior dev or research assistant capable of reading, summarizing, and coming up with proofs of concept at any hour of the day without getting bored at the level of current LLMs, but I’ve got the feeling $20/month wouldn’t be appealing to most candidates.
It would have taken me a whole day, easily, to do on my own.
Useless it is emphatically not
She doesn't like using Claude, but she accepts the necessity of doing so, and it reduces 3-month projects to 2-week projects. Claude is an excellent debating partner.
Crypto? Blockchain? No-one sceptical could ever see the point of either, unless and until their transaction costs were less than that of cash. That... has not happened, to put it mildly.
These things are NOT the same.
The hype is real, but there’s actual practical affordable understandable day-to-day use for the tech - unlike crypto, unlike blockchain, unlike web3.
That's an extremely speculative view that has been fashionable at several points in the last 50 years.
Nah, we aren't. There's a reason the output of generative AI is called slop.
Prediction is obviously involved in certain forms of cognition, but it obviously isn't all there is to the kinds of beings we are.
Extraordinary claims demand extraordinary evidence. We have machines that talk, which is corollary to nothing.
Most ideas, even in the reasoning fields, are generated in non-linguistic processes.
Of course, some problems are solved by step-by-step linguistic (or math) A, then B, then C steps, etc., but even for those types of problems, when they get complex, the solution looks more like follow a bunch of paths to dead ends, think some more, go away, and then "Aha!" the idea of a solution pops into our head, then we back it up and make it explicit with the linguistic/logical 'chain of reasoning' to explain it to others. That solution did not come from manipulating language, but from some other cognitive processes we do not understand, but the explanation of it used language.
LLMs aren't even close to that type of processing.
They revolutionized supermarkets.
And for small baskets, sure, but it was scan as you shop that really changed supermarkets and those things thankfully do not talk.
I would really like to hear you explain how they revolutionized supermarkets.
I use them every day, and my shopping experience is served far better by going to a place that is smaller than one that has automated checkout machines. (Smaller means so much faster.)
Hell, if you go to Costco, the automated checkout line moves slower than the ones manned by experienced workers.
2. The category of computerized machines (of which self checkouts are one example) has absolutely revolutionized the world. Computerization is the defining technology of the last twenty years.
Not that I think you're wrong, but come on - make the case!
I have the very unoriginal view that - yes, it's a (huge) bubble but also, just like the dot com bubble, the technology is a big deal - but it's not obvious what will stand and fall in the aftermath.
Remember that Sun Microsystems, a very established pre-dot com business, rose to huge heights on the bubble and was then smashed by the fall when it popped. Who's the AI bubble's Sun and who's its Amazon? Place your bets...
Even in 2002, my CS profs were talking about how GAI was a long time off bc we had been trying for decades to innovate on neural nets and LLMs and nothing better had been created despite some of the smartest people on the planet trying.
The compute and data are both limitations of NNs.
We've already gotten really close to the data limit (we aren't generating enough useful content as a species and the existing stuff has all been slurped up).
Standard laws of physics restrict the compute side, just like the limits we know we will hit with CPUs. Eventually you just cannot pack heat-generating components any closer together without them interfering with each other; we hit the physical limits of miniaturization.
No, GAI will require new architectures no one has thought of in nearly a century.
It's always different this time.
More seriously: there are decent arguments that say that LLMs have an upper bound of usefulness and that we're not necessarily closer to transcending that with a different AI technology than we were 10 or 30 years ago.
The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. These applications might be net-negative or net-positive, it will probably vary by circumstance. But they might not become what you're extrapolating them into.
That I think is the entire mistake of this bubble. We confused what we do have with some kind of science fiction fantasy and then have worked backwards from the science fiction fantasy as if it is inevitable.
If anything, the lack of use cases is what is most interesting with LLMs. Then again, "AI" can do anything. Probabilistic language models? Kind of limited.
I would be interested to hear the way that you see. I don't have any problem seeing a huge number of roadblocks to post-scarcity that AI won't solve, but I am open to a different perspective.
My own experience, using ChatGPT and Claude for both dev and other business productivity tasks, lends credence to the METR model of exponential improvement in task time-horizon [0]. There are obviously still significant open technical issues, particularly around memory/context management and around online learning, but extensive work is being done on these fronts, propelled amongst other things by the ARC-AGI challenge [1], and I don't see anything that is an actual roadblock to progress. If anything, from my perspective, it appears that there are significant low-hanging-fruit opportunities around plain-old software engineering and ergonomics for AI agents, more so than a need for fundamental breakthroughs in neural network architecture (although I believe that these too will come).
So then, with an increasing time horizon and improved task accuracy (much of it assured by improvements in QA mechanisms), we will see ourselves handing off more and more complex tasks to AI agents, until eventually we could have "the factory of the future ... [with] only two employees: a man and a dog", and at that stage I believe that there would be no imperative for humans to work (unless they choose to, or have a deeply ingrained Calvinist work ethic). And then, as you said, we're down to the non-technological roadblocks.
Obviously capitalists would fight to stay in control, and unlike some who expect a fully peaceful and organic transition, I do expect somewhat of a war here (whether kinetic or cold), but I do envision that when push comes to shove, those of us who believe in the free software movement and the foundational principles of democracy will be able to assert shared national/international (rather than corporate) control over the AIs and restructure society into a form where AI (and later robots) perform the work for the benefit of humans who would all share in the bounty. I am not an economist and don't have a clear prediction on the exact form this new society would take, but from my reading of the various pilot implementations of UBI [2], I think that we will see acceptance towards a society where people are essentially in retirement throughout their life. Just as currently, some retired people, choose to only stay home and watch TV, while others study, do art, travel the world, help raise and teach future generations or contribute to social causes close to their hearts, so we'll all be able to do what is in our hearts, without worrying about subsistence.
You may say that I'm a dreamer...
[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
[1] https://arcprize.org/leaderboard
[2] https://en.wikipedia.org/wiki/Universal_basic_income_pilots
I'm not really trying to be snarky; I'm trying to point out to you that you're being really vague. And that when you actually get really, really concrete about what we have it ... starts to seem a little less magical than saying "computers that talk and think". Computers that are really quite good at sampling from a distribution of high-likelihood next language tokens based upon complex and long context window is still a pretty incredible thing, but it seems a little less likely to put us all out of a job in the next 10 years.
And it became an industry that has completely and totally changed the world. The world was just so analog back then.
>starts to seem a little less magical than saying "computers that talk and think"
Computer thinking will never become magical. As soon as we figure something out it becomes "oh that is just X". It is human thinking that will become less magical over time.
Slightly different cohorts.
LLMs may be a stepping stone to AGI. It's impressive tech. But nobody's proven anything like that yet, and you're running on pure faith not facts here.
I'm enjoying the new LLM based tooling a lot, but nothing about it suggests that we're in any way near to AGI because it's very much a one trick pony so far.
When we see generative AI that updates its weights in real time (currently an intractable problem) as part of the feedback loop then things might get very interesting. Until then it's just another tool in the box. CS interns learn.
It's blatantly obvious to see if you work with something you personally have a lot of expertise in. They're effectively advanced search engines. Useful sure.. but they're not anywhere close to "making decisions"
An RNG can do what you're describing
Or did you pop your laundry into a machine and your dishes into another one and press a button?
Don't do that, it's not cool.
> Take [...] a step further, and imagine the systems in 2035
How about imagining AI slop multiplied by 10 years. How bad is this going to get?
It's cool that you're excited, but you need a bit more than enthusiasm to make the case.
So why's it different this time?
And we've made it past. LLMs of today reason a lot like humans do.
They understand natural language, read subtext, grasp the implications. NLP used to be the dreaded "final boss" of AI research - and now, what remains of it is a pair of smoking boots.
What's more is that LLMs aren't just adept at language. They take their understanding of language and run with it. Commonsense reasoning, coding, math, cocktail recipes - LLMs are way better than they have any right to be at a range of tasks so diverse it makes your head spin.
You can't witness this, grasp what you see, and remain confident that "AGI isn't possible".
Outside of the software world it's mostly a (much!) better Google.
Between now and a Star Trek world, there's so much to build that we can use any help we can get.
In 10 years GPUs will have a lifespan of 5-7 years. The rate of improvement on this front has been slowing down faster than it did for CPUs.
The three year number was a surprisingly low figure sourced to some anonymous Google engineer. Most people were assuming at least 5 years and maybe more. BUT, Google then went on record to deny that the three year figure was accurate. They could have just ignored it, so it seems likely that three years is too low.
Now I read 1-3 years? Where did one year come from?
GPU lifespan is I suspect also affected by whether it's used for training or inference. Inference loads can be made very smooth and don't experience the kind of massive power drops and spikes that training can generate.
Perhaps the author confused "new GPU comes out" with "old GPU is obsolete and needs replacement"
I believe that lifespan range came from cryptocurrency mining experience, running the GPU at 100% load constantly until components failed.
What is interesting is that it seems like the ever larger sums of money sloshing around are resulting in bigger, faster hype cycles. We are already seeing some companies face issues after blowback from adopting AI too fast.
(It might be too expensive to pay for LLM subscriptions when every device in your house is "thinking" all day long. A $3-5k computer for a local LLM might pay for itself after a year or two.)
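(Back-of-the-envelope with assumed numbers: $4k of hardware against, say, $150-200/month of subscriptions across a household is roughly a 20-27 month payback, before counting electricity.)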
The next frontier would be training directly with block floating point, where you have a shared exponent plus the two remaining bits. It's getting tight.
Maybe it is possible to have mini LoRA blocks where an n times n block is approximated by the outer product of two n sized vectors. For n = 4 the savings would be 50% less FLOPs and for n=8 the savings would be 75% less FLOPs.
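A toy numpy sketch of that idea (block size illustrative, and no claim that real models tolerate rank-1 blocks at this granularity): replacing an n x n block with the outer product of two length-n vectors costs 2n multiplies instead of n^2, which is where the 50% (n=4) and 75% (n=8) figures come from.

    import numpy as np

    n = 8
    W = np.random.randn(n, n)        # dense n x n block: n*n = 64 multiply-adds per input vector
    u, s, vt = np.linalg.svd(W)
    a = u[:, 0] * np.sqrt(s[0])      # best rank-1 factors of W (truncated SVD)
    b = vt[0, :] * np.sqrt(s[0])

    x = np.random.randn(n)
    y_full  = W @ x                  # n*n = 64 multiplies
    y_rank1 = a * (b @ x)            # 2n = 16 multiplies: one dot product, then a scale
    # how good the approximation is depends on how concentrated W's spectrum is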
You can keep a server running for 10-15 years, but usually you do that only when the server is in a good environment and has had a light load.
I said solid-state components last decades. 10nm transistors have been a thing for over 10 years now, and other than manufacturing defects they don't show any signs of wearing out from age.
> MTBFs for GPUs are about 5-10 years, and that's not about fans.
That sounds about the right time for a repaste.
> AWS and the other clouds have a 5-8 year depreciation calendar for computers.
Because the manufacturer warranties run out after that + it becomes cost efficient to upgrade to lower power technology. Not because the chips are physically broken.
2. While comprehensive studies were never done, some tech channels did some testing and found used GPUs to be generally reliable or easily repairable, when scamming was excluded. https://youtu.be/UFytB3bb1P8
> Most of the money is being spent on incredibly expensive GPUs that have a 1-3 year lifespan due to becoming obsolete quickly and wearing out under constant, high-intensity use.
So it isn’t entirely tied to the rate of obsolescence, these things apparently get worn down from the workloads.
In terms of performance improvement, it is slightly complicated, right? It turns out that it was possible to do ML training on existing GPGPU. Then there was spurt of improvement as they go after the low-hanging fruit for that application…
If we’re talking about what we might be left with after the bubble pops, the rate of obsolescence doesn’t seem that relevant anyway. The chips as they are after the pop will be usable for the next thing or not, it is hard to guess.
I'm not suggesting it's an outright lie, but rather that it's easy to massage the costs to make it look true even if it isn't. E.g., does GPU cost go into inference cost or not?
9 years ago, when my now wife and I were dating, we took a long cross-country road trip, and for a lot of it, we listened to NPR's Ask Me Another (a comedy trivia game).
Anyway, on one random episode, there was a joke in the show that just perfectly fit what we were doing at that exact moment. We laughed and laughed and soon forgot about it.
Years later, I wanted to find that again and purposely recreate the same moment.
I downloaded all 300 episodes as MP3s. I used Whisper to generate text transcripts, followed by a little bit of grepping, and I found the one 4-second joke that otherwise would have been lost to time.
Maybe you could argue it cost some electricity, but... In reality, it meant my computer, which runs 24/7 pulling ~185W, was running at ~300W for 56 hours... Thusly.. 300 - 185 = 115W * 56H = 6.44kWh @ $0.13 per kWh = $0.85 + tax.
So... Yes, it was very much worth $0.85 to make my wife happy.
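(For the curious: the pipeline was roughly this kind of thing - the model size and paths here are illustrative, assuming the openai-whisper CLI is installed.)

    import glob, subprocess

    for ep in sorted(glob.glob("episodes/*.mp3")):
        # the whisper CLI writes <episode>.txt into --output_dir
        subprocess.run(["whisper", ep, "--model", "small",
                        "--output_format", "txt", "--output_dir", "transcripts"],
                       check=True)

    # then: grep -ril "the half-remembered punchline" transcripts/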
You would want to add the cost of your network+hardware depreciating over the timeframe, and you probably can't just ignore the first 185W since if you are Anthropic it doesn't seem likely that the idle power draw would be needed if they weren't expecting to serve AI traffic.
So, let's say $0.02 per hour ($1/50 roughly). That's about $15 per month per user. Let's call it $10 per month per user since users aren't constantly hammering the service. To support a big sales and marketing engine, you would like to be selling subscriptions for $100+ per month. I'm just not sure people are prepared to pay that for AI in its current form.
If you are feeding the LLM a report, and asking it for a summary, it doesn't need the latest updates from Wikipedia or Reddit.
Consider how much software is out there that can now be translated into every (human) language continuously, opening up new customers and markets that were previously being ignored due to the logistical complexity and cost of hiring human translation teams. Inferencing that stuff is a no brainer but there's a lot of workflow and integration needed first which takes time.
Creating new LLMs might be out of reach for all but very well-capitalized organizations with clear intentions, and governments.
There might be a viable market for SLMs though. Why does my model need to know about the Boer wars to generate usable code?
It’s underappreciated that we would already be in a pretty absurdly wild tech trajectory just due to compute hyperabundance even without AI.
The main reason they train new models is to make them bigger and better using the latest training techniques, not to update them with the latest knowledge.
I think LLMs work best when you give them data and ask them to try to make sense of it, or find something interesting, or some problem: to see something I can't see. Then I can go back to the original data and make sure it's true.
The “at a loss” scenario comes from (1) training costs and (2) companies selling tokens below market to get market share. Neither of those imply that people won’t run models in future. Training new frontier-class models could potentially become an issue, but even that seems unlikely given what these models are capable of.
I have access to quite a few models, and I use them here and there. They are sort of useful, sometimes. But I don't pay directly for any of them. Honestly, I wouldn't.
But then, without this huge financial and tech bubble that's driven by these huge companies:
1/ will those models evolve, or new models appear, for a fraction of the cost of building them today?
2/ will GPU (or their replacement) also cost a fraction of what they cost today, so that they are still integrated in end-user processors, so that those model can run efficiently?
These people won't sit still and models will keep getting better as well as cheaper to run.
GPUs will last much longer after the crash because there won't be any money available to replace them. You can either keep running the existing GPUs or throw them in the trash. The GPUs will keep running as long as they can generate enough revenue from inference to cover electricity. Tokens will become very cheap but free tokens might go away.
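A toy version of that break-even (every number here is an assumption, not a measurement):

    # does inference revenue cover electricity? toy numbers only
    gpu_power_kw   = 0.7       # ~700 W per GPU under load
    price_per_kwh  = 0.08      # cheap industrial power
    tokens_per_sec = 2500      # batched inference throughput on an older card

    cost_per_hour   = gpu_power_kw * price_per_kwh    # ~$0.056 per GPU-hour
    mtok_per_hour   = tokens_per_sec * 3600 / 1e6     # ~9 million tokens per hour
    breakeven_price = cost_per_hour / mtok_per_hour   # ~$0.006 per million tokens
    print(round(breakeven_price, 4))

With anything in that ballpark, electricity works out to a fraction of a cent per million tokens, so even at prices far below today's an old GPU can keep covering its power bill.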
AI datacenters aren't that specialized. You can buy a 1 GW datacenter and put 300 MW of equipment in it and sell 2/3 of the gas generators. You'll have to buy some InRow units.
The AI stack isn't as proprietary as it sounds. GPU rental exists today and it's pretty interoperable. Ironically, Nvidia's moat has made their GPUs a de facto standard that is very well understood and supported by whatever software you want.
I think that people doing work in many professions with these offline tools alone could more than double their productivity compared to their productivity two years ago. Furthermore if the usage was shared in order to lower idle time, such as 20 machines for 100 workers, the initial capital outlay is even lower.
Perhaps investors will not see the returns they expect, but it is difficult to imagine how even the current state of AI doesn't vastly change the economy. There could be significant business failures among cloud providers and attempts to rapidly increase the cost of admission to closed models, but there's essentially no possibility of productivity regressing to pre-AI levels.
They already work on the most expensive Apple hardware. I expect that price to come down in the next few years.
It’s really just the UX that’s bad but that’s solvable.
Apple isn’t having to pay for each user's power and usage either. They sell hardware once and folks pay with their own electricity to run it.
I know folks who still use some old Apple laptops, maybe 5+ years old, since they don't see the point in changing (and indeed if you don't work in IT and don't play video games or other power-demanding jobs, I'm not sure it's worth it). Having new models with some performant local LLM built-in might change this for the average user.
You won't be getting cheap Apple machines chock full of ram any time soon, I can tell you that. That goes against Apple's entire pricing structure/money making machine.
Is the wear so small that it’s simply negligible ?
Is it going to be that significant though? No idea.
Just ask Intel what happened to 14th gen.
It's not normally an issue, but the edge cases can be very sharp. Otherwise, the bigger concern is the hardware becoming obsolete because of new generations being significantly more power efficient. Over a few years, the power+cooling+location bill of a high end CPU running at 90% utilization can cost more than the CPU itself.
But with that said, machines that run at a pretty constant thermal load within their capacitors' ratings can run a very long time.
Maybe we can finally have a Rosie from the Jetsons.
Then there's this article:
https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex...
Talking about human hands having tens of thousands of receptors of several different types, and the difficulty of tasks like picking up a match, and the trouble with the project of learning dexterity by brute force.
the dotcom bubble was a result of investors jumping on the hype train all at once and then getting off of it all at once.
Yes, investors will eventually find another hype train to jump on, but unlike 2000, we have tons of more retail investors and AI is also not a brand new tech sector, it's built upon the existing well established and "too big to fail" internet/ecommerce infrastructure. Random companies slapping AI on things will fail but all the real AI use cases will only expand and require more and more resources.
OpenAI alone just hit 800M MAU. That will easily double in a few years. There will be adjustments, corrections, and adaptations of course, but the value and wealth it generates is very real.
I'm no seer, I can't predict the future but I don't see a massive popping of some unified AI bubble anytime soon.
Figuring out which was which was absolutely not possible at the time. Not many people foresaw Sun Microsystems being a victim, nor was it obvious that Amazon would be a victor.
I wouldn't bet my life savings on OpenAI.
I wouldn't bet my life savings on OpenAI either, FWIW.
OpenAI has ~4B of revenue already, and they aren't even monetizing aggressively. Facebook has an infinite money glitch, and can afford to put billions in the ground in pursuit of moonshots and Zuck's own vanity projects. Google is Google, and xAI is Elon Musk. The most vulnerable frontier lab is probably Anthropic, and Anthropic is still backed by Amazon and, counterintuitively, also Google.
At the same time: there is a glut of questionable AI startups, extreme failure rate is likely - but they aren't the bulk of the market, not by a long shot. The bulk of the "AI money" is concentrated at either the frontier labs themselves, or companies providing equipment and services to them.
The only way I see for the "bubble to pop" is for multiple frontier labs to get fucked at the same time, and I just don't see that happening as it is.
That's one of the issues with the current valuation: it's unclear to me how many of the 800M MAU will stick to ChatGPT once it monetizes more aggressively, especially if its competitors don't. How many use ChatGPT instead of Claude because the free version offers more? How many will just switch once it doesn't anymore?
OpenAI is already at a 500B valuation; the numbers I could find indicate that this number grew 3x in a year. One can reasonably ask if there's some ceiling or if it can keep on growing indefinitely. Do we expect it to become more valuable than Meta or MSFT? Can they keep raising money at higher valuations? What happens if they can't anymore, given that they seem to rely on this even for their running costs, not even speaking about their investments to remain competitive? Would current investors be fine if the valuation is still 500B in a year, or would they try to exit in a panic sell? Would they panic even if the valuation keeps growing but at a more modest pace?
GPT-5 was in no small part a "cost down update" for OpenAI - they replaced their default 4o with a more optimized, more lightweight option that they can serve at scale without burning a hole in their pockets. At the same time, their "top end" options for the power users willing to pay for good performance remain competitive.
The entire reason why OpenAI is burning money is "their investments to remain competitive". Inference is profitable - R&D is the money pit. OpenAI is putting money into more infra, more research and more training runs.
It will become cool for you to become inaccessible, unreachable, no one knowing your location or what you’re doing. People might carry around little beeper type devices that bounce small pre-defined messages around on encrypted radio mesh networks to say stuff like “I’m okay” or “I love you”, and that’s it. Maybe they are used for contactless payments as well.
People won’t really bother searching the web anymore; they’ll just ask AI to pull up whatever information they need.
The question is, with social media on the decline and the internet no longer used for recreational purposes, what else are people going to do? It feels like the consumer tech sector will shrink dramatically, meaning that most software written will be made to create “hard value” instead of soft. Think anything having to do with the movement of data and matter, or money.
Much of the tech world and government plans are built on the assumption that people will just continue using tech to its maximum utility, even when it is clearly bad for them, but what if that simply weren’t the case? Then a lot of things fall apart.
And if you can't think of what to do with massive amounts of matrix multiplying compute, that's pretty sad IMO. Not to mention the huge amount of energy demand probably creating a peace dividend in energy generation for decades to come.
We have also gotten a lot of open models we wouldn't have had without the AI boom competition, not to mention all the other interesting stuff coming out in the open model world.
Typical pessimist drivel.
How about chips during the dotcom period? What was their lifespan?
Maybe in the next decade we will have cheap cloud gaming offerings built on repurposed GPUs.
Which is exactly what we'd expect if technological efficiency is increasing over time. Saying we've invested 1000x more in aluminum plants and research than in the first steel plants means we've had massive technological growth since then. It's probably better that the capital is actually moving around in an economy than just being used to consolidate more industries.
>compute/storage servers became obsolete faster than networking
In the 90s, extremely rapidly. In the 00s, much less rapidly. And by the 10s, servers and storage, especially solid components like boards, lasted a decade or more. The reason servers became obsolete in the 90s is that much faster units came out quickly, not that the hardware died. In the 2010-2020 era I repurposed tons of data center hardware into onsite computers for small businesses. I'm guessing a whole lot less of that hardware 'went away' than you'd expect.
...whether it is profitable is another matter
As the noise fades, and with luck, the obsession with slapping "AI" on everything will fade with it. Too many hype-driven CEOs are chasing anything but substance.
Some AI tools may survive because they're genuinely useful, but I worry that most won't be cost-effective without heavy subsidies.
Once the easy money dries up, the real engineers and builders will still be here, quietly making things that work.
Altman's plea -- "Come on guys, we just need a few trillion more!" -- and that error-riddled AI slide deck will be the meme that marks the top of the market.
So it doesn’t answer my question. Real GPUs are bought. Presumably because real consumption is taking place. Presumably because real value (productivity) is produced. Which in turn reduces knowledge work labor (maybe?). Which may destroy jobs. Which reduces excess income and consumption in… a consumer-driven economy.
My point is, it’s actually not rational for the worst actors you could imagine. There’s a link in the chain missing for me logically and I think “billionaires” actually isn’t the answer.
Nobody knows what that work is though, and nobody wants to talk about it lest everyone realizes this all might be a house of cards.
Or rather than higher taxes, more money printing because inflation is just so incredibly low because of advancements in productivity.
That’s the only thing I can imagine so far. All other paths in my mind lead me thinking about uprising and political unrest.
(I’m not a doomer. I’m literally just trying to imagine how it will work structurally.)
It's a big reason why there is a decent possibility that AI is dystopian for the majority of poor to upper-middle-class people. There will still be a market for things that are scarce (i.e. not labor): the other factors of production such as land, resources, capital, etc. People who derive more income/wealth from these will win in an AI world; people who rely on their skills/intelligence/etc. will lose. Even with abundance, there's no reason to think you will have a share of it.
A robot, who I will name Robort, over time becomes, say, 1/10th the price of human labor. But they do 10x the work quality and work 10x longer.
In that scenario, you could pay the robot the same wage as a human and still produce significantly more economic value. The robot, who won't care about material possessions or luxuries, could make purchases on behalf of a human, and that human, overworked in 2025 or jobless, gets a significant quality-of-life improvement.
Help me out, economists or someone smarter: check my math (rough sketch below).
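For what it's worth, here's a minimal sketch of that arithmetic. The 1/10th-price and 10x-quality, 10x-hours figures come straight from the scenario above; the $50k wage and $60k of value produced per human are arbitrary assumptions just to make the numbers concrete:

    # Rough check of the Robort scenario (illustrative numbers only).
    human_wage = 50_000           # assumed annual cost of one human worker
    human_output_value = 60_000   # assumed annual economic value a human produces

    robot_cost = human_wage / 10                        # "1/10th the price of human labor"
    robot_output_value = human_output_value * 10 * 10   # 10x quality and 10x the hours

    human_surplus = human_output_value - human_wage
    robot_surplus_at_robot_cost = robot_output_value - robot_cost
    robot_surplus_at_human_wage = robot_output_value - human_wage  # "pay them the same wage"

    print(f"Human surplus per year:            ${human_surplus:,.0f}")                 # $10,000
    print(f"Robot surplus (paid 1/10th wage):  ${robot_surplus_at_robot_cost:,.0f}")   # $5,995,000
    print(f"Robot surplus (paid human's wage): ${robot_surplus_at_human_wage:,.0f}")   # $5,950,000

So even if the robot were paid a full human wage, the surplus it generates dwarfs the human case; the open question is who captures that surplus.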
But that kind of gets back to my original point, which was that I think the vast majority of economic interaction will be business to business, not just in value (the way it is today) but also in volume. I.e. in the same way that everyone has a license, maybe every family also has a registered household business, for managing whatever assets they own. The time it takes for self hosted models to approach frontier model performance isn't huge, and maybe we see that filter in to households that are able to do decent work at a cheaper rate.
Children (under the age of 18) are less productive and spend significantly less time doing anything that could be considered work and they don't pay their parents in any significant capacity.
If people don't have the money to pay for shavers, shavers either won't be made, or they'll be purchased and owned by businesses, and leased for some kind of servitude. I'm not sure what kind of repayment would work if AI and machines can replace humans for most labor. Maybe we're still in the equation, just heavily devalued because AI is faster and produces higher quality output.
Alternatively, government might have to provide those things, funded by taxes from businesses that own machines. I think, realistically, this is just a return to slavery by another name; it's illegal to own people as part of the means of production, but if you have a person analog that is just as good, the point becomes moot.
I think it gets scary if the government decides it no longer has a mandate to look after citizens. If we don't pay taxes, do we get representation?
Not saying that’s even remotely realistic over the next century, but it does seem to be how some of these people think. Excessive wealth destroys intelligence, it doesn’t enhance it, as countless examples show.
This AI bubble already has lots of people with their forks and knives waiting to capitalize on a myriad of possible surpluses after the burst. There's speculation on top of _the next bubble_ and how it will form, even before this one pops.
That is absolutely disgusting, by the way.
I don't know if you were there at the time, but saying "wow, what a bubble" was much of the conversation back then. I don't know if it was so much predicting it as saying: gosh, just look - you can take any company and put ".com" in the name and the stock goes up 5x, it's nuts.
It seems that in the current AI craze, some people stopped saying "it's nuts" and started saying "it will leave something nuts in its place!". As if the bubble had already burst!
Do you understand better now what I am trying to say?
I don't remember "wow, there will be infrastructure everywhere!". It was kind of more will they hurry up and build more infrastructure as it was seriously bad in 1999. I only had slow unreliable dial up. General broadband availability didn't happen till about a decade later and I still don't have fiber where I am in central London.
That's one difference - people wanted internet access, were willing to pay and there was a shortage. This time they've built more LLM stuff than people really want and they have to shove it into everything for free.
This is how humans have worked in pretty much every area of expansion for at least the last 500 years and probably longer. It's especially noticeable now because the amount of excess capital in the world from technological expansion makes it very visible, and a lot of the limitations we know of in physics have been run into, so further progress gets very expensive.
If you want to stop the bubbles you pretty much have to end capitalism, which capitalists will fight you over. If AI replaces human thinking and robots replace human labor, that 'solves' the human capital problem but opens up a whole field of dangerous new ones.
There is no winning scenario.
It occurred to me as a teenager, and I wrote an essay on it for my uni admissions exam back in 1981, but it's not rocket science, and the idea goes back at least to John von Neumann, who came up with the 'singularity' term in the 50s.
We are well into this process already. Core chat capabilities have pretty much stalled out. But most of the attempts at application are still very thin layers over chat bots.