In other words, the AI hype comes at the cost of lower growth rates in other sectors of the economy?
It makes sense, since investor money is spent exactly once. If it goes to AI then it doesn't go elsewhere. And if it didn't go to AI then it would go elsewhere.
Sad for folks outside tech. But at least they can AI generate cat pictures now, and watch their tech friend use AI tooling to write software.
Not true. With this much money, and more coming, other roles will benefit too. The whole tech sector will grow - maybe less than AI-specific roles, but still.
It's better than the alternative of no or negative growth.
Some comments assume the funds exist and will be spent elsewhere in the US or the markets they refer to but maybe not. If no AI, US funds could invest in Vietnam (that is receiving a FTSE market upgrade), China, EU or just about anywhere else.
Don't assume you'd benefit by wishing AI gone. When you wanted crypto to go away, you got AI.
I can't imagine how great/amazing it must be there now with AI being 17x that. Everyone where I live is financially stressed. Living in a community that isn't would be so nice. I am sure people in the Bay Area would miss this current feeling of prosperity/success/optimism, the kind it takes to start a family, if the bubble burst.
Maybe without the AI it'd be worse? COVID over-hiring lingered for a long time. A lot of the layoffs were due to that. Perhaps without AI and with the over-supply of workers, things could have crashed.
I don’t think hyper growth bubble vs economic depression are the only two options.
But in general, there is a lot of extremely fascinating stuff, both to exploit and explore, for example in the context of traditional (non-transformer-based) ML/DL methods. The methods are getting better year by year, and the hardware needed to do anything useful is getting cheaper.
So while it's true that after the initial fascination developers might not be that interested in GenAI, and some even deliberately decided not to use these tools at all in order to keep their skills fresh and avoid problems related to constant review fatigue, many tech folks are interested in AI in a wider context and are getting good results.
There is so much framework stuff. When I started coding I could mostly concentrate on the algorithm; now I have to do so much framework stuff that I feel like telling the LLM only the actual algorithm, minus all the overhead, is much more "programming" than today's programming with the many, many layers of "stuff" wrapped around what I actually want to do.
I find it a bit ironic, though, that our tool out of the excessive complexity is an even more complex tool. Then again, looking at biology, programming in large, longer-running projects already felt like it had plenty of elements that reminded me of how evolution works, already leading to systems that are hard or even impossible to comprehend (like https://news.ycombinator.com/item?id=18442637), so the new direction is not that big of a surprise. We'll end up more like biology and medicine some day, with probabilistic methods and less direct knowledge and understanding of the ever more complex systems, and evolution of those systems based on "survival" (it does what it is supposed to most of the time, we can work around the bugs, there is no way to debug in detail; survival of the fittest: what doesn't work is thrown away, what passes the tests is released).
Small systems that are truly "engineered" and thought through will remain valuable, but increasingly complex systems go the route shown by these new tools.
I see this development as part of a path towards being able to create and deal with ever more complex systems, not, or only partially, to replace what we have to create current ones. That AI (and what will develop out of it) can be used to create current systems too is a (for some, or many, nice) side effect, but I see the main benefit in the start of a new method to deal with ever more complexity.
I only ever see single-person or single-team short-term experiences of LLM use for development. Obviously, since it is so new. But helping that one person, or even team, produce something that can be released is only part of what the tooling will have to do. Much more important will be the long term, like that decades-long software dev process they ended up with in my link above, where a lot of developers passing through over time could still extend it and fix issues years later. Right now this is solved in ways that are far from fun, with many developers staying on those teams only just long enough, or H1Bs who have little choice. If this could be done in a higher-level way, with whatever "AI for software dev" will turn into over the next few decades, it could help immensely.
I was wondering about this a lot. While it's a truism that generalities are always useful whereas specifics get deprecated with time, I was trying to dig deeper into why certain specifics age quickly whereas others seem to last.
I came up with the following:
* A good design that allows extending or building on top of it (UNIX, Kubernetes, HTML)
* Not being owned by a single company, no matter how big (negative examples: Silverlight, Flash, Delphi)
* Doing one thing, and being excellent at it (HAProxy)
* Just being good at what needs to be done in a given epoch, gaining universality, building ecosystem, and just flowing with it (Apache, Python)
Most things in the JS ecosystem are quite short-lived dead ends, so if I were a frontend engineer I might consider some shortcuts with LLMs, because what's the point of learning something that might not even exist a year from now? OTOH, it would be a bad architectural decision to use stuff that you can't be sure will still be supported 5 years from now, so...
No, I'm talking about core principles.
You just can't go on being incredibly specific. We already tried other approaches; "4th gen" languages were a thing in the 90s already, for example. I think the current, more statistical NN approach is more promising. Completely deterministic computing is harder to scale: either you introduce problems such as those seen in my example link over time, or it becomes non-deterministic and horrible to debug anyway, because the bigger the system, the more those other things dominate.
Again, this won't replace smaller software like we write today; this is for larger, ever longer-lasting and more complex systems, approaching bio-complexity. There is just no way to debug something huge line by line, and the benefits of modularization (and separation of the parts into components easier to handle) will be undermined by long-term development following changing goals.
Just look at the difference in complexity of software from a mere forty, or twenty, years ago and now. The majority of software was very young, and code size was measured in low megabytes. Systems explode in size, scale and complexity, and new stuff added over time is less likely to be added cleanly. Stuff will be "hacked on" somehow and released when it passes the tests well enough, just like in my example link, which was for a 1990s DB system, and it will only get worse.
We need very different tools, trying to do this with our current source code and debugging methods already is a nightmare (again, see that link and the work description). We might be better off embracing more fuzzy statistical and NN methods. We can still write smaller components in more deterministic ways of today.
The terminology of AI has a strong link with LLMs/GenAI. Quite reasonable.
As for code/architecture/infrastructure, I like those things too. You do have to shape your communications to the audience you are talking to, though. A lot of the products have eliminated the demand for such jobs, and it's a false elimination, so there will be an overcorrection later in a whipsaw, but by that time I'll have changed careers because the jobs weren't there. I'm an architect with 10+ years of experience, and not a single job offer in 2 years despite tens of thousands of submissions in that time.
If there is no economic opportunity you have to go where the jobs are. When executives play stupid games based on monopoly to drive wages down, they win stupid prizes.
Sometime around 2 years is the max time-frame before you get brain drain for these specialized fields, and when that happens those people stop contributing to the overall public parts of the sector entirely. They take their expertise, and use it for themselves only, because that is the only value it can provide and there's no winning when the economy becomes delusional and divorced from reality.
You have AI eliminating demand for specialized labor that requires at least 5 years of experience to operate competently; AI flooding the communication space with jammed speech (for hiring, through a mechanism similar to RNA interference); professional certificate providers retiring all benefits and long-lasting certificates that prove competency; and, on top of that, the coordinated layoffs by big tech in the same time period. That eliminates the certificate path as a viable option for the competent but not university-accredited.
You've got a dead industry. It's dead, but it doesn't know it yet. Such is the problem with chaotic whipsaws and cascading failures that occur on a lag. By the time the problem is recognized, it will be too late to correct (because of hysteresis).
Such aggregate stupidity in collapsing the labor pool is why there is a silent collapse going on in the industry, and why so many people cannot find work.
The level of work that can be expected now in such places, because of such ill will by industry, is abysmal.
Given such fierce loss and arbitrarily enforced competition, who in their right mind would actually design resilient infrastructure properly, knowing it will chug away for years without issue (making money all that time) after they lay you off with no intent toward maintenance?
A time is fast approaching where you won't find the people competent enough to know how to do the job right, at any price.
The capacity built for AI will remain even if AI gets written down, which in that case means cheaper energy and datacenter capacity for "the rest".
If AI makes as much money as these valuations assume, then it means productivity increased and growth did happen.
Money-printing does destroy the economy on a lag, specifically when production has such a catastrophic shortfall that the whole thing shows itself to be a Ponzi without tangible value or benefit. Value is based entirely in human action.
When that happens, it's basically slave labor silently extracted from the population through inflation. Such things historically always trigger other cascading failures.
All the parasites ended up dying in that book when all the intelligent people decided to just step back and let natural human tendency, and the momentum they created, do what it was always going to do. All those people were deluded into thinking they could just make a law without paying respect to the mechanics that made things work. Ayn Rand, though, is also quite deluded, in that her ideas don't work without eliminating inheritance and money-printing.
Slavery is intolerable in any form.
Indeed, the scale matters a lot. With that said, a Ponzi always has beneficiaries.
> Such things historically always trigger other cascading failures.
With the help of war or without, it's up to the beneficiaries.
This assumes only "the US" exists in this world. The AI hype would have been a thing regardless.
If the money doesn't go to the US it'd go to China or somewhere else. Just like with batteries you'd just lose the market if you don't invest.
So I'm nitpicking here, but this seems to me to be an important nitpick: This is not true because money circulates.
The distinction is that one should not stop at only the first-level effects, but look at the entire fields the money streams flow through.
It remains true that money flows within specific areas, but this happens on a higher level than the immediate first-level spending, so the analysis has to be different too.
This is not about the body or the land, but about the blood or the water flowing through them.
Because it describes exactly the point which GGP tried to make (source: that was me). The assumption is that AI growth is great because without it, look at how low the non-AI growth is! But that argument is flawed because the resources (manpower, materials, manufacturing, energy, ...) absent the AI hype would not vanish but be used for something else, so the growth in those other areas would be bigger. Granted, perhaps not as big (marginal gains and all that), but the painted picture is still skewed.
> since investor money is spent exactly once
In addition, I even pointed out that I was not posting about the main argument!
Quoting myself, again:
> So I'm nitpicking here
Eh... No?
The money flows on pretty quickly, unless they keep it as cash under their mattress.
Are you confusing it with any possible effects or work performed?
> generating enough returns
Ah I see the confusion.
No, they don't have to wait for "returns". We are talking about THAT money, the exact investor money they got. Which they will spend again. Even if they just keep it at the bank, it will be available to that bank to do something with.
The fate of that business does not matter, all that matters is that the investor money they took is going to continue to flow, outwards from them to whomever the company pays with that money. And so does everybody else.
The flow only stops when the money is lying around somewhere, and since that's the bank then at least the bank can do something with it. The flows truly stop when the whole economy is going down, when everybody cuts back on spending, and investments dry up too, so that the money is truly just lying around and nobody wants it.
Let's say this investment then raises the market cap of the company that was invested in by $5-10B. Loans are then taken out against that $5-10B of increased market cap. If the growth never materializes, then the investment ends up underwater and there are secondary effects that make the loan worth less than what was given; this is what creditworthiness is supposed to measure, but with hyperinflated values and lots of money as collateral, these loans are given great rates. Basically, what ends up happening is that extra virtual currency starts circulating, mirroring the increased market cap due to that initial $1B investment. But if the return on investment doesn't materialize, this $5-10B of currency just vanishes into thin air and dwarfs the original $1B investment. Additionally, there are leveraged secondary and tertiary bets that further magnify this circulating currency, and magnify the loss if things don't work out.
This is precisely what happened in the dot-com and banking-crisis bubbles. These things have secondary parasitic effects that are ballooned through leveraged investments into affecting the broader worldwide economy, crossing industries and whatnot.
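To make the orders of magnitude concrete, here is a rough sketch of that mechanism. Only the $1B investment and the $5-10B markup come from the scenario above; the loan-to-value ratio and leverage multiplier are invented purely for illustration:

```python
# Illustrative only: hypothetical LTV and leverage numbers; the $1B
# injection and $5-10B market-cap markup are from the comment above.
investment = 1e9      # actual cash injected
cap_markup = 7.5e9    # midpoint of the assumed $5-10B market-cap rise
loan_to_value = 0.5   # hypothetical LTV on loans against that markup
leverage = 3          # hypothetical magnification from secondary/tertiary bets

loans = cap_markup * loan_to_value  # credit created against paper value
exposure = cap_markup * leverage    # leveraged exposure riding on the markup

print(f"cash injected:      ${investment / 1e9:.1f}B")
print(f"market-cap markup:  ${cap_markup / 1e9:.1f}B")
print(f"loans against it:   ${loans / 1e9:.2f}B")
print(f"leveraged exposure: ${exposure / 1e9:.1f}B")
# If the growth never materializes, the markup and the leveraged exposure
# can evaporate, dwarfing the original $1B that keeps circulating as cash.
```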
Let's stop right there.
First, this already shows you are responding to something in your mind, not mine. What I wrote is plain to see. Second, the rest of that sentence confirms that fear. You are not arguing with me and what I wrote, but with some ghost in your own head.
> seem to imply that because the initial investment cash always ends up circulating, no value is ever destroyed
I did not say that *at all*. And whatever you yourself interpret into plain statements is... yourself speaking.
Paying my staff to light money on fire means that some of the money will circulate, and my brilliant idea to decarbonize the economy by replacing coal with printed currency will result in some benefits (much much less than the costs), but fundamentally it is not a productive endeavor.
The AI data center build-out is much more useful than purely lighting money on fire, but if we overpay for more than it is actually ultimately worth, then it still was a bad idea.
By the same argument, I don't think it's right to say without AI, GDP growth would be flat. That cash would likely go into other investments.
The question is if AI will make a return on the investment or not.
I think the comparison to stock buybacks is ludicrous.
Evidence?
Perhaps it depends on what you mean by "ecosystem". Within the AI tech/hype, sure, there it's good. But for the economy as a whole? Is it that good? There are probably some benefits, but do they match the current valuations?
Most likely they don't, because hype cycles inherently overvalue things for a while, since nobody knows what will stick. If things were not dramatically overvalued right now, then investors would not be acting in their best interest.
Will it match a certain valuation within a certain time period? I guess I don’t really care, I’m not an investor.
[1] https://www.mckinsey.com/capabilities/operations/our-insight...
Did they park it in bonds over the past 10 years? I doubt it. Interest rates were ~0, VC funding was crazy, the money taps were open. They would have been less open without these buybacks.
It's not necessarily 1:1, it seems people are more willing to spend the cash on AI than they were on other things. But it's not 1:0 either.
I love the way I keep getting downvoted on HN whenever I say anything about a subject I know a lot more about than the average person here (usually investment and finance).
In your comment I’m replying to, the first paragraph contributes meaningfully to the discussion; the second sounds a bit like lashing out, which might be why people react negatively.
> a subject I know a lot more about than the average person here
True or not, expressing it like that is just arrogant.
That's the inherent nature of these voting based online platforms. They reward what the user base wants to hear over what is correct. This is especially apparent in matters with inherent nuance and uncertainty.
I also didn't say that the money disappears; I said the money may just end up getting parked in the modern-day equivalent of dragon hoards. There are plenty of things to park idle money in, in hopes of returns.
I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.
> I love the way I keep getting downvoted
I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.
You miss the point. What does the person who sold the assets in which the money is "parked" do with it? If they buy bonds, what does the seller of the bonds do with the money? Leave it in a bank account? The bank will lend the money to someone who will either spend it (stimulating the economy) or reinvest it. They might buy another asset. If that asset is newly issued shares or bonds, the money will then go to a company planning to reinvest it. Anything else just pushes it another step to another person.
Eventually it goes back into the economy.
> I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.
The naive assumption is that "parked" money somehow leaves the economy. It's "parked" from the point of view of the person making the investment, but it has to go somewhere.
> I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.
How mature and charmingly expressed!
My point is that there is a lot of Dunning–Kruger in HN discussions of economics and finance.
I miss no point. I understand quite well that "parked money" still exists. What you ignore is that value is sometimes "destroyed": investments that underperform or go into the red, loans that default, crashes in real estate, etc. If money is invested in stocks, and the stocks' value goes into freefall, the nominal amount of money that existed previously in the economy is the same, and everyone is still poorer because of it.
The massive AI hype is massively pumping a bull run in a very small sector of the economy (whether this is a bubble is not something I can answer). A lot of money is moving around among a small subset of companies pumping one another's revenues in a circular fashion, which increases the value of those stocks (thus creating economic growth, real or otherwise). Without this mechanism, this value wouldn't have been created. It's anyone's guess how things would perform without it.
During a crash, the same amount of money that existed prior to the crash is still there. The crash still happens and the country can still go into recession.
> How mature and charmingly expressed!
Thank you. I, too, think I am mature and charming.
You also failed to understand that money put into an investment has to go somewhere pretty much immediately. If someone defaults on a loan they must have used the money, so someone else has it, so it is still not destroyed.
I explicitly did not talk about AI being a bubble.
You may understand economics (at least you say so). But reading comprehension is not your forte.
I'd think investment banks, law firms and management consultancies must be doing very well in this inflated market. They get a piece of every financing deal and consulting engagement that drives these billion-dollar spending decisions.
They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.
VC is actually a pretty effective vehicle to separate rich people from their money so society can try crazy things. You just don’t agree with this particular adventure and frankly there will never be the perfect alternative adventure.
> They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.
Another core issue in a hyper-financialised economy, the money doesn't get invested in what would be best for society, it keeps chasing either risky endeavours or parked in presumed safe assets (such as housing), inflating away asset classes. Where are the incentives to invest in foundational areas which do compound to make a society have resilient growth, like infrastructure: energy, transportation, etc.? It feels like without government direction to spend in big projects there's simply no appetite from the private, hyper-financialised, system to do the work, unless there's potential to get 10-100x returns. Is that good for society at large?
If hyper-financialisation is not helping the overall economy, and society to become better, why the hell should we still (in the Western world) pursue that? If all it can do is increasingly chase the extremes: hyper-growth vs extremely safe assets, is it any good anymore?
At least in the American context, everything from California High-Speed Rail to bloated defense spending has shown that VCs are much better shepherds of their own money.
It was designed to lose this ability and to lean on private enterprise to do anything, but in the past the government was able to roll out highways, go to the moon, and build dams, bridges and power plants.
If both the government and VCs are now unreliable to shepherd capital to direct it to the improvement of society at large you might need to rethink the whole system, and work to nudge it into a better path.
I’m saying letting them go to space or turning sand into intelligence is infinitely better than buying land and charging us rent (what most rich people have done in history).
I'm really struggling with this one. I think AI (generative and not) is surely fascinating. I should by rights be all up in it. I could definitely get it; I don't think I'm stupid in terms of technology. Regardless of the damage the laser-focus on one thing might (or might not) be doing to the rest of the industry (and the effect on society, which, to be honest, I am conflicted about whether we can blame on the technology), so much of it is all so... tedious and fake somehow, and just keeping up with the headlines is exhausting, let alone engaging with every LinkedIn "next huge thing that if you don't do you should find a bridge to live under soon".
It's like that guy who tells you constantly how rich and cool he is. Bro, if you're that cool, let your cool speak for itself. But I'm not sure I want to lend you a grand for your new car.
In my opinion AI makes visible more structural issues that were always there, but that we could ignore: people addicted to various stuff (be it substances, social networks or watching sports), social communities disappearing (no more going to the pub, stay at home with your TV), growing inequality (because capital is not taxed like labor), strange beliefs (all the conspiracy theories, which existed before), and others.
Find a use for the new tool to improve the situation if you can, but I think that hating tools can lead you on dark paths.
People have the same password across services. They share personal information. In a geopolitical climate as today's, where the currency of war is disruption, it can wreak havoc.
In that way AI is 1000% better than crypto or real estate speculation.
How is new housing supply to arrive?
Here's a thought experiment. If you could invest today in a company that will result in the destruction of your town (say, a mining company) but you got a 1% higher return compared to the alternatives, are you saying that's a perfect investment and you would do it right away?
And if the answer is yes to the above, you can make that a lot darker if you want. See how far your belief goes.
If you look at how the money people behaved since always, that's exactly what would happen.
Society works by balancing the interests of various groups, and there will be people with different opinions than yours, including some you don't like.
Then make them! I don’t have the confidence you have in this “scam”. For sure the valuations are inflated but I don’t think this infrastructure investment will be a waste.
At some point you have to grant people agency and accept that people spend money and time on things that are valuable to them.
We're talking about companies here, not people. And yet, it is kinda true: companies spend time and money on things that are valuable to the people in high-level positions in the company and on the board. But that isn't the same as companies spending time and money on things that are valuable to the company.
Nvidia just announced it's investing X billion dollars into OpenAI, which will turn around and spend 98% of that on Nvidia chips. So GDP rises, stocks rise, but actual free-market activity? Not so much.
I'm often reminded of the quote: "A man marries his housekeeper and that country’s GDP falls".
GDP is uncomfortably linked to granularity of measurement as well as the number of times money changes hands to accomplish a task. Split a pipeline over more businesses boundaries and suddenly GDP is "bigger" despite no change in value or utility.
Corporate income tax is usually a small slice of the overall taxation of a country.
What GDP measures (and what I meant) is the visible part of the economy that the government has knowledge of; and therefore can (not necessarily do) tax.
Does that matter? So many things these days aren't physically made in the US. So US companies don't get the profits and aren't needed?
The actual "made in" part is a small fraction of the total earnings. See how much an iPhone costs to make vs how much it gets sold for.
Why do you want it? It wouldn't have grown in the US. The protests would have erupted before anything happened. There's huge amounts of contamination, pollution, deaths, low wages, over-work, etc...
Also US focused on designing the electronics and the ecosystem around it. Are you saying there's no industry around AMD, Broadcom, Qualcomm, etc that are fabless but hire vast amounts of people?
> as opposed to inflating some virtual numbers for the US
You have App Store / SaaS (i.e. developers) and a lot more.
Would you prefer an average developer salary vs below-minimum US wage factory assembler?
The USA was the winner in all this, but apparently the people don't feel it. What might have gone wrong? (hint: wealth distribution, not manufacturing)
In this case, it is coming from investors like Microsoft, Softbank, Saudis, ChatGPT subscriptions, etc.
Inflation, unemployment, GDP.
It’s like we’re incapable of nuance on a societal level.
There are tons of more nuanced measures, many of which get reported too, but they don't make a splash as broad-overview economic health indicators, because the observations need to be paired with an explanation, and then you're in the business section and not on the front page.
If you're an economist or analyst, I have a feeling it's a little more nuanced than you're stating.
If you mean "for the average person" or "what the media reports for the average person", then, "duh".
The normal distribution of IQ in the general population would cause a general media company to limit the complexity of the data it reports.
For the media it might be a bit more true if you only consider short-term news media. But what do you expect them to do, a 3h "state of the economy" every day? Tons of other media cover lots of things.
Tons of articles posted here talk about economic topics other than inflation, GDP or unemployment.
As to banks, there's a LOT more stuff considered in monetary policy, like domestic consumption/investment, gov spending/taxes, net exports/imports.
Populist politics focuses on unemployment and inflation, but even Trump campaigned on reducing government spending, which in fact is bad for unemployment but apparently more important to Americans regardless.
It’s the idea that the big gubermant is wasting my money. How much money is actually spent where isn’t a part of that debate and effectively doesn’t matter. No amount of cuts can satisfy that concern.
There are tons of articles about movie stars too - often 80% rumors and not true. It's just noise and ways to get you clicks so has nothing to do with reality.
- https://news.ycombinator.com/item?id=45507195
- https://news.ycombinator.com/item?id=45500699
Unemployment can be manipulated by only including people who "actively search" for a job, tweaking the definition of "actively", or counting part-timers.
GDP is gamed by circular money movements.
Just turn the tariffs on and off again.
The metric doesn't matter as long as it goes up most of the time.
There's 10,000 people and 200 facilities in 32 countries and suddenly that's all worth 10% more, no, 6% less, wait, it's holding in a reverse double-sigma split backflip indication, it'll be up when the markets open. Head explode.gif.
Pretty sure most other countries do, too.
The issue isn't that there aren't good business models or value creation, it's that anything related to AI currently has valuations that are unsustainably high given the current limits of the technology. That leads to economic activity that just couldn't exist without those valuations. And once the hype cools down the valuations will go closer to reality, leaving a lot of companies unviable, and many more will have to severely cut back spending to remain viable.
Or maybe the entire AI market pulls a Tesla and just stays at valuations that aren't justified by normal market fundamentals. Or maybe the technology adapts fast enough to keep up with the hype and can actually deliver on everything that's promised. This doesn't have to come down, it's just very likely that it will.
That doesn't mean much, does it? Oracle is a huge company, not just a cloud provider. Companies often offer discounts or promotions; so? There could be plenty of managed services, managed databases, CRM and more that make up for it.
Whilst I'm not sure if Oracle's stock price is right, the memo felt more like a way to pressure the stock down for whatever reason.
September 2020: 2020 Tech Stock Bubble (Sunpointe Investments, tech in general)
https://sunpointeinvestments.com/2020-tech-stock-bubble/
August 2017: "When Will The Tech Bubble Burst?" (NY Times)
https://www.nytimes.com/2017/08/05/opinion/sunday/when-will-...
March 2015: Why This Tech Bubble is Worse Than the Tech Bubble of 2000 (Mark Cuban, bubble is social media)
https://blogmaverick.com/2015/03/04/why-this-tech-bubble-is-...
May 2011: The New Tech Bubble (Economist, bubble is "web companies")
https://memex.naughtons.org/where-angels-dare-to-tread-the-n...
And of course I haven't even bothered listing all the people who said cryptocurrency is a bubble. That's 15+ years of continuous bubble-calling.
At some point you have to say that if the thing supposedly inflating the tech bubble changes four or five times over a period that lasts a big chunk of a century, then maybe it's not a bubble, but simply that economic growth comes from only two sources: a bigger population and technological progress. If technological progress becomes concentrated in a "tech industry", then it's inevitable that people will start claiming there is a "tech bubble", even if that doesn't make much sense as a concept. It's sort of like claiming there's a "progress bubble". I mean, sure, there can and will be bankruptcies and retrenchments, as there always are at the frontier of progress. But that doesn't mean there's going to be a mega-collapse.
So if the action of datacentre building shows up as essentially the only GDP growth, but what later happens in the datacentres fails to take its place or exceed it, there will be a dip.
Whether LLMs grinding away can prop up all GDP growth from now on remains to be seen. People use them when they're free, but people also collected AOL discs for tree decorations because they were free.
There's obviously evidence people use LLMs. That's not necessarily the same as people paying a noticeable fraction of all their money to use them in perpetuity. And even if "normal" people do start taking out $50 subscriptions as a matter of course, commoditisation could push that price down as could "dumping" of cheap models from overseas. A breakthrough in being able to run "good enough" models very cheaply, or even locally, would also make expensive cloud AI subscriptions a hard sell. And expensive subscriptions are the only way this pans out.
It hasn't yet been shown that AI is a gas that will fill all the available capacity, and keep it filled. If bread were 10 times cheaper, would you buy 10x as much? That has more or less happened to food availability in the West over the last 200 years and OpenBread and BunVidia don't dominate the economy.
None of that is sure to happen, and maybe the AI hype train is right and huge expensive LLMs specifically drive a gigantic productivity boom¹ and are worth, say, 0.2*GDP forever. But if it isn't, and it turns out $5 a month gets you all people actually need, it's going to be untidy.
¹: in which case, why is GDP not growing from the AI we already have?
1. LLMs are so economically unfeasible that companies won't be able to make a profit, and investing in datacenters will turn out to be a bad bet because the AI companies themselves are a bubble.
2. LLMs will become so cheap that datacenters will be useless and people will just use local models, so investing in datacenters is a bad bet.
I see both positions in this thread, so which one is true?
The two positions there aren't really different; they're mostly that the profitability of AI can be eroded from several sides. One: the cost to run (power and hardware) being high and being unable to recover it from revenue. Two: commoditisation and efficiencies (which can also be about operational convenience rather than only about power) driving down costs and therefore also revenue, and being unable to compensate by selling more AI more cheaply. Three: AI didn't actually help as many people make money as hoped, and thus they don't want to pay, also depressing revenue.
In the middle is the three-axis happy AI place where costs are not too high, but also AI is too hard to have someone else do it cheaper, and it's useful enough to be paid for.
My guess is AI ending up roughly as impactful overall as cloud computing. A big industry, makes a lot of money, touches and enables very many businesses but hasn't replaced the entire economy, profitable especially if you can stake out a moat, with low-margin battlegrounds for the price-sensitive.
Maybe it just works out like CPUs or, as you said, cloud computing? CPUs got cheaper, demand increased, and more people use them, but everyone still made a profit.
Another good example is railways in the US. That was a huge, huge boom, around a fifth of GDP. No one knew what the railway-based economy of the 1900s would look like, but it was surely going to be spectacular. All that money! The speed! The cargo! All those people! Railways absolutely were a commerce multiplier, made stacks of revenue very quickly, and got investment from around the world to build, build, build. But eventually the (over)building was done, there were bankruptcies and consolidations, and it ultimately did not become the dominant industry. And yet it's still a big industry that enables a lot of other economic activity. Trains are still expensive to operate, but moving goods is pretty cheap. Obviously there's a natural physical monopoly at play there that AI doesn't have, so again, who knows.
Which leads to another thought that the AI investors, both commercial and national, should maybe eventually have: is there an automobile or airliner to their railway?
Will LLMs create a significant shift in productivity where its usage will create enough overall value to the economy to justify the hoovering of capital from other industries?
Those are the unknowns. I don't think many people are saying that LLMs have no value like NFTs; it's that the money being pushed onto this novelty is such an absurd amount that it might pull down everything else if/when it's discovered that there won't be enough value generated to compensate for trillions of USD in investments. Hence the comparison to the dot-com bubble: we came out of that with the infrastructure for the current internet, even though the crash was painful for a lot of people. Will we have a second internet-esque revolution after this whole thing crashes?
The technology is definitely valuable, and quite fantastic from a technical standpoint, is it going to lift all the other boats in the rest of the economy like the internet did though? No one can tell that yet.
This is what I want to challenge. At what point do you think people will pay more than it costs? Let's try to come up with a number, because the price of LLMs has dropped more than 30 times in the last 2 years.
It may continue to drop, and AI companies will continue to run at a loss because new things will be unlocked by the new efficiencies, and the same debate over LLM economics will continue.
I think it is already profitable and people are more than willing to pay for the actual costs.
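To put that claimed price drop in perspective, here's a quick back-of-the-envelope; the 30x-over-2-years figure is taken from the comment above at face value, and smooth compounding is an assumption:

```python
# Assumes the "30x cheaper over 2 years" claim above and smooth
# compounding; extrapolating forward is an assumption, not data.
total_drop = 30
years = 2
annual = total_drop ** (1 / years)  # implied yearly decline factor, ~5.5x
print(f"implied decline: ~{annual:.1f}x cheaper per year")
print(f"two more years at that rate: ~{total_drop * annual ** 2:.0f}x total")
```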
If people are willing to pay for the costs, where are the profitable AI companies?
At the moment everyone is trying their best to implement it, but it remains to be seen if it actually increases a company's profitability. Time will tell, and I think there are a lot of things obfuscating the reality right now that the market will eventually make clear.
Additionally the economics of training new models may not work out, another detail that's currently obfuscated by venture capital footing the bill.
People also like pizza. How many million weekly active consumers of pizza? How about rice?
Really, what people like here is cheap stuff and having a job that pays money to buy it. ChatGPT so far loses boatloads of money. Soon they jack up prices and add ads, and people realize that it was all trained on them and threatens their jobs. So really, right now ChatGPT is sweating hard to make itself too big to fail.
800m total users, 25m paying customers... Most people use free accounts and would likely never pay any substantial amount of money for them
https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
LLMs are very useful; I can't see myself walking back to the old way of doing things. But the amounts invested expect a major breakthrough that we are not anywhere near. It's a gamble, and that's what innovation is; but you gamble a small portion of your wealth, not your house, and certainly you do not gamble a huge country like the US on a single thing.
Why do you automatically assume that people won’t pay for it?
But there is a big difference here compared to most software companies: the product costs significant money per additional customer and per unit of usage.
There is a real product here. And you can likely earn money with it. But the question is "how much money?", and whether these huge data center investments will actually pay off.
I keep hearing this, but it is very unlikely to be true. The cost of LLMs has gone down by more than 30 times in the past year. How much more should it go down until you consider it economically feasible?
You can also do a simple analysis on the Anthropic Max plan and how it successively gets more and more limited. They don't have the OpenAI VC flow to burn, so I believe it's an indicator of what's to come, and I could of course be wrong.
If you want to question the fundamental economics of LLMs themselves, then how efficient should LLMs get before you decide they're cheap enough to be economically viable? 2 times more efficient? 10 times? They have already gotten more than 30 times more efficient over the last 2 years.
I don't think it's a matter of efficiency at current pricing but of increased pricing. It would be a lot more sane if the use cases became more advanced and fewer people used them, because building enormous data centers to house NVIDIA hardware so that people can chat their way to a recipe for chocolate cake is societal insanity.
This is not true for any LLM and not just Claude.
> I don’t think it’s a matter of efficiency at current pricing but increased pricing.
I don't know what this means - efficiency determines price.
> It would be a lot more sane if the use cases became more advanced and less people used them, because building enormous data centers to house NVIDIA hardware so that people can chat their way to a recipe for chocolate cake is societal insanity.
Do you think the same thing could have been said during the internet boom? "It would be more sane if the use cases became more advanced and fewer people used them, because building enormous data centers to house INTEL hardware so that people can use AOL is societal insanity."
Efficiency doesn't determine price, companies do. Efficiency gains tend to go to returns, not lower prices.
The internet scaled very well; AI hasn't so far. You can have millions of users on a single machine doing their business, but you need a lot of square footage for millions of users working with LLMs. It's not even in the same ballpark.
Did we build many single-company data centers the scale of Manhattan before AI?
Then I think we agree that while the cost remained the same, the performance dramatically increased.
FWIW Sonnet 3.7 costs 2.5x as much as GPT-5 while also being slightly worse.
As for OpenAI I don’t think anyone is working on the API side of things since GPT-5 has had months of extreme latency issues.
I think it will only be economically sound as a business if you're Google and can start serving ads OR when we switch over from GPUs to wildly more efficient TPUs/ASICs
All the data center CapEx is going into compute that will be obsolete once that happens
That alone will be a monumental shakeup for the industry.
0.
Ferrari is a luxury sports brand. What's the point of it if it flooded the streets?
How to say you don't own a Ferrari without saying you don't own a Ferrari.
It’s actually quite interesting to see these contradictory positions play out:
1. LLMs are useless and everyone is making a stupid bet on them. The users of LLMs are fooled into using them and the companies are fooled into betting on them.
2. LLMs are getting so cheap that the investments into data centers won't pay off, because apparently they will get good enough to run on your phone.
3. LLMs are bad: bad for the environment, bad for the brain, bad because they displace workers and bad because they make rich people richer.
4. AI is only kept up because there’s a conspiracy to keep it propped up by Nvidia, oracle, OpenAI (something something circular economy)
5. AI is so powerful that it should not be built or humanity would go extinct
B) You're missing a few things like:
1. The hardware overhang of edge compute (especially phones) may make the centralized compute investments irrelevant as more efficient LLMs (or whatever replaces them) are released.
2. Hardware depreciates quickly. Are these massive data centers really going to earn their money back before a more efficient architecture makes them obsolete? Look at all the NPUs on phones which are useless with most current LLMs due to insufficient RAM. Maybe analogue compute takes off, or giant FPGAs, which can do on a single board what is done with a rack at the moment. We are nowhere near a stable model architecture, or a stable optimal compute architecture. Follow the trajectory of Bitcoin and Ethereum mining here to see what we can expect.
3. How does one company earn back its R&D when, the moment a model is released, competitors put out comparable models within 6 months, possibly by using the very service that was provided to generate training data?
GDP per capita is more important than overall GDP to see the trend in prosperity, IMHO.
https://www.bloomberg.com/news/features/2025-10-07/openai-s-...
This is another way that bubbles form: a cabal of cross-dealing giants without solid revenue to ground the valuations is a very scary position.
I believe that a lot of AI is real, but the realness of AI's impact on the economy does not prevent a bubble. The dot com bubble didn't make the internet any less real or impactful on everyone's lives. So it feels like very scary times ahead.
Also, the devaluation of the dollar is an extremely tricky situation for the US. Morgan Stanley puts it at 10% less value in 2025, and another 10% drop by the end of 2026:
https://www.morganstanley.com/insights/articles/us-dollar-de...
I was never scared by the inflation during Biden's term, because it was global, the US was doing so much better than the rest of the world, and it seemed like we were on track to put the economy in the right position. But now it feels like the US is intentionally entering recession and choosing a future of poverty.
> the devaluation of the dollar is an extremely tricky situation for the US
For many, the revaluation of the dollar is actually the scary scenario.
See for instance this (maybe too) subtle analysis from Yanis Varoufakis: https://unherd.com/2025/02/why-trumps-tariffs-are-a-masterpl...
AI sucked the air out of the room for almost no return; the crash is going to be something to behold.
To see if that is too much or not, we have to put that number in relation to the value those datacenters will create in the future.
If global GDP is $100T and labor is 50% of that, then the current TAM for intelligence is around $50T.
How much of that has to be automated per year to justify $400B of investment? For a 10% ROI it would be around 1%, right? But the datacenters will not be the only cost of generating artificial work. We also need energy, software and robots. So let's say 2%.
So it comes down to the question whether those datacenters built for $400B will automate 2% of global GDP in the foreseeable future.
And there is another option: that the TAM increases. That we use the new possibilities we have to build more products and services, and see global GDP grow. 2% of AI-driven global GDP growth would also justify the $400B datacenter buildout.
So let's think about a mix: 1% labor automation and 1% GDP growth per year via AI. That would be needed to justify continued spending of $400B per year for the AI buildout.
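Here is the arithmetic above restated numerically; it's a sketch using only the figures given in this comment, and reading "10% ROI" as "revenue must cover the spend plus a 10% return" is one plausible interpretation, not a model:

```python
# All figures come from the comment above, not verified data.
global_gdp = 100e12             # $100T global GDP
labor_share = 0.5               # labor ~50% of GDP
tam = global_gdp * labor_share  # ~$50T "TAM for intelligence"

buildout = 400e9                # $400B/yr datacenter spend
target_roi = 0.10               # 10% return target

required = buildout * (1 + target_roi)  # ~$440B/yr to cover spend + return
share = required / tam                  # ~0.9%, i.e. "around 1%"

print(f"TAM: ${tam / 1e12:.0f}T")
print(f"required revenue: ${required / 1e9:.0f}B/yr = {share:.1%} of TAM")
# Doubling to ~2% covers energy, software and robots on top of the
# datacenters, per the comment's rule of thumb; a mix of 1% labor
# automation plus 1% AI-driven GDP growth is one way to get there.
```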