The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result: OpenAI, Anthropic, Google, Meta, DeepSeek, etc. There's no evidence of a technological moat or a competitive advantage at any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
I may be way off, but to me it seems like the AI bubble is largely a way to siphon money from institutional investors to the tech industry (and try to get away with it by proxying the investments) based on a volatile and unpredictable promise?
The existential risk is in companies smoking the AI crackpipe that sama (begging your pardon) handed them, thinking it feels great and then projecting[1] that every investment will hit like the first, and continuing to buy the <EXPLETIVE> crack that they can't afford, and their investors can't afford, and their clients can't afford, their vendors can't afford, the grid can't afford, the planet can't afford, the American people can't afford, and sama[2] can't afford, _because it's <EXPLETIVE> crack_!
The wise will shut up and take the win on the slop com bubble.
[1]: https://en.wikipedia.org/wiki/Chasing_the_dragon
[2]: For those following along at home, sama is Sam Altman, he was a part of the Y Combinator community a while back: https://news.ycombinator.com/threads?id=sama
Also, classifying business expenses as "cost to the tax payer" seems less than useful, unless you are a proponent of simply taxing gross receipts. Which has its merits, but then the discussion is about taxing gross receipts versus income with at least some deductible expenses, not anything to do with OpenAI.
It's the dumb-as-rocks MBAs that will go head first into the 5% chance deal.
This is only true if the probability distributions for the values of the individual deals are largely uncorrelated (or better yet, mostly stochastically independent).
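A quick way to see why the correlation matters: with illustrative numbers of my own (a 1-unit deal that pays 30x with 5% probability, not figures from the thread), independent deals almost never all fail at once, while perfectly correlated ones usually do, despite identical expected values. A rough simulation sketch:

```python
import random

random.seed(0)

# Illustrative numbers (assumptions, not from the thread):
# each deal costs 1 unit and pays out 30x with 5% probability.
N_DEALS = 100
P_WIN, PAYOUT = 0.05, 30.0

def portfolio_return(correlated: bool) -> float:
    """Total payout of N_DEALS one-unit bets, minus their cost."""
    if correlated:
        # Fully correlated: all deals win or lose together.
        wins = N_DEALS if random.random() < P_WIN else 0
    else:
        # Independent: each deal wins on its own coin flip.
        wins = sum(random.random() < P_WIN for _ in range(N_DEALS))
    return wins * PAYOUT - N_DEALS

# Same expected value either way (0.05 * 30 - 1 = +0.5 per deal),
# but the independent portfolio rarely strays far from it, while
# the correlated one is a near-total loss ~95% of the time.
trials = 10_000
indep = [portfolio_return(False) for _ in range(trials)]
corr = [portfolio_return(True) for _ in range(trials)]
print(sum(r <= -N_DEALS * 0.9 for r in indep) / trials)  # rare for independent bets
print(sum(r <= -N_DEALS * 0.9 for r in corr) / trials)   # common for correlated bets
```

If every AI bet's payoff hinges on the same underlying question, the portfolio behaves like the correlated case, whatever the per-deal odds.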
Sure, markets can crash, but this idea of “go to zero” never happens.
That’s the whole calculus of these investments. Many of them are expected to fail.
The fact that it’s painful to the average person means nothing to the people who run the system.
Even if every cryptocurrency becomes worthless overnight, that doesn’t represent the market going to zero.
I see you’ve edited your comment with more doom and gloom. It’s easy to view everything as a bubble when you’re in a negative mental space.
> a) collapse of the real estate bubble, especially commercial real estate;
Any proof of this bubble? Housing construction continues to lag demand. Offices are largely RTO and Covid-era remote jobs are basically legacy and grandfathered. Every remote employee I know who was laid off in the past couple years had to get a hybrid/in-person job. You can’t just assume 2008 is going to happen again without some real data that shows real estate instability. Where are the poorly qualified borrowers?
> b) the ongoing IT crash that is only just getting started;
That’s one industry of many. One specific industry struggling doesn’t mean much.
> c) whatever damage the (current, red-flavored) orangutan in the White House manages to accomplish in his 3+ remaining years of hell;
Lame duck presidency, he can’t crash shit. Congress will be unfriendly next year and already isn’t even very supportive within his own party.
> d) fear of looming war;
In what universe is any impending war impacting the American economy? You mean the one where defense contractors are hiring and the US is selling weapons to the nations who are doing the fighting?
> e) economic fallout from COVID which is still ongoing and expanding (hint--many destroyed businesses and people out of work);
You are gonna need to explain this one and back this up with some numbers that make sense.
> f) a thousand other icebergs, minefields, and financial hazards confronting us in the near future?
Sounds like internal anxiety demons that are not tangible.
Look, I’m in full agreement that AI will face some kind of correction or crash, but predicting once in a century catastrophe is a losing game.
Opportunity costs are a thing.
This is especially true when your investors/owners expect you to generate better returns than the risk-free rate.
So MSFT is effectively getting 2x the equity by putting money into OpenAI. It also conveys some financial-engineering capability: they can choose to invest more when profits are high to smooth out cash-flow growth.
isn't that what buybacks are for?
However, this discussion will be a perfect introduction to "finances at this level", where about 60% of the action is injecting more variables until you can fit a veneer of quantification onto any narrative.
At the same time, MS revenues are looking real good, causing the stock price to go up. It's a win win win maybe win huge situation.
So just a loss for governments, or in other words, socializing the losses.
Pension funds buy shares in businesses such as Microsoft. The money going into the pension fund is not typically a function of the tax paid by companies such as Microsoft, but rather from a combination of actuaries’ recommendations, payroll tax receipts, and politicians’ priorities.
Therefore a pension fund's equity holdings, such as Microsoft, doing well means taxes can be lower.
In the USA, Social Security defined benefit pensions are cash from workers today going to non workers today, same as Germany's national scheme (gesetzliche Rentenversicherung?).
The other defined-benefit pension schemes are what are usually invested in equities, and the investment restrictions section in this document indicates Germany's "occupational pensions" can also invest in equities (page 12).
https://www.aba-online.de/application/files/2816/2945/5946/2...
How's that different than any other sort of R&D incentive? Would you rather that companies return as much money as possible to shareholders, future growth be damned? What about other sorts of tax incentives, which by definition are also "just a loss for governments"? Are tax breaks for people with kids also "socializing the losses", given that most households don't have kids?
Speaking for the EU, all big tech companies already avoid paying taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity in the form of "IP rights".
That should be expected, because
https://european-union.europa.eu/priorities-and-actions/acti...
> The EU does not have a direct role in collecting taxes or setting tax rates.
> There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.
Dec 2023.
> Speaking for the EU, all big tech companies already avoid paying taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity in the form of "IP rights".
Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.
https://budgetmodel.wharton.upenn.edu/issues/2024/10/14/the-...
It's clear that OP means "in the EU".
> Ireland (due to pressure from EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figure 5 and 6 in link below.
And Ireland fought against this tooth and nail. Yes, a country was fighting to have less income, all out of fear that the companies would leave the little tax haven. Did they leave? No ...
> See Figure 5 and 6 in link below.
Figure 7 is also interesting if we look at the tax income increase and the outbound.
1. Amazon reports $250bn+ revenue for the entire EU in 2025 (of course, revenue != profit), while all of that $250bn+ evaporates to somewhere. Their own page [1] reports 225k employees across the EU, meaning that each employee returns a whopping million-plus dollars, while being compensated at less than 10% of that value!
2. In their own article [1], they boast how they invested (translated: smuggled money out) and enabled SMEs to generate $20bn+ in revenue. (Seriously, less than 10% actually goes back into the economy...)
3. Amazon says that they have invested $250bn in the EU since 2010. It is completely unknown what or where that was invested. I do not see my street lighting being improved by Amazon's investment, or garbage being collected better.
4. Luxembourg's GDP is ~$95bn in 2025. Amazon has contributed to that with $0 of corporate tax. They employ about 4.5k people there, roughly 10% of whom they've decided to let go. With the median/average yearly gross salary somewhere around 80k eur, that is hardly anywhere near $1m+ of total income per head. I am guessing they heat the offices by burning the remaining cash...
[1]: https://www.aboutamazon.eu/news/job-creation-and-investment/...
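Sanity-checking the per-employee arithmetic in point 1 above, using the figures as stated in the comment (not independently verified):

```python
# Figures as claimed in the comment above (not independently verified).
eu_revenue = 250e9        # "$250bn+" EU revenue
eu_employees = 225_000    # "225k employees across EU"

revenue_per_employee = eu_revenue / eu_employees
print(f"${revenue_per_employee:,.0f} per employee")  # -> $1,111,111 per employee

# At an assumed ~100k average total compensation, pay would indeed be
# under 10% of revenue per head -- though, as a reply below notes,
# revenue per employee is not the same thing as an employee's "value".
```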
For the date of the verdict for Amazon vs EU, apologies. The article date was November 2024 in the source [2].
[2]: https://www.techtimes.com/articles/308509/20241129/amazons-2...
As for Ireland, I only knew of the similarities with Luxembourg and the specific laws allowing such loopholes in the pre-Brexit period. The source is certainly interesting and I need to dive deeper to understand it better.
An employee's "value" is not revenue / number of employees. There are many businesses where employees are reduced, but revenue does not decrease in direct proportion.
An employee's compensation is only related to what the employer thinks they can pay someone else and what the employee thinks another employer will pay them. Just like how the price of an apple is related to what the grocery store thinks a customer will pay for it and what the customer thinks a different grocery store will sell it for. (Obviously bounded on both sides by the minimum cost to produce and transport the apple, and by the maximum price a customer is able and willing to pay).
These big corps use holdings in low-tax jurisdictions like Ireland and Luxembourg, funnel all their EU subsidiaries' revenues there, and end up paying 0 tax in the individual EU countries.
This system is actually legal; EU lawmakers should pass laws to prevent it.
[1]: https://en.wikipedia.org/wiki/Dutch_Sandwich?wprov=sfla1
And the decision of how to distribute these (corporate tax) receipts should be made by the government. Essentially, companies evading [corporate] tax decide for themselves where to distribute that money. Obviously, they make decisions that drive more profits and income, not the public good. Even if it improves living conditions (i.e. a delivery service would help the elderly), it still requires that person to be a user of the product. A layman/citizen cannot effectively utilize the benefits.
So EU lawmakers should determine the amount member countries collect in tax?
To give an (absurd) example: you work in country X, but the parent company is in country Y. Imagine your income tax going not to where you reside and work, but to where the parent company is, in this case country Y (~20-40% of your gross salary).
One day, your basic needs (electricity, water, etc.) stop working. You call the relevant government department asking what the problem is. They reply that they do not know and cannot afford to figure out or fix it, because they do not have the money to do so.
But you've been paying at least 20% (and up to 46%) of your salary as income tax. Where did the money go? Why do you work here while someone on the other side of the world gets that slice for free?
The countries like Ireland and Luxembourg need to stop granting these loopholes.
What should be taxed?
Amazon, as an example, has servers in country X. Country X taxes the transaction or the income from the server company.
Amazon pays delivery drivers in country X to deliver goods, and the driver is taxed through various means (vehicle, fuel, payroll, etc).
What is Amazon doing in country X that should be taxed?
Its profits within that country (income minus actual expenses).

In practice, expenses are artificially inflated to shift taxable profits to jurisdictions with near-zero taxes.
Amazon can add up the costs to install and operate a datacenter or warehouse in country X, but most of the demand for services from that datacenter or warehouse will be due to expenses incurred in country Y.
> What should be taxed?
Profits of the company, like all other (local) companies pay.

> Amazon, as an example, has servers in country X. Country X taxes the transaction or the income from the server company.

Amazon has servers in Germany, but Germany is unable to tax the transaction or income from Amazon, because:

1. The user completes a transaction either on Amazon (buying a product) or in AWS (running an EC2 instance).
2. If the user is a business, there is no VAT, because VAT is applied only to the end user (also to prevent a compounding effect). If it is the end user, then the end user already pays their VAT, usually around 18-20% in the case of an EC2 instance. But that has nothing to do with Amazon: the user technically pays the VAT directly to the government, depending on where he/she is located and where the server is.
3. Obviously Amazon does not sell the products or servers for free; they have a markup or profit margin, let's say 40% on a 100eur EC2 instance. So 40eur lands in Amazon's bank account, while the other 60eur goes to operating expenses (i.e. electricity, maintenance, employees' salaries, etc...).
4. In this case, Amazon should be taxed on that 40% (i.e. on the 40eur profit). Luxembourg's corporate tax is about ~16-17%; mind that the US federal corporate tax is 21% itself. For the sake of simplicity, let's take 20% as the corporate tax. That would put 8eur into the government's pocket, while 32eur stays as profit with Amazon.
5. All other companies providing the same service have no magical outside entity, and they pay that ~8eur to the government, which in turn is used to provide services to citizens. (For example, Luxembourg has completely free public transportation that works 24/7, subsidized by taxes.)
6. However, Amazon, having a magical entity, declares that all of that 40eur profit belongs to the US entity due to IP rights. They essentially say that 100% of the things produced in Luxembourg, by employees in Luxembourg, are owned by the US entity. Therefore, they do not pay any income tax, as on paper there is no income.
7. Since they were able to save that 8eur, they could reduce the prices of their services by up to that amount. But Amazon usually passes on only about half, cutting 4eur for customers while the other 4eur goes into Amazon's profit pocket.
8. It all seems nice so far, since users also benefit from reduced prices, right? Unfortunately, no. In the longer term it hurts competition, as other companies must pay taxes while losing customers [to Amazon].
9. When there is no competition left, Amazon can just start siphoning all of the 8eur profit back to themselves, even setting prices freely, as there is no longer any alternative to go to.
10. Not only does it hurt customers, it also hurts the random person on the street, who receives services from the government subsidized by taxes. You may say that Amazon can or may invest in even better products or services, but again, that does not help the layman unless he/she is an Amazon customer. And mind that a citizen does not need to be an Amazon customer to get their electricity and water running.
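The numbered example above condenses into a few lines. The figures (100eur instance, 40% margin, 20% tax, half the saving passed on) are the commenter's illustrative assumptions, not real Amazon numbers:

```python
# Illustrative figures from the example above -- not real Amazon numbers.
price = 100.0              # 100eur EC2 instance
margin_rate = 0.40         # assumed 40% profit margin
corporate_tax_rate = 0.20  # simplified 20% corporate tax

profit = price * margin_rate           # 40eur gross profit
operating_costs = price - profit       # 60eur to electricity, salaries, ...
tax_due = profit * corporate_tax_rate  # 8eur owed to the government
after_tax_profit = profit - tax_due    # 32eur retained if tax were paid

# With the "IP rights" declaration the 8eur is never paid; in the
# scenario above about half is passed on as a price cut, half kept.
price_cut = tax_due / 2                # 4eur cheaper for customers
extra_profit = tax_due - price_cut     # 4eur extra profit for Amazon

print(tax_due, after_tax_profit, price_cut, extra_profit)  # -> 8.0 32.0 4.0 4.0
```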
> Amazon pays delivery drivers in country X to deliver goods, and the driver is taxed through various means (vehicle, fuel, payroll, etc).
Similar to the above, Amazon does not pay VAT or any other service taxes for the services they provide, but the driver does! It is even worse for the driver when he/she uses Amazon, because on the net balance sheet the driver pays income tax on his/her salary and pays VAT on the services they receive. If they receive a 1000eur salary at the end of the month, they can use at most about ~60% of it on goods and services (~20% income tax + ~20% VAT). Hence, there is a corporate tax that balances these scenarios, but evading it causes more harm than good in the long run.

> What is Amazon doing in country X that should be taxed?
All the profits (earnings, surpluses) they receive should be taxed.

The consumer pays a certain price for a product, and a portion of that money goes to taxes and costs for Amazon (including things that are taxed, like driver salaries and fuel). Those taxes are collected on the way from Amazon to the end user in every country they pass through, more or less.
Amazon is creating commerce that is taxed. They aren't skating by for free.
OpenAI is anyway seeking a government bailout for "National Security" reasons. Wow, I earlier scoffed at "Privatize Profits, Socialize Losses", but this now appears to be Standard Operating Procedure in the U.S.
https://www.citizen.org/news/openais-request-for-massive-gov...
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
How do you square this thought with the actual rate of poverty being on a steady downward trend while billionaires do their things?
This does not show your "steady downward trend"; it has fluctuated considerably over the last few years, increasing to 12.9% in 2024 compared to 7.1% in 2020-21. We will need to wait until the end of 2026 for the 2025 computation.
https://thedocs.worldbank.org/en/doc/ec3d46c25a822d6d248e86d...
Please note that if you exclude China, the trend of poverty reduction is laughable.
"Slowed to a near standstill" means it's still moving in the right direction.
There may have been some global event in the 2020s that maybe had a bit of an impact on the global economy.
> Please note that if you exclude China, the trend of poverty reduction is laughable.
If you exclude the area of the world that used to be extremely poor but has benefitted massively from the wealth generated by creating products for the billionaires abroad, why would you exclude that?
It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test, and it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
Our first obligations are toward our immediate families. As the human race is essentially a large extended family, the obligations dissipate the further out we go. We do have a general obligation to help those in need, but this obligation is prioritized. In classical texts, this is called the ordo amoris or "order of love" (in the older, more technically accurate terminology, order of charity, where "charity" - from caritas - means willing the good of the other).
Now, to address your comment specifically...
> There's already a lot that the US taxpayer is on the hook for that's a lot less valuable than a bet on the next big thing in software, productivity, and warfare.
For example? Whatever the benefits of LLMs, I find this relative exuberance unreasonable.
> It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work,
If someone able-bodied and of sound mind refuses to work, then we don't have an obligation to support them. This is true. In fact, it would be uncharitable to enable their laziness, because it harms the character and virtue of that person. Of course, in practice, if someone you have determined is able to work is found starving and in danger of death, then it is unlikely they are merely lazy. Would a man of sound mind allow himself to starve?
The manner in which we deal with such cases is a prudential matter, not a matter of principle. We need to determine how best to satisfy the principle in the given circumstances, and there is room for debate here.
> it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
If there is a humanitarian crisis somewhere in the world, for example, then there is a general obligation on the entire world to help those affected. How that happens, how that is coordinated, is a matter of prudence and implementation detail, as it were. Naturally, several factors enter the equation (proximity, wealth, etc).
This would make sense if every person were given similar opportunities, like providing quality education to all of our youngest and making higher education a mission rather than a business, for starters.
As a society we move at the speed of the weakest among us, we only move forward when we start lifting and helping the weakest and most vulnerable.
You also need to realize that not doing that work causes other taxpayer money to be spent elsewhere, such as an average of $37k per incarcerated person, and that ignores all the damage that criminal might've caused, all the additional police staffing and personal security needed outside prisons, etc.
Those are complex systems. Are you sure it wouldn't be better to take the same gargantuan amount of money that's spent on millions of inmates and on fighting crime, and spend it on fighting the causes that make so many fall into crime in the first place?
Again, those are complex, but closed systems and the argument of "we shouldn't spend on X" often ignores the cost of not spending on X.
You’re right that these are complex systems, and just pouring more tax dollars and more debt into them isn’t working. Portions of our society need to value education, value contributing to society instead of taking, and reject criminality - but those changes require more than blind spending.
Much of the problem comes from a poor grasp of what education is and is for, and because of that, money and effort are not allocated properly. One source of the problem are various educational fads. I personally remember when computers were artificially jammed into school curricula for no good reason. There was absolutely no merit to what was being done. But how much do you think the companies selling that garbage made out?
Or consider the publishing industry that fleeces schools and students with 12978th editions of the same poorly-presented material packaged in overpriced books. Financially, education is quite cheap, but there are sectors of the economy devoted to convincing pedagogues and politicians that it isn't, and that what you need to do is buy in order to "change with the times". Sorry, but basic education isn't fast fashion. Materially, basic education is stable and cheap.
Another problem is that American culture is pragmatic to a fault. Americans have a long history of viewing education, particularly the university, with distaste, as some kind of "European", un-American, and aristocratic thing. This explains the appeal of the pragmatic turn of the university: you now go to university to "get a job". Of course, that isn't the core mission of the university, and most professions don't require anything the university might provide, especially not at these absurd costs (hence why GenZ is seeing something like a 1500% increase in pursuing trades).
We have a cultural momentum that must fizzle out or must be reshaped. Where the modern university specifically is concerned, its days may very well be numbered. It may very well be forced to undergo very painful changes, or crumble, with a new crop of smaller colleges taking their place. Where primary education is concerned, parents are increasingly taking their children out of the savage factory known as public education. This, too, may force public education to finally deal with its dysfunction, or collapse.
To put it another way, the top talent gets Ferraris for their tuition; the rest gets a bike. In a lot of European countries everyone can get the Toyota Camry of education: decent but not world class. That does scale, though.
Spending isn't everything, it's how you apply that spending.
A lot.
As a European, I can assure you even public second-tier universities have excellent education.
Where they lag in the rankings is where money matters: the politics of being highly ranked, and money for high-impact research.
But when it comes to testing proficiency in e.g. science and math, the second university of Rome ranks higher than most Ivy League schools in the US ;)
For science/math would you rather go to Rome or MIT? I know which one I would pick if they cost the same. A lot of it is also the people you're surrounded by.
A morbid thought that would probably address the bulk of this: male birth control.
The backlash would be profound; it'll never happen. But if there were a way to make a "perfect pill/shot/procedure" that boys got, maybe at birth, to prevent unplanned pregnancies… just think about it.
I'm not even sure I'm advocating for it. Everyone says "education will fix all the things!" I think raising kids whose parents wanted to be parents would fix a whole lot, at least on the incarceration side.
Compare that to when we still had revolutions, where it was very hard for government to know what is going on, and to find individuals without a huge effort.
I think revolutions have become next to impossible, unless led by significant parts of the elite that control at least part of the apparatus.
That's not even counting the far more sophisticated propaganda methods, so that many of the affected people won't even begin to target the actual culprits but are led to chase shadows, or one another.
> You can’t kill/arrest 25% of your population.
Unnecessary.
> That is why Russia/China/etc are so scared to let any protests begin
Russia's issue especially is that lots of the elites want change too. But you also underestimate how much the population does not want a repeat of 1990s chaos, and Russian weakness.
What about someone who works and still can’t afford enough housing/food?
> shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
I mean where’s the profit in that, am i right?
Food stamps? The original comment is addressing those not working or studying, or staying off drugs.
>I mean where’s the profit in that, am i right?
We're $38 trillion in debt. Digging a deeper hole isn't a sound decision.
Their investors, if publicly traded like Microsoft, do have to take write-downs on their financial statements, but those aren't realized losses for tax purposes. The only tax "benefit" Microsoft might get from the OpenAI investment is writing off the amount it invested if/when OpenAI goes bankrupt.
So we're kinda looking at a bank-run-level event on tech companies if they go broke.
Was that an organic "it's not A, it's B" or synthetic?
Nobody is talking about this because it's not a thing.
People here will shit on LLMs all day for being confidently incorrect, then upvote aggressively financially illiterate comments like this.
Note: other people seem to be confused because companies can write off investments in corporate subsidiaries before the subsidiary is dissolved or sold...for book purposes. This creates what is known in the accounting world as a book-tax difference. If you have a few weeks to spare, look up tax provisions...
Also integration with other services. I just had Gemini summarize the contents of a Google Drive folder and it was effortless & effective
The NSA and GCHQ and basically every TLA with the ability to tap a fibre cable had figured out the gap in Google's armour: Google's datacenter backhaul links were unencrypted. Tap into them, and you get _everything_.
I've no idea whether Snowden's leaks were a revelation or a confirmation for Google themselves; either way, it's arguably a total breach.
That page says it was only 2 accounts and none of the messages within the mail was accessed. I wouldn't call that very significant.
While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.
I agree with the rest though
I did that when I was retraining Stable Audio for fun, and it really turned out to be trivial enough to pull off as a little evening side project.
Reminds me of Reddit cracking down on API access after realizing that their data was useful. But I'd expect YouTube both to be quicker on the draw, knowing about AI data collection, and to have more time, because of the orders of magnitude greater bandwidth required to scrape video.
https://academictorrents.com/details/2d056b22743718ac81915f2...
Mozilla's business model isn't really something to emulate, even if the stock market doesn't really see it that way.
Your interpretation of casual as stuff like r/relationships is itself "techie talk".
Google, though, has been doing it for literal decades. That could mean that they have something nobody else (except archive.org) has - a history on how the internet/knowledge has evolved.
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse, search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
Sergey Brin interview: https://x.com/slow_developer/status/1999876970562166968?s=20
This attitude also partially explains the black vikings incident.
This will be hard for them to integrate in a way that won't annoy users / will be better implemented than any other competitor in the same space.
Or perhaps we just deal with all AI across the board serving us ads.... this makes more sense unfortunately.
And yet they’re there, in the form of prominent product placement in all of their original series along with strategic placement in the frame to make sure they appear in cropped clips posted to social media and made into gifs.
Stranger Things alone has had 100-200 brands show up under the warm guise of nostalgia, with Coke alone putting up millions for all the less-than-subtle screen time their products get.
I’m certain AI providers will figure out how to slyly put the highest bidder into a certain proportion of output without necessarily acting out that scene in Wayne’s World.
It's like that old concept of saying something wrong in a forum on purpose to have everyone flame you for being wrong and needing to prove themselves better by each writing more elaborate answers.
You catch more fish with bait.
Tesla does not have live video feed from (every) Tesla car.
The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
Try “@gmail” in Gemini
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
Is it better for society for promising startups to die on the open market, or get acquired by a monopoly? The third option -- taking down the established players -- appears increasingly unlikely.
Is there any evidence that this is the case? For very big mergers (like Nvidia's attempted acquisition of Arm), sure, but I can't think of a single time a regulator stopped a big player from buying a startup.
What I know is that a lot of deals aren't even being considered that once were, and antitrust is a huge factor in that consideration.
If anything the current system is beyond redemption and should probably be nationalized for the betterment of society. Government investment in technology brought us the transistor and internet, the two things that enabled any of this to exist and it was massively subsidized for the betterment of the public.
Maybe we should follow that model again.
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
Google is telling you this in about a hundred different popups and inline hints when you use any of its products.
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1.
The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
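To make the "entry-level laptops running a 30B model" claim concrete, here's a back-of-envelope sketch of the RAM such a model needs; the quantization levels and the ~20% runtime overhead are my own rough assumptions, not measurements:

```python
# Back-of-envelope RAM estimate for running a local LLM.
# All figures are illustrative assumptions, not benchmarks.

def model_ram_gb(params_billions: float, bytes_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Approximate RAM needed: weight storage plus ~20% for
    KV cache and runtime overhead (a crude assumption)."""
    weights_gb = params_billions * 1e9 * bytes_per_weight / 1e9
    return weights_gb * overhead

# A 30B-parameter model at different quantization levels:
for label, bytes_pw in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"30B @ {label}: ~{model_ram_gb(30, bytes_pw):.0f} GB")
```

Under these assumptions a 4-bit 30B model lands around 18 GB, which is why it plausibly fits on a future 32 GB entry-level laptop while leaving room for multitasking.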
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing less than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups need to establish profitability still. I think although initial expectations were for B2C models for these AI companies, most of the ones that survive will do so by pivoting to a B2B structure. I think it's fair to say that most businesses are more inclined to spend money chasing AI than individuals, and that'll lead to an increase in AI consulting type firms.
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.
How flash in SSDs works is that you have tens to hundreds of dies stacked on top of each other in the same package, and their outputs are multiplexed so that only one of them can talk at a time.
We do it like this because we can still get 1-2 GB/s out of a chip this way, and the ability to read hundreds of times faster is not justified for storage use.
But if we connected these chips to high-speed transceivers, we could get all those hundreds of GB/s of bandwidth out at the same time.
I'm probably oversimplifying things, and it's not that simple IRL, but I'm sure people are already working on this (I didn't come up with the idea), and it might end up working out and turn into a commercial product.
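The trade-off described above can be put in rough numbers; the die count, per-die throughput, and channel count below are illustrative assumptions, not the specs of any real SSD:

```python
# Sketch of the flash bandwidth trade-off described above.
# Die counts and per-die speeds are rough illustrative assumptions.

def ssd_bandwidth_gbps(dies: int, per_die_gbps: float,
                       channels: int) -> float:
    """In an SSD, dies on a channel share one bus, so at most
    `channels` dies can transfer simultaneously."""
    return min(dies, channels) * per_die_gbps

def parallel_bandwidth_gbps(dies: int, per_die_gbps: float) -> float:
    """If every die had its own high-speed transceiver,
    all dies could stream at once."""
    return dies * per_die_gbps

dies, per_die = 128, 1.5  # e.g. 128 stacked dies at ~1.5 GB/s each
print(ssd_bandwidth_gbps(dies, per_die, channels=8))  # SSD-style multiplexing
print(parallel_bandwidth_gbps(dies, per_die))         # fully parallel access
```

With these made-up numbers, multiplexed access tops out around 12 GB/s while fully parallel access would reach 192 GB/s, which is the gap the comment is pointing at.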
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
It was model improvements, followed by inference time improvements, and now it's RLVR dataset generation driving the wheel.
Citation needed!
If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of producing tangible items. They won't solve housing.
My best bet is medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (a few) nurses and many devices and medicines controlled by a medical agent.
Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution.
With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.
Plus a lot of extra compute, that's less clear of a long term value.
Alas.
As stated in TFA, this simply has not been demonstrated, nor are there any artifacts of proof. It's reasonable to suspect that there is no special apparatus behind the curtain in this Oz.
From TFA: "One vc [sic] says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030."
I think we will end up with a market similar to cloud computing: a few big players with great margins, forming a cartel.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting of existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. Other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
I can use Claude in Jetbrains IntelliJ and in Zed, I can use it with OpenCode, and there are lots of other agent tools. Everyone can build these tools around an LLM, and they're already being commodified.
The moat right now is the quality of the model, not the client. Opus is just so much better than the competitors, at least for now.
I haven't experienced this myself, but RooCode does something similar to OpenCode's approach and the maintainer has reported some bans [1].
Google on the other hand, is being very strict about keeping you locked in to their tools, unless you use API keys, of course.
[1] https://github.com/RooCodeInc/Roo-Code/pull/10077#issuecomme...
Wasn't this released a couple of weeks ago?
[1] https://github.com/anthropics/claude-code/issues/14803#issue...
AI answers are good enough, and there is a long history of companies that couldn't monetize traffic via ads. The canonical example is Yahoo. Yahoo was one of the most-trafficked sites for 20 years and couldn't monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs whether users are using Safari or Chrome. It’s hard to get people to switch
3rd issue: any money that OpenAI makes off search ads, I'm sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI's costs are a lot higher than Google's, and they probably won't be able to command a premium in ads. Google has its own search engine, its own servers, its own "GPUs" [sic].
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
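To illustrate the marginal-cost squeeze in the 4th and 5th points, here's a toy calculation; every dollar figure is a made-up placeholder, not a real number from either company:

```python
# Toy model of ad margin per query: a higher serving cost
# eats directly into what each ad impression can earn.
# All dollar figures are made-up placeholders.

def ad_margin_per_query(revenue_per_query: float,
                        cost_per_query: float) -> float:
    """Profit per query once serving cost is subtracted."""
    return revenue_per_query - cost_per_query

# Same hypothetical ad revenue per query, very different serving costs:
search_margin = ad_margin_per_query(revenue_per_query=0.02, cost_per_query=0.002)
llm_margin = ad_margin_per_query(revenue_per_query=0.02, cost_per_query=0.012)
print(f"classic search: ${search_margin:.3f}/query")
print(f"LLM answer:     ${llm_margin:.3f}/query")
```

Even if an LLM answer earned the same ad revenue per query as a classic search result, the higher inference cost would leave a much thinner margin, which is the structural disadvantage the comment describes.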
This kind of thing may take some time to spread across the population.
There’s a couple of things going on but put simply - when there is no real lock in, humans enjoy variety. Until one firm creates a superior product with lock in, only those who are generating cash flows will survive.
OAI does not fit that description as of today.
"Some messages are tough to write, let's thinky-think it through together."
"Golly, your codey-code could use some help. Ask me what I thinky-think!"
"I'm here to help you thinky-think while you worky-work!"
And among them the overwhelming majority of companies in the sectors died. Out of the 2000ish car-related companies that existed in 1925 only 3 survived to today. And none of those 3 ended up a particularly good long term investment.
I don't know why people always imply that "the bubble will burst" means "literally all AI will die out and nothing of use will remain". The dot-com bubble didn't kill the internet. But it was a bubble and it burst nonetheless, with ramifications that spanned decades.
All it really means when you believe a bubble will pop is "this asset is over-valued and will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.
The real problem is the massive over-promises of transforming every industry, replacing most human labor, and eventually reaching super-intelligence based on current models.
I hope we can agree that these are all wholly unattainable, even from a purely technological perspective. However, we are investing as if there were no tomorrow without these outcomes, building massive data-centers filled with "GPUs" that, contrary to investor copium, will quickly become obsolete and are increasingly useless for general-purpose datacenter applications (Blackwell Ultra has NO FP64 hardware, for crying out loud...).
We can agree that the bubble deflating, one way or another, is the best outcome long term. That said, the longer we fuel these delusions, the worse the fallout will be when it does. And what I fear is that one day, a bubble (perhaps this one, perhaps another) will grow so large that it wipes out globalized free-market trade as we know it.
That's what I am trying to say: every big technology player, every industry, every government is all in on AI. That means you and I are along for the ride, whether we like it or not.
> Consider that you'll be wiping your ass with DIMMs once this one bursts; I can always put more memory to good use.
Except you can't, because DRAM makers have almost entirely pivoted from making (G)DDR chips to making HBM instead. HBM must be co-integrated at the interposer level and 3D-stacked, resulting in terrible yield. This makes it extremely pricy and impossible to package separately (no DIMMs).
So when I say the world is all in on this, I mean it. With every passing minute, there is less and less we can salvage once this is over; for consumer DRAM, it's already too late.
But can it run Crysis?
However, if you actually need the much higher precision of FP64 for scientific computing (like most non-AI data center users do) and extremely slow emulation is not an option, consider yourself fucked.
Consider that there was a massive physical infrastructure left behind by the original railroad builders, all compatible with future vehicles. Other companies were able to buy the railroads at low prices and put them to use.
Large Language Models change their power consumption requirements monthly, the hardware required to run them is replaced at a rapid rate too. If it were to stop tomorrow, what would you be left with? Out of date hardware, massively wasted power, and a gigantic hole in your wallet.
You could argue you have the blueprints for LLM building, known solutions, and it could all be rebuilt. The thing is, would you want to rebuild, and invest so much again for arguably little actual, tangible output? There isn't anything you can reuse, like others that came after could reuse the railroads.
Practically, what I'm finding is that whenever I ask Claude to search stuff on Reddit, it can't but Gemini can. So I think the practical advantages are where certain organizations have unfair data advantages. What I found out is that LLMs work a lot better when they have quality data.
I don't expect AGI or Super intelligence to take that long but I do think it'll happen in private labs now. There's an AI business model (pay per token) that folks can use also.
I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now, after just three years, around 900 million weekly users. Even if you think that railroads and AI are equally impactful (I don't; I think AI will be far more impactful), the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.
Well, I rotate about a dozen free accounts because I don't want to send one cent their way; I imagine I'm not the only one. I do the same for Gemini, Claude, and DeepSeek, so all in all I account for something like 50 "unique" weekly users.
Apparently about 5% of their users are paying customers. The total user count is meaningless; it just tells you how much money they burn and isn't an indication of anything else.
For someone who doesn't like the product and doesn't care about it, you surely make a lot of effort to use it.
It's also literally 0 effort, click > sign out > click > sign in. It saves me $200 a month, that's not too far from half of my rent
Also, maybe I'm missing something, but no amount of free accounts on ChatGPT gives you what you get with a paid subscription, especially with a $200 one; and there's paid plans from just $8/month.
> Also, maybe I'm missing something, but no amount of free accounts on ChatGPT gives you what you get with a paid subscription, especially with a $200 one
These days I'm mostly running Opus 4.5 through "antigravity", and I'd rather become a potato farmer than give $8 to Altman.
If you have to stop torrenting it doesn't mean that you have to pay $20 per movie. There is a price >0 that you're willing to pay to do something you love. On youtube there's a lot of movies for 4 or 5 dollars.
I'm also using Claude, both through Cursor (paid by my company) and privately (paid by me, $20/ month).
This doesn't have anything to do with AI itself. Consider Instagram, then TikTok before this, WhatsApp before that, etc. There is a clear adoption-curve trend: things go worldwide faster. AI is not special in that sense. It doesn't mean AI itself isn't special (arguable; in fact, Narayanan argues precisely that it's "normal"), but rather that its adoption pace is exactly on track with everything else.
> I think AI will be far more impactful
is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be overstated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.
AI enables people to... produce even more useless slop than before?
> Digital content of low quality that is produced usually in quantity by means of artificial intelligence.
Chosen by the editors as word of the year, by the way.
The point is that AI can produce slop (as people do, too), but it's just silly to imply that everything it can produce is slop. That's just lazy, sloppy thinking.
However, I do think that the majority (or mainstream) use of GenAI today is indeed not very useful or even harmful. And I do think that something like railroads are more useful by orders of magnitude.
What are you basing this opinion on?
That is why people use the "slop" qualifier, rather than no qualifier at all.
- take your data
- make a model
- sell it back to you
Eventually, all of the available data will have been squeezed for all it's worth, and the only way to differentiate oneself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
When you look at models that were built for a specific purpose, closely intertwined with experts who care about that purpose, they absolutely propel communities to new heights. Consider the impact of alphafold, it won a Nobel prize, proteomics is forever changed.
The issue is that that's not currently the business model aimed at most of us. We have to have a race to the bottom first. We can have nice things later, if we're lucky, once a certain sort of investor goes broke and a different sort takes the helm. It's stupid, but it's a stupidity that predates AI by a long shot.
We know that the model training on the model training on the model leads to model collapse...
Value is determined by what we value, it's a choice. If a bunch of scientists value good approximations for how a protein will fold, and then a model generates more such things in a year than those scientists could make in a century, that's a lot of value. Not extracted from anyone, created.
I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
One doesn't need tens of billions for them.
Why "soon"? All your arguments may be correct, but none of them imply when the pending implosion will happen.
The other, highly invested companies (like OpenAI and Anthropic) may be in for a free fall.
You never want to be left in the wake of "the next big thing".
As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.
To be honest, in 1989, I was just a child. I didn't drink the champagne. But as a banker today, I am the one cleaning up the broken glass. So I can tell you about 1989 from the perspective of a "Survivor's Loan Officer."
I see two realities every day.
One is the "Zombie" companies. Many SMEs here still list Golf Club Memberships on their books at 1989 prices. Today, they are worth maybe 1/20th of that value. Technically, these companies are insolvent, but they keep the "Ghost of 1989" on the books, hoping to one day write it off as a tax loss. It is a lie that has lasted 30 years.
But the real estate is even worse. I often visit apartment buildings built during the bubble. They are decaying, and tenants have fled to newer, modern buildings. The owner cannot sell the land because demolition costs hundreds of thousands of dollars—more than the land is worth.
The owner is now 70 years old. His family has drifted apart. He lives alone in one of the empty units, acting as the caretaker of his own ruin.
The bubble isn't just a graph in a history book. It is an old man trapped in a concrete box he built with "easy money." That is why I fear the "Cash Burn" of AI. When the fuel runs out, the wreckage doesn't just disappear. Someone has to live in it.
But in my experience as a banker, the ones left in the wreckage are rarely the ones who drank the champagne. It is usually the ones who were hired to clean the glasses.
I hope history proves me wrong this time.
Hm, not sure what you mean: "Manuel" and "Kießling" are literally my first and last name.
I've always had a morbid fascination with financial bubbles and the Japanese one of the late 1980s might be the most epic in history (definitely in modern times at least).
But I appreciate your perspective. It is refreshing to know that someone finds a poetic texture in what I simply call "bad loans."
For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.
It is a magnificent gamble. If they reach escape velocity (AGI), they own the future. But if they run out of fuel mid-air, gravity is unforgiving.
As a loan officer, I prefer businesses that don't need to leave the atmosphere to survive.
I think this analysis is too surface-level. We are seeing Google's Gemini pull away in terms of image generation, and their access to billions of organic user images gives them a huge moat. Google also has a huge advantage in training data.
The moat is the training data, capital investment, and simply having a better AI that others cannot recreate.
I don't see how Google doesn't win this thing.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
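The "curves cross" claim above can be sketched as a simple doubling calculation; the starting capability gap and the compute-per-dollar doubling time below are assumptions for illustration, not forecasts:

```python
# When does a fixed-price computer catch up to frontier-model
# requirements? Purely illustrative: the doubling time and the
# starting gap are assumptions, not predictions.
import math

def years_until_cross(gap_factor: float, doubling_years: float) -> float:
    """Years for compute-per-dollar to grow by `gap_factor`,
    assuming frontier model requirements stay flat (asymptote)."""
    return doubling_years * math.log2(gap_factor)

# e.g. suppose today's best model needs ~100x what a $2000 machine
# offers, and compute per dollar doubles every ~2.5 years:
print(round(years_until_cross(gap_factor=100, doubling_years=2.5), 1))
```

The point of the sketch is only that if models asymptote while hardware keeps compounding, the crossing happens in years, not never; change either assumption and the date moves, but the logic holds.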
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applaud them for their contributions to society.
Ranking was Google's 5% contribution to it. They stood on the shoulders of people who invented physical server and datacenter infrastructure, Unix/Linux, file systems, databases, error correction, distributed computing, the entire internet infrastructure, modern Ethernet, all kinds of stuff.
Everyone stood on the shoulders of file systems and databases, ethernet (and firewalls and netscreens, ...) Well, maybe a few stood on the shoulder of PHP.
Google did in fact pretty much figure out how to scale large number of servers (their racking, datacenters, clustering, global file systems etc) before most others did. I believe it was their ability to run the search engine cheap enough that enabled them to grow while largely retaining profitability early on.
For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and failing.
The rest of us self hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient and have all those free offerings to play with for now to keep us going and even the subs are so far somewhat reasonable but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
I may add that investors are mostly US-centric, and so the bubble-bursting chaos that ensues will be, too.
The... what now?
They are getting better, faster etc etc.
And I get downvoted again for the truth people don't want to hear, lol.
They'll probably all help in the future, but the current LLM craze doesn't seem to be helped by them at this moment in time, and an economic cycle like this has a boom phase of at most 10 years. So any changes in basic computing hardware will probably help with the next LLM++ tech.
You can order one already. They are not great, but I expect in a few years, things will look very very different.
Not decades
They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.
It is not a railroad, and the railroads did not explode in a bubble (OK, a few early engines did explode, but that is engineering). I think LLM-driven investment in massive DCs is ill-advised.
AI feels like a solution looking for a problem. Especially with 90% of consumer facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.
The problem is increasing profits by replacing paid labor with something "good enough".
> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month. I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with DeepSeek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.
Truthfully I just use both.
1. Glazes me
2. Lists a variety of assumptions (some can be useful / interesting)
3. Answers the question
At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.
There’s also no real moat with all the major models converging to be “good enough” for nearly all use cases. Far beyond a typical race to the bottom.
Those like Google with other products will just add AI features and everyone else trying to make AI their product will just get completely crushed financially.
We just don't know who will win in which area yet. It doesn't mean there is no moat.
Maybe the new more efficient models made it better for Claude users but that was my experience a couple months ago.
For professional usage, though, Claude Code is so far ahead of Antigravity that it didn't even make sense to make a formal comparison, even when using the same model (Opus).
OpenAI says they're very profitable on inference.
Great, but they need to burn billions on advertising, freemium, and mostly R&D for new models.
Meanwhile, Google and Microsoft are public companies and their stock can drop and their shareholders can force them to exit.
Search was even easier to switch. At least ChatGPT has memory.
Most chat apps are the same as WhatsApp. All of them are free too.
"Ask ChaGPT" is the equivalent to "google it" in 2025.
The comparison with WhatsApp feels like trolling. WhatsApp has a network effect...
Claude has 1% based on this: https://gs.statcounter.com/ai-chatbot-market-share
Consumers overwhelmingly use ChatGPT over Claude. ChatGPT dominance has not wavered.
There is tons of money to be made at the application layer, and VCs will start looking at that once the infrastructure layer collapses.
Here's a blog post I wrote about that: https://parsnip.substack.com/p/models-arent-moats
OpenAI challenging Google search is a winner takes all situation, not to mention the vast amounts of user data.
On the other hand, us lesser mortals can leverage AI like a commoditized service to build applications with it.
I'll be sad when $20 a month Claude goes away.
The problem is, they can't find the moat despite searching very hard: whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney, because copyright provides exactly such a moat.
Been saying this since the 2016 Alice case. Apple jumped into content production in 2017. They saw the long term value of copyright interests.
https://arstechnica.com/information-technology/2017/08/apple...
Alice changed things such that code monkeys' algorithms were not patentable (except in some narrow cases where true runtime novelty can be established). Since the transformers paper, the potential of self-authoring content was obvious to those who can afford to think about things rather than hustle all day.
Apple wants to sell AI in an aluminum box while VCs need to prop up data center agrarianism; they need people to believe their server farms are essential.
Not an Apple fanboy but in this case, am rooting for their "your hardware, your model" aspirations.
Altman, Thiel, the VC model of make the serfs tend their server fields, their control of foundation models, is a gross feeling. It comes with the most religious like sense of fealty to political hierarchy and social structure that only exists as hallucination in the dying generations. The 50+ year old crowd cannot generationally churn fast enough.
Plus moving all that data about is expensive. Keeping things in the datacenter means it's faster and easier to secure.
But really, so has everyone else. There are two "races" for AI: creating models, and finding a consumer use case for them. Apple just isn't competing with the likes of OpenAI or Google in creating models. They also haven't really done much with using AI technology to deliver 'revolutionary' general-purpose user-facing features using LLMs, but neither has anyone else beyond chat bots.
I'm not convinced ChatGPT as a consumer product can sustain current valuations, and everyone is still clamouring to find another way to present this tech to consumers.
Good lord, expressing that kind of sentiment does not make for a useful and engaging conversation here on hacker news.
Will they really be able to replicate the quality while spending significantly less in compute investment? If not then the moat is still how much capital you can acquire for burning on training?
Studio Ghibli, Sora app. Go viral, juice numbers, then turn the knobs down on copyrighted material. Atlas, I believe, was less successful than they would've hoped.
And because of too-frequent version bumps that are sometimes released as an answer to Google's launches rather than as a meaningful improvement, I believe they're also having a harder time going viral that way.
Overall OpenAI throws stuff at the wall and sees what sticks. Most of it doesn't and gets (semi) abandoned. But some of it does, and it makes for a better consumer product than Gemini.
It seems to have worked well so far, though I'm sceptical it will be enough for long
Going viral as a billion-dollar company spending upward of $1T is still not sustainable. You can't pay off a trillion dollars with "engagement". The entire advertising industry is "only" worth $1T as is: https://www.investors.com/news/advertising-industry-to-hit-1...
And there's something else about the diminishing returns of going viral... AI kind of breaks the usual assumptions in software: that building it is the hard part and that scaling is basically free. In that sense, AI looks more like regular commodities or physical products, in that you can't just Ctrl-C/Ctrl-V: resources are O(N) on the number of users, not O(log N) like regular software.
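The O(N)-vs-O(log N) point can be sketched with a toy cost model. All the dollar figures below are illustrative assumptions, not real numbers from any provider:

```python
# Toy cost model, purely illustrative: conventional web serving has a
# large fixed cost but a tiny marginal cost per user, while LLM
# inference cost grows roughly linearly with users. Every dollar figure
# here is an assumption for the sketch, not a measured value.

def web_serving_cost(users, fixed=1_000_000, per_user=0.001):
    # fixed infrastructure amortized across everyone; near-zero marginal cost
    return fixed + per_user * users

def llm_serving_cost(users, per_user=10.0):
    # GPU time is consumed per user, so cost scales O(N) with users
    return per_user * users

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} users: web ${web_serving_cost(n):>13,.0f}  llm ${llm_serving_cost(n):>13,.0f}")
```

Under these assumptions the LLM bill stays roughly proportional to headcount no matter how large the service gets, which is the commodity-like economics the comment is pointing at.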
Normal people are already getting tired of AI Slop
(The obvious well-paying market would be erotic / furry / porn, but it's too toxic to publicly touch, at least in the US.)
As for photo/video very large number of people use it for friends and family (turn photo into creative/funny video, change photo, etc.).
Also I would think photoshop-like features are coming more and more in chatgpt and alike. For example, “take my poorly-lit photo and make it look professional and suitable for linkedin profile”
Even if developers are 1:1000 of your users, I'm going to guess that ratio shifts a lot when you look at subscribers.
If Gemini can create or edit an image, chatgpt needs to be able to do this too. Who wants to copy&paste prompts between ai agents?
Also if you want to have more semantics, you add image, video and audio to your model. It gets smarter because of it.
OpenAI is also considerably bigger than Anthropic and is known as a generic 'helper'. Anthropic probably saw the benefit of being more focused on developers, which allows it to stay in the game longer for the amount of money they have.
It'll just end up spreading itself too thin and be second or third best at everything.
The 500lb gorilla in the room is Google. They have endless money and maybe even more importantly they have endless hardware. OpenAI are going to have an increasingly hard time competing with them.
That Gemini 3 is crushing it right now isn't the problem. It's Gemini 4 or 5 that will likely leave them in the dust for the general use case, meanwhile specialist models will eat what remains of their lunch.
An AI!
The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
> But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
not an expert by any means, but wouldn't smaller but highly refined models also output more reproducible results? intuitively it sounds akin to the unix model...
> you just prompt engineer a bit and it often magically works
yea, i agree, that's the biggest selling point right now. i just get the feeling reproducibility, performance and cost will start to become more and more important as time goes on... jmo tho
I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.
Now there are all sorts of tricks to get the output of this to be good, and maybe they shouldn't be spending time and resources on this. But the core capability is shared.
I think that hasn't been the case since DeepDream?
I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small revenue individual accounts. Individual subscriptions with individual needs, but modest budgets.
The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.
The entertainment industry is by far the easiest way to tap into global discretionary income.
You could imagine an entirely new cultural engine where entire genres are born off of random reddit "hey have you guys ever considered" comments.
However, the practical reality seems to be that you get TikTok-style shorts that cost a bunch to create and have a dubious grasp on causality, which have to compete with actual TikTok, a platform that gets its endless content produced for free.
1. Google books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for Ai.
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
I use it several times a day just to change text in image form to text form so you can search it and the like.
It's built into chrome but they move the hidden icon about regularly to confuse you. This month you click the url and it appears underneath, helpfully labeled "Ask Google about this page" so as to give you little idea it's Google Lens.
It is far behind, and GPT hasn't exactly stopped growing either. Weekly active users, monthly visits... Gemini is nowhere near. They're comfortably second, but second is still well below first.
>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
Is it? How would you even know? It's a forced feature you cannot opt out of or avoid. I ignore AI overviews, but would still count as a 'user' to you.
Search Traffic: https://x.com/Similarweb/status/2003078223135990246
Gemini - 1.4b visits - +14.4% MoM
Yeah, ChatGPT is still more popular, but this does not show Gemini struggling exactly.
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they've got the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.
Gemini is built into Android and Google search. People may not be going to gemini.google.com, but that does not mean adoption is low.
If they can achieve that they will cut off a key source of blood supply to MSFT+OAI. There is not much money in the consumer market segment from subscribers and entering the ad-business is going to be a lot tougher than people think.
Is it relative adoption or absolute? I mean, are the people using Gemini new, or coming from another provider like OpenAI? (Said differently: is Google eating OpenAI's lunch or just reaching new customers?)
https://searchengineland.com/nearly-all-chatgpt-users-visit-...
But even more importantly, it obviously isn’t losing money from advertisers to ChatGPT. You can look at their quarterly results.
But you cannot use it with an API key.
If you're on a Workspace account, you can't have a normal individual plan.
You have to take the Team plan at $100/month or nothing.
Google's product management tier is beyond me.
> With Chrome being the largest browser by market share, that's a powerful de facto default.
> where art thou anti-trust enforcement...

Absolutely no one besides ChromeOS users is forced to use Chrome.
Google has spent over a decade advertising Chrome on all their properties and has an unlimited budget and active desire to keep Chrome competitive. Mozilla famously needs Google’s sponsorship to stay solvent. Apple maintains Safari to have no holes in their ecosystem.
Stop being silly defending trillion dollar companies that are actively making the internet worse, it’s not productive or funny.
>whereas OpenAI has a clear opportunity with advertising.
Personally, having "a clear opportunity with advertising" feels like a last ditch effort for a company that promised the moon in solving all the hard problems in the world.
ChatGPT isn't bad (I use it for some things / pay for it), but their spend and moves make me think they don't seem confident in it...
Is all the doomer-ism about AI companies not being profitable right? Do the AI companies believe it? Seems like it sometimes.
A lot of people now reach for ChatGPT by default instead of Google, even with the AI summaries. I wonder whether they just prefer the interface of the chat apps to Google's, which can be a bit cluttered in comparison.
I’m one of those people, and the reason for that is that Google’s AI summaries are awful more times than not. With ChatGPT I can (kind of) set how much “thinking” to do for each query and guide the model into producing better results via prompting.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
It said: OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.
However, pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely still have done a lot of fine tuning, RLHF, alignment and tool calling improvements. All that stuff is training too. And it is totally fine, just look at the great results they got with Codex-high.
If you actually got what you said from a different source, please link it. I would like to read it. If you just messed things up, that's fine too.
[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
[1] Again c.f. fraud
Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
I’m sure all these AI labs have extensive data gathering, cleanup, and validation processes for new data they train the model on.
Or at least I hope they don’t just download the current state of the web on the day they need to start training the new model and cross their fingers.
I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.
This isn't really accurate.
Firstly, GPT4.5 was a new training run, and it is unclear how many other failed training runs they did.
Secondly "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after gpt4o were all post-trained differently using reinforcement learning. That is a substantial expense.
Finally, it seems like GPT5.2 is a new training run - or at least the training cut off date is different. Even if they didn't do a full run it must have been a very large run.
https://www.theinformation.com/articles/openai-says-business...
https://epoch.ai/blog/training-compute-of-frontier-ai-models...
At the very least they made GPT 4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be but they made a wrong scaling prediction, people simply weren't ready to pay that much money.
I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.
Doubtful. This would be the very antithesis of the Silicon Valley way.
I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn't mean AI disappears and all of their customers go bankrupt, too. It's not like a bank. If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn't disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies and switching costs are not that high for customers, although some adjustment is necessary when changing models.
I don’t even know what people think this is supposed to mean. The US government gives them money for something to prevent them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.
If you look at the financial crisis, the US government decided to bail out AIG, after passing on Bear Stearns, because big banks like Goldman Sachs and Morgan Stanley (and even Jack Welch's General Electric) all had huge counterparty risk with AIG.
Someone else put it succinctly:
"When a million dollar company fails, it's their problem. When a billion dollar company fails, it's our problem"
In essence, there's so much investment in AI that it's a significant part of US GDP. If AI falters, that is something the entire stock market will feel, and by extension, all Americans, no matter how detached from tech they are. In other words, the potential for another great depression.
In that regard, the government wants to avoid that. So they will at least give a small bailout to lessen the crash. But more likely (as seen with the Great Financial Crisis), they will supply billions upon billions to prop up companies that by all business logic deserved to fail. Because the alternative would be too politically damaging to tolerate.
----
That's the theory. None of this is certain, and there are arguments to suggest that a crash in AI wouldn't be as bad as any of the aforementioned crashes. But that's what people mean by "become too big to fail and get bailed out".
The stock market isn't rational; it's a room full of people talking loudly and moving between tables.
All it takes is someone outside the room to shout something that triggers panic, and most of the people in the room will run for the exit.
And that's ignoring the domino effect of investors pulling out of other AI firms because OpenAI falters.
If they aren't dumb, why are they investing in MSFT now then if it's a bubble that's doomed to fail? And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression. (Keep in mind that we already had a ~20% drawdown in public equities during the interest rate hikes of 2022/2023 and the economy remained pretty robust throughout.)
>And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression
Only if you believe the 10% decline won't domino, and that the S&P 500 is insulated from the rest of the global economy. I wish I shared your optimism.
> and the economy remained pretty robust throughout.
Yeah and we voted the person who orchestrated that out. We don't have the money to pump trillions back in a 2nd time in such a short time. Something's gonna give, and soon.
So your hypothesis is that a 10% decline in the S&P 500 will trigger the next Great Depression, i.e. years of negative GDP growth and unemployment? I agree that it could cause a slight economic slowdown, but I don't think AI and tech stocks are a large enough part of the economy to cause a Great Depression-style catastrophe.
An expected outcome of an AI blowout is uncertainty, with everyone holding onto their assets, plus credit recalls and interest rate hikes.
During the Great Depression it wasn't the stock market collapse that caused it so much as the credit crunch that followed. Prior to the blowout, people literally bought stocks on credit.
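A quick sketch of why stocks bought on credit turn a dip into a wipeout. The 10x leverage figure is an assumption chosen for illustration, not a historical statistic:

```python
# Hypothetical margin purchase, integer arithmetic, assumed figures.
cash = 1_000                    # investor's own money ($)
leverage = 10                   # $10 of stock per $1 of cash (assumption)
position = cash * leverage      # $10,000 of stock held
borrowed = position - cash      # $9,000 owed to the broker

drop_pct = 10                   # a mere 10% market decline
value_after = position * (100 - drop_pct) // 100   # $9,000 remaining
equity_after = value_after - borrowed              # the investor's stake
print(equity_after)             # 0: entirely wiped out, loan barely covered
```

At that point forced selling to repay the loans pushes prices down further, which is the credit-crunch feedback loop the comment describes.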
Yup. I won't say it's the only factor, nor biggest. But I'm focusing on this topic and not 40+ years of government economic abandonment of the working class. It's the straw that will break the camel's back.
Yes, but with all stock growth being in AI companies, it would tank the market for one. Secondly, all of those dollars they are using are backed by creditors who would face a default. Short of another TARP (likely IMO; the US needs to keep pumping AI to compete with China)... it could scare investors off too.
Plus, with the growth in AI affecting the overall makeup of the stock market, something like this hurts every American's 401k.
That happened a long time ago! Microsoft already owns the model weights!
Citation is needed
It’s going to crash, guaranteed
What a silly calculation.
OpenAI’s customer base is global. Using US population as the customer base is deliberately missing the big picture. The world population is more than 20X larger than the US population.
It’s also obvious that they’re selling heavily to businesses, not consumers. It’s not reasonable to expect consumers to drive demand for these services.
I'd be willing to bet that, like many US websites, OpenAI's users are at least 60% American. Just because there's 20x more people out there doesn't mean they have the same exposure to American products.
For instance, China is an obvious one. So that's 35%+ of the population already mostly out of consideration.
>It’s also obvious that they’re selling heavily to businesses, not consumers.
I don't think a few thousand companies can outspend 200m users paying $200 a month. I won't call it a "mathematical impossibility", but the math also isn't math-ing here.
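A back-of-envelope version of that comparison, with both sides' figures taken as loud assumptions from the comment's framing rather than real revenue data:

```python
# All figures are hypothetical, for illustration only.
consumer_users = 200_000_000          # "200m users" (assumption)
consumer_price = 200                  # $/month, Pro-tier price point
consumer_revenue = consumer_users * consumer_price    # $/month

enterprise_accounts = 5_000           # "a few thousand companies" (assumption)
enterprise_spend = 1_000_000          # $/month each, a generous assumption
enterprise_revenue = enterprise_accounts * enterprise_spend

print(consumer_revenue)    # 40000000000  -> $40B/month
print(enterprise_revenue)  # 5000000000   -> $5B/month
```

Even granting each enterprise account a million dollars a month, the hypothetical consumer side is 8x larger, which is the gap the comment is gesturing at.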
Since when is English everyone's primary language?
If it happens in the next 3 years, tho, and Altman promises enough pork to the man, it could happen.
Not that I have an opinion one way or another regarding whether or not they'd be bailed out, but this particular argument doesn't really seem to fit the current political landscape.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
Just a case of too many companies have skin in OpenAI's game for it to be allowed to fail now.
(Adjacent to this is how crazy it was that Meta were accused of torrenting ebooks. Did they need them for the underlying knowledge? I can't imagine they needed them for natural language examples.)
Their cost to serve each request is roughly three orders of magnitude higher than a conventional web site's.
While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
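To put rough numbers on the subsidy question: both per-request costs below are assumptions, with only the ~1000x ratio taken from the comment above.

```python
# Assumed unit economics, for illustration only.
web_request_cost = 0.00001    # $ per conventional web request (assumption)
llm_request_cost = 0.01       # $ per LLM request (assumption, ~1000x more)

subscription = 20.0           # $/month flat-rate plan (assumption)
break_even_requests = subscription / llm_request_cost
# Any subscriber averaging more requests than this is served below cost.
print(round(break_even_requests))   # 2000 requests/month
```

Under these toy numbers a flat-rate plan only breaks even on light users, which is why heavy usage at today's prices can look like a subsidy.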
And the backstop on asset prices at the expense of the currency's purchasing power.
The reason people are so skeptical is that OpenAI is applying the standard startup justification for big spending to a business model where it doesn't seem to apply.
No, inference is really cheap today, and people saying otherwise simply have no idea what they are talking about. Inference is not expensive.
> Even at $200 a month for ChatGPT Pro, the service is struggling to turn a profit, OpenAI CEO Sam Altman lamented on the platform formerly known as Twitter Sunday. "Insane thing: We are currently losing money on OpenAI Pro subscriptions!" he wrote in a post. The problem? Well according to @Sama, "people use it much more than we expected."
Altman also said 4 months ago:
> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
https://simonwillison.net/2025/Aug/17/sam-altman/

a spot on the iOS home screen? yes.
infrastructure to serve LLM requests? no.
good LLM answers? no.
the economist can't tell the difference between artificial scarcity and real scarcity.
it is extremely rare to buy a spot on the iOS home screen, and the price for that is only going up - think of the trend of values of tiktok, whatsapp and instagram. that's actually scarce.
that is what openai "owns." you're right, #5 app. you look at someone's home screen, and the things on it are owned by 8 companies, 7 of which are the 7 biggest public companies in the world, and the 8th is openai.
whereas infrastructure does in fact get cheaper. so does energy. they make numerous mistakes - you can't forecast retail prices Azure is "charging" openai for inference. but also, NVIDIA participates in a cartel. GPUs aren't actually scarce, you don't actually need the highest process nodes at TSMC, etc. etc. the law can break up cartels, and people can steal semiconductor process knowledge.
but nobody can just go and "create" more spots on the iOS home screen. do you see?
I use it in conjunction with Claude. I’ve gotten pretty good results using both of them in tandem.
However, on principle I prefer to self-host. I wonder if an advantage of OpenAI imploding wouldn't be basement-level prices on useful chips? Ideally I want to run my LLM and train it on my data.
I see Google doing to OpenAI today what Microsoft did to Netscape back then, using their dominant position across multiple channels (browser, search, Android) to leverage their way ahead of the first mover.
A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.
This "moat" that OpenAI has is really weak
GPT5.2 Codex is the best coding model right now in benchmarks. I use it exclusively now.
Why would you want my money to be used to build datacenters that won't benefit me? I might use an LLM once a month; many people never use one.
Let the ones who use it pay for it.
No chance they're going to take risks to share that hardware with anyone given what it does.
The scaled down version of El Capitan is used for non-classified workloads, some of which are proprietary, like drug simulation. It is called Tuolumne. Not long ago, it was nevertheless still a top ten supercomputer.
Like OP, I also don't see why a government supercomputer does it better than hyperscalers, coreweave, neoclouds, et al, who have put in a ton of capital as even compared to government. For loads where institutional continuity is extremely important, like weather -- and maybe one day, a public LLM model or three -- maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders, not a world of juicy margins at all...rather, playing chicken with negative gross margins.
these things constitute public goods that benefit the individual regardless of participation.
Uncanny really.
What is the justification for considering data centers capable of running LLMs to be a public good?
There are many counter examples of things many people use but are still private. Clothing stores, restaurants and grocery stores, farms, home appliance factories, cell phone factories, laundromats and more.
Why not an LLM datacenter if it also offers information? You could say it's the public library of the future maybe.
This is not at all true of generative AI.
OpenAI asks for 1m GPUs for a month, Anthropic asks for 2m, the government data center only has 500,000, and a new startup wants 750,000 as well.
Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.
Now the most successful AI lab is the one that's best at pitching the government for additional resources.
UPDATE: See comment below for the answer to this question: https://news.ycombinator.com/item?id=46438390#46439067
It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.
Edit: I meant to say oversubscribed, not overprovisioned. There are far more jobs in the queue than can be handled at once.
https://www.ornl.gov/news/doe-incite-program-seeks-2026-prop...
> The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program has announced the 2026 Call for Proposals, inviting researchers to apply for access to some of the world’s most powerful high-performance computing systems.
> The proposal submission window runs from April 11 to June 16, 2025, offering an opportunity for scientific teams to secure substantial computational resources for large-scale research projects in fields such as scientific modeling, simulation, data analytics and artificial intelligence. [...]
> Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora and Frontier and 100,000 to 250,000 node-hours on Polaris, with the possibility of larger allocations for exceptional proposals. [...]
> The selection process involves a rigorous peer review, assessing both scientific merit and computational readiness. Awards will be announced in November 2025, with access to resources beginning in 2026.
Not sure OpenAI/Anthropic etc would be OK with a six month gap between application and getting access to the resources, but this does indeed demonstrate that government issued super-computing resources is a previously solved problem.
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The question of public versus private distinctions in these various schemes is very interesting and, imo, underexplored. Especially when you consider how these private LLMs are trained on public data.
People have no idea how big military and defense budgets worldwide are next to any other public budget.
throw as many pie charts out as you want; people just can't see the astronomical difference in budgets.
I think it's based on how the thing works: a good defense works until it doesn't, while the other systems/budgets in place fail a bit more gracefully. That asymmetry makes people irrational about defense spending, which produces windfalls of cash availability.
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.
They need a better marketing strategy.
There's an element of arms race between players, and the genie is out of the bottle now so have to move with it. Game theory is more driving this than economics in the short term.
Marginal gains on top of these investments probably have a ROI now (i.e. new investments from this point).
I am not saying OpenAI is Amazon, but I am saying I have seen this before, where the masses were going "oh, business is bad, losses are huge, where is the path to profitability…"
I do know that in the late aughts, people were writing stories about how Amazon was a charity run on behalf of the American consumer by the finance industry.
That being said, if I was Sam Altman I'd also be stocking up on yachts, mansions and gold plated toilets while the books are still private. If there's $10bn a year in outgoings no one's going to notice a million here and there.
MS Office has about 345 million active users. Those are paying subscriptions. IMHO that's roughly the total addressable market for OpenAI among non-coding users. Coding users add another 20-30 million.
If OpenAI can convert double-digit percentages of those onto $20 and $50 per month subscriptions by delivering AI that works well enough, they should be raking in cash by the billions per month, adding up to close to the projected 2030 cash burn per year. That would be just subscription revenue. There is also going to be API revenue. And those expensive models used for video and other media creation are going to be indispensable for media and advertising companies, which is yet more revenue.
The office market at $20/month is worth about $82 billion per year in subscription revenue. Add a few premium tiers at $50/month and $100/month, and that projected $130 billion per year in cash burn by 2030 suddenly seems quite reasonable.
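Back-of-the-envelope, the subscription arithmetic above checks out (a sketch only; the 345 million user figure and the premium-tier uptake are the comment's assumptions, not reported numbers):

```python
# Rough subscription-revenue arithmetic for the TAM sketch above.
# All inputs are assumptions from the comment, not reported figures.
users = 345_000_000            # MS Office-scale count of paying users
base_annual = 20 * 12          # $20/month base tier, annualized

base_market = users * base_annual
print(f"Base tier market: ${base_market / 1e9:.1f}B/year")  # $82.8B/year

# Hypothetical: 10% of users upgrade to a $50/month premium tier.
premium_share = 0.10
premium_uplift = users * premium_share * (50 - 20) * 12
total = base_market + premium_uplift
print(f"With 10% on a $50 tier: ${total / 1e9:.1f}B/year")  # $95.2B/year
```

Even with modest premium uptake, that lands in the same order of magnitude as the projected cash burn, which is the point the comment is making.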
I've been quite impressed with Codex over the last few months. I only pay $20/month for it currently. If that goes up, I won't lose sleep over it; it is valuable enough to me. Most programmers I know are on some paid subscription to it, Anthropic's Claude, or similar. Quite a few spend quite a bit more than that. My ChatGPT Plus subscription feels like really good value to me currently.
Agentic tooling for business users is currently severely lacking in capability. Most of the tools are crap. You can get models to generate text, but forget about getting them to format that text correctly in a word processor. I'm constantly fixing bullets, headings, and whatnot in Google Docs for my AI-assisted writing. Gemini is close to ff-ing useless with both the text and the formatting.
But I've seen enough technology demos of what is possible to know that this is mostly a UX and software development problem, not a model quality problem. Companies seem to be holding back from fully integrating things mainly for liability reasons (I suspect). But unlocking AI value like that is where the money is. Something as useful as Codex for business usage, with full access to your mail, drive, spreadsheets, slides, word processors, CRMs, and whatever other tools you use, running in YOLO mode (which is how I use Codex in a virtual machine currently, --yolo). That would replace a shit ton of manual drudgery for me. It would be valuable to me and lots of other users. Valuable as in "please take my money".
Currently, doing stuff like this is scary because it might make expensive/embarrassing mistakes. I do it for code because I can contain the risk to the VM. It actually seems to be pretty well behaved; the VM is just there to make me feel good. It could do all sorts of crazy shit. It mostly just does what I ask it to. Clearly the security model around this needs work and instrumentation. That's not a model training problem though.
Something like this for business usage is going to be the next step in agent-powered utility that people will pay for at MS Office levels of user numbers and revenue. Google and MS could do it technically, but they have huge legal exposure via their existing SaaS contracts and seem scared shitless of their own lawyers. OpenAI doing something aggressive in this space in the next year or so is what I'm expecting.
Anyway, the bubble predictors seem to be ignoring the revenue potential here. Could it go wrong for OpenAI? Sure, if somebody else shows up and takes most of the revenue. But I think we're past the point where that revenue looks unrealistic. Five years is a long time for them to get to $130 billion per year in revenue; ChatGPT did not exist five years ago. OpenAI can mess this up by letting somebody else take most of that revenue. The question is who? Google, maybe, but I'm underwhelmed so far. MS seems to want to but is unable to. Apple is flailing. Anthropic seems increasingly like an also-ran.
There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be Nvidia taking most of that revenue; they'll have competition and enter a race to the bottom on hardware cost. If OpenAI is burning $130 billion per year, it will probably be getting a lot more compute for it than currently projected. IMHO that's a reasonable cost level given their total addressable market. They should be raking in hundreds of billions by then.
> There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be Nvidia taking most of that revenue.
Whoever has the most compute will ultimately be the winner. This is why these companies are projecting hundreds of billions in infrastructure spend. With more compute, you can train better models, serve them to more users, and serve them faster. The more users, the more compute you can buy. It's a runaway cycle. We're seeing only 3 (4 if you count Meta) frontier LLM providers left in the US market.
Nvidia's margins might come down by 2030; they won't stay in the 70s. But the overall market can expand quicker than Nvidia's margins shrink, so they can be more profitable in 2030 despite lower market share.
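That claim is just arithmetic: if the market grows faster than margins compress, absolute profit still rises. A quick sketch with purely illustrative numbers (not Nvidia's actuals):

```python
# Illustrative only: margin falls, but revenue grows faster, so profit rises.
rev_now, margin_now = 100, 0.73       # index revenue, a ~70s gross margin
rev_2030, margin_2030 = 300, 0.50     # hypothetical: market 3x, margin to 50%

profit_now = rev_now * margin_now     # 73.0
profit_2030 = rev_2030 * margin_2030  # 150.0
print(profit_now, profit_2030)        # profit roughly doubles anyway
```

Whether the market actually triples is of course the whole bet.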
Is it necessary to a point you want to make?
You can just point to behavior of a given entity, such as to conclude it's untrustworthy, without the problematic area of armchair psychoanalysis.
You might want to include an "Edit:" when substantially changing or replacing a comment.
2026: US AI companies pump stocks -> market correction -> taxpayer bailout
Mark my words. OpenAI will be bailed out by US taxpayers.
Banks get bailed out because if confidence in the banking system disappears and everyone tries to withdraw their money at once, the whole economy seizes up. And whoever is Treasury Secretary (usually an ex Wall Street person) is happy to do it.
I don't see OpenAI having the same argument about systemic risk or the same deep ties into government.
Banks needed a bailout to keep lending money. The auto industry needed one to keep employing a lot of people. AI doesn't employ that many.
I just don't believe a bailout can happen before it is too late for it to be effective in saving the market.
The same can happen now on the side of private credit that gradually offloads its junk to insurance companies (again):
> As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR's acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.
https://www.imf.org/en/Publications/global-financial-stabili...
But it might mean that LLMs don't really improve much from where they are today, since there won't be the billions of dollars to throw at training for small incremental improvements that consumers mostly don't care to pay anything for.
I don’t really have faith the current LLMs will improve dramatically anyway, not without totally new approaches to AI.
I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.
... so crash early 2026?
Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.
OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.
Maybe instead of the chocolate market, look at the global washing machine market of $65 billion.
I’d rather give up both AI and chocolate than my washing machine.
What's interesting is the strategic positioning. They need to maintain leadership while somehow finding a sustainable business model. The API pricing already feels like it's in a race to the bottom as competition intensifies.
For startups building on top of LLM APIs, this should be a wake-up call about vendor lock-in risks. If OpenAI has to dramatically change its pricing or pivot its business model to survive, a lot of downstream products could be impacted. Diversifying across multiple model providers isn't just good engineering; it's business risk management.
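A minimal sketch of that diversification idea: put one thin interface in front of the providers so swapping vendors is a config change, not a rewrite. All names here are hypothetical placeholders, not any real vendor SDK's API:

```python
from typing import Callable, Dict

# Hypothetical provider registry: each entry is a function that takes a
# prompt and returns text. Real entries would wrap actual vendor SDKs.
Provider = Callable[[str], str]

PROVIDERS: Dict[str, Provider] = {
    "vendor_a": lambda prompt: f"[vendor_a] {prompt}",
    "vendor_b": lambda prompt: f"[vendor_b] {prompt}",
}

def complete(prompt: str, preferred: str = "vendor_a") -> str:
    """Route to the preferred provider, falling back to the others on error."""
    order = [preferred] + [name for name in PROVIDERS if name != preferred]
    for name in order:
        try:
            return PROVIDERS[name](prompt)
        except Exception:
            continue  # provider failed or repriced us out; try the next one
    raise RuntimeError("all providers failed")

print(complete("hello"))  # [vendor_a] hello
```

The point is that the fallback order lives in one place, so a pricing shock from any single vendor becomes a routing decision rather than a product rewrite.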
AI is at 1% of total US GDP right now.
We have 6x more to go.