The simple evidence for this is that everyone who has invested the same resources in AI has produced roughly the same result. OpenAI, Anthropic, Google, Meta, Deepseek, etc. There's no evidence of a technological moat or a competitive advantage in any of these companies.
The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did. That doesn't mean AI is going to go away, or that it won't change the world - railroads are still here and they did change the world - but from a venture investment perspective, get ready for a massive downturn.
I know nothing about finances at this level, so asking like a complete newbie: doesn't that just mean that instead of risking $10B they're risking $7-8B? It is a cheaper bet for sure, but doesn't look to me like a game changer when the range of the bet's outcome goes from 0 to 1000% or more.
Also, classifying business expenses as "cost to the tax payer" seems less than useful, unless you are a proponent of simply taxing gross receipts. Which has its merits, but then the discussion is about taxing gross receipts versus income with at least some deductible expenses, not anything to do with OpenAI.
It's the dumb-as-rocks MBAs that will go headfirst into the 5% chance deal.
However, this discussion will be a perfect introduction to "finances at this level", where about 60% of the action is injecting more variables until you can fit a veneer of quantification onto any narrative.
So just a loss for governments, or in other words, socializing the losses.
Pension funds buy shares in businesses such as Microsoft. The money going into the pension fund is not typically a function of the tax paid by companies such as Microsoft, but rather from a combination of actuaries’ recommendations, payroll tax receipts, and politicians’ priorities.
Therefore a pension fund's equity holdings, such as Microsoft, doing well means taxes can be lower.
In the USA, Social Security defined-benefit pensions are cash from workers today going to non-workers today, same as Germany's national scheme (gesetzliche Rentenversicherung?).
The other defined benefit pension schemes are what are usually invested in equities, and the investment restrictions section in this document indicates Germany's "occupational pensions" can also invest in equities. (page 12)
https://www.aba-online.de/application/files/2816/2945/5946/2...
Speaking for the EU, all big tech already avoids paying taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity in the form of "IP rights".
These big corps use holdings in low-tax jurisdictions like Ireland and Luxembourg, funnel all their EU subsidiaries' revenues there, and end up paying 0 tax in the individual EU countries.
This system is actually legal; EU lawmakers should pass laws to prevent it.
That should be expected, because
https://european-union.europa.eu/priorities-and-actions/acti...
> The EU does not have a direct role in collecting taxes or setting tax rates.
> There was a lawsuit in Belgium but Amazon has won that in late-2024 since they had a separate agreement in/with Luxembourg.
Dec 2023.
> Speaking for the EU, all big tech already avoids paying taxes one way or another, using either Dublin/Ireland (Google, Amazon, Microsoft, Meta, ...) or Luxembourg (Amazon & Microsoft as far as I can tell) to avoid such corporate/income taxes. This is possible simply because all the earnings go back to the U.S. entity in the form of "IP rights".
Ireland (due to pressure from the EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figures 5 and 6 in the link below.
https://budgetmodel.wharton.upenn.edu/issues/2024/10/14/the-...
It's clear that OP means "in the EU".
> Ireland (due to pressure from the EU) closed this in 2020. The amount of tax collected by Ireland quadrupled. See Figures 5 and 6 in the link below.
And Ireland fought against this tooth and nail. Yes, a country was fighting to have less income, all out of fear that the companies would leave the little tax haven. Did they leave? No ...
> See Figures 5 and 6 in the link below.
Figure 7 is also interesting if we look at the tax income increase and the outbound flows.
OpenAI is in any case seeking a Govt Bailout for "National Security" reasons. Wow, I earlier scoffed at "Privatize Profits, Socialize Losses", but this appears to now be Standard Operating Procedure in the U.S.
https://www.citizen.org/news/openais-request-for-massive-gov...
So the U.S. Taxpayer will effectively pay for it. And not just the U.S. Taxpayer - due to USD reserve currency status, increasing U.S. debt is effectively shared by the world. Make billionaires richer, make the middle class poor. Make the poor destitute. Make the destitute dead. (All USAID cuts)
It shouldn't be the job of the US taxpayer to feed someone that doesn't want to work, study, or pass a drug test, and it absolutely shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
That's pretty close to the story other Brits give themselves for why losing the empire was actually a good thing for the UK.
This would make sense if every person were given similar opportunities, like providing quality education to all of our youngest and making higher education a mission rather than a business, for a start.
As a society we move at the speed of the weakest among us, we only move forward when we start lifting and helping the weakest and most vulnerable.
You also need to realize that not doing that work also causes other taxpayer money to be spent elsewhere, such as the average of $37k spent per incarcerated person, and that ignores all the damage a criminal might've caused, all the additional police staffing and personal security that has to be paid for outside prisons, etc.
Those are complex systems. Are you sure it wouldn't be better to redirect the same gargantuan amount of money that's spent on millions of inmates and on fighting crime toward fighting the causes that make so many fall into it?
Again, those are complex, but closed systems and the argument of "we shouldn't spend on X" often ignores the cost of not spending on X.
What about someone who works and still can’t afford enough housing/food?
> shouldn't be the job of the US taxpayer to feed another country's citizens half a world away.
I mean where’s the profit in that, am i right?
Compare that to when we still had revolutions, when it was very hard for a government to know what was going on, or to find individuals without a huge effort.
I think revolutions have become next to impossible, unless they are led by significant parts of the elite that controls at least part of the apparatus.
That's not even counting the far more sophisticated propaganda methods, so that many of the affected people won't even begin to target the actual culprits but are led to chase shadows, or one another.
Was that an organic "it's not A, it's B" or synthetic?
So we're kinda looking at a bank-style run on tech companies if they go broke.
Also integration with other services: I just had Gemini summarize the contents of a Google Drive folder, and it was effortless & effective.
The NSA and GCHQ and basically every TLA with the ability to tap a fibre cable had figured out the gap in Google's armour: Google's datacenter backhaul links were unencrypted. Tap into them, and you get _everything_.
I've no idea whether Snowden's leaks were a revelation or a confirmation for Google themselves; either way, it's arguably a total breach.
That page says it was only 2 accounts and none of the messages within the mail was accessed. I wouldn't call that very significant.
While their competitors have to deal with actively hostile attempts to stop scraping training data, in Google's case almost everyone bends over backwards to give them easy access.
I agree with the rest though
I did that when I was retraining Stable Audio for fun and it really turned out to be trivial enough to pull off as a little evening side project.
Reminds me of Reddit cracking down on API access after realizing that their data was useful. But I'd expect YouTube both to be quicker on the draw, knowing about AI data collection, and to have more time, because of the orders of magnitude greater bandwidth required to scrape video.
Google, though, has been doing it for literal decades. That could mean that they have something nobody else (except archive.org) has - a history on how the internet/knowledge has evolved.
Google suffers from the classic Innovator's Dilemma and needs competition to refocus on what ought to be basic survival instincts. What is worse is that search users are not the customers. The customers of Google Search are the advertisers, and Google will always prioritise the needs of the customers and squander their moats as soon as the threat is gone.
Sergey Brin interview: https://x.com/slow_developer/status/1999876970562166968?s=20
This attitude also partially explains the black vikings incident.
This will be hard for them to integrate in a way that won't annoy users / will be better implemented than any other competitor in the same space.
Or perhaps we just deal with all AI across the board serving us ads.... this makes more sense unfortunately.
And yet they’re there, in the form of prominent product placement in all of their original series along with strategic placement in the frame to make sure they appear in cropped clips posted to social media and made into gifs.
Stranger Things alone has had 100-200 brands show up under the warm guise of nostalgia, with Coke alone putting up millions for all the less-than-subtle screen time their products get.
I’m certain AI providers will figure out how to slyly put the highest bidder into a certain proportion of output without necessarily acting out that scene in Wayne’s World.
It's like that old concept of saying something wrong in a forum on purpose to have everyone flame you for being wrong and needing to prove themselves better by each writing more elaborate answers.
You catch more fish with bait.
Tesla does not have live video feed from (every) Tesla car.
This comparison keeps popping up, and I think it's misleading. The pace of technology uptake is completely different from that of railroads: the user base of ChatGPT alone went from 0 to 200 million in nine months, and it's now - after just three years - around 900 million users on a weekly basis. Even if you think that railroads and AI are equally impactful (I don't, I think AI will be far more impactful) the rapidity with which investments can turn into revenue and profit makes the situation entirely different from an investor's point of view.
The pace was slower indeed. It takes time to build the railroads. But at that time advancements also lasted longer. Now it is often cash grabs until the next thing. Not comparable indeed but for other reasons.
Well, I rotate about a dozen free accounts because I don't want to send 1 cent their way; I imagine I'm not the only one. I do the same for Gemini, Claude and DeepSeek, so all in all I account for like 50 "unique" weekly users.
Apparently about 5% of their users are paying customers; the total user count is meaningless, it just tells you how much money they burn and isn't an indication of anything else.
For someone who doesn't like the product and doesn't care about it, you surely make a lot of effort to use it.
It's also literally 0 effort: click > sign out > click > sign in. It saves me $200 a month; that's not too far from half of my rent.
Also, maybe I'm missing something, but no amount of free accounts on ChatGPT gives you what you get with a paid subscription, especially with a $200 one; and there's paid plans from just $8/month.
> I think AI will be far more impactful
is not correct IMO. Those are two very different areas. The impact of railroads on transport and everything transport-related cannot be overstated. By now roads and cars have taken over much of it, and ships and airplanes are doing much more, but you have to look at the context at the time.
AI enables people to... produce even more useless slop than before?
Try “@gmail” in Gemini
Google’s surface area to apply AI is larger than any other company’s. And they have arguably the best multimodal model and indisputably the best flash model?
Is it better for society for promising startups to die on the open market, or get acquired by a monopoly? The third option -- taking down the established players -- appears increasingly unlikely.
Is there any evidence that this is the case? For very big mergers (like Nvidia and Arm tried), sure, but I can't think of a single time a regulator stopped a big player from buying a startup.
I think this is a problem for Google. Most users aren't going to do that unless they're told it's possible. 99% of users are working to a mental model of AI that they learned when they first encountered ChatGPT - the idea that AI is a separate app, that they can talk to and prompt to get outputs, and that's it. They're probably starting to learn that they can select models, and use different modes, but the idea of connecting to other apps isn't something they've grokked yet (and they won't until it's very obvious).
What people see as the featureset of AI is what OpenAI is delivering, not Google. Google are going to struggle to leverage their position as custodians of everyone's data if they can't get users to break out of that way of thinking. And honestly, right now, Google are delivering lots of disparate AI interfaces (Gemini, Opal, Nano Banana, etc) which isn't really teaching users that it's all just facets of the same system.
Google tells you this in about a hundred different popups and inline hints when you use any of its products.
It already is. In terms of competition, I don't think we've seen any groundbreaking new research or architecture since the introduction of inference-time compute ("thinking") in late 2024/early 2025, circa OpenAI's o1.
The majority of the cost/innovation now is training this 1-2 year old technology on increasingly large amounts of content, and developing more hardware capable of running these larger models at more scale. I think it's fair to say the majority of capital is now being dumped into hardware, whether that's HBM and research related to that, or increasingly powerful GPUs and TPUs.
But these components are applicable to a lot of other places other than AI, and I think we'll probably stumble across some manufacturing techniques or physics discoveries that will have a positive impact on other industries.
> that ends up in a race to the bottom competing on cost and efficiency of delivering
One could say that the introduction of the personal computer became a "race to the bottom." But it was only the start of the dot-com bubble era, a bubble that brought about a lot of beneficial market expansion.
> models that have all reached the same asymptotic performance in the sense of intelligence, reasoning, etc.
I definitely agree with the asymptotic performance. But I think the more exciting fact is that we can probably expect LLMs to get a LOT cheaper in the next few years as the current investments in hardware begin to pay off, and I think it's safe to assume that in 5-10 years, most entry-level laptops will be able to manage a local 30B sized model while still being capable of multitasking. As it gets cheaper, more applications for it become more practical.
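Napkin math for that, as a minimal sketch (the quantization levels and the "fits in RAM" threshold are my assumptions, not vendor specs):

    # Rough weight-memory footprint of a 30B-parameter model.
    # Real runtimes add KV-cache and activation overhead on top.
    PARAMS = 30e9
    for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
        gb = PARAMS * bits / 8 / 1e9
        print(f"{name}: ~{gb:.0f} GB of weights")
    # fp16: ~60 GB, int8: ~30 GB, int4: ~15 GB. At int4, the weights
    # already fit in the 16-32 GB of RAM mid-range laptops ship with,
    # which is why the 5-10 year claim doesn't seem crazy to me.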
---
Regarding OpenAI, I think it definitely stands in a somewhat precarious spot, since basically the majority of its valuation is justified by nothing less than expectations of future profit. Unlike Google, which was profitable before the introduction of Gemini, AI startups still need to establish profitability. Although initial expectations were for B2C models for these AI companies, I think most of the ones that survive will do so by pivoting to a B2B structure. It's fair to say that most businesses are more inclined to spend money chasing AI than individuals are, and that'll lead to an increase in AI consulting type firms.
I suspect most of the excitement and value will be on edge devices. Models sized 1.7B to 30B have improved incredibly in capability in just the last few months and are unrecognizably better than a year ago. With improved science, new efficiency hacks, and new ideas, I can’t even imagine what a 30B model with effective tooling available could do in a personal device in two years time.
It was model improvements, followed by inference time improvements, and now it's RLVR dataset generation driving the wheel.
I think the comparison is only half valid since personal computers were really just a continuation of the innovation that was general purpose computing.
I don't think LLMs have quite as much mileage to offer, so to continue growing, "AI" will need at least a couple step changes in architecture and compute.
Citation needed!
I think we will end up with a market similar to cloud computing: a few big players with great margins forming a cartel.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. Other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
AI answers are good enough, and there is a long history of companies who couldn't monetize traffic via ads. The canonical example is Yahoo: one of the most trafficked sites for 20 years, and it couldn't monetize.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs, whether users are using Safari or Chrome. It's hard to get people to switch.
3rd issue: any money that OpenAI makes off search ads, I'm sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI's costs are a lot higher than Google's, and they probably won't be able to command a premium in ads. Google has its own search engine, its own servers, its own "GPUs" [sic].
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
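To put rough numbers on that marginal-cost gap, here's a toy calculation; every figure in it is an illustrative assumption I'm making up, not a measured cost:

    # Toy per-query cost: classic search vs. LLM-generated answers.
    # All inputs are placeholder assumptions to illustrate the argument.
    classic_cost = 0.0002        # assumed $/query: index lookup + ranking
    tokens_per_answer = 800      # assumed output tokens per LLM answer
    cost_per_mtok = 2.00         # assumed $ per million output tokens

    llm_cost = tokens_per_answer * cost_per_mtok / 1e6
    print(f"classic: ${classic_cost:.4f}/query, llm: ${llm_cost:.4f}/query, "
          f"ratio ~{llm_cost / classic_cost:.0f}x")
    # Under these made-up numbers an LLM answer costs ~8x a classic query,
    # before you even try to match Google's ad margins.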
There’s a couple of things going on but put simply - when there is no real lock in, humans enjoy variety. Until one firm creates a superior product with lock in, only those who are generating cash flows will survive.
OAI does not fit that description as of today.
And among them the overwhelming majority of companies in the sectors died. Out of the 2000ish car-related companies that existed in 1925 only 3 survived to today. And none of those 3 ended up a particularly good long term investment.
If we consider "AI" to be the current LLM and ImageGen bubble, I'm not sure we can say that.
We were all wowed that we could write a brief prompt and get 5,000 lines of React code or an anatomically questionable deepfake of Legally Distinct Chris Hemsworth dancing in a tutu. But once we got past the initial wow, we had to look at the finished product and it's usually not that great. AI as a research tool will spit back complete garbage with a straight face. AI images/video require a lot of manual cleanup to hold up to anything but the most transient scrutiny. AI text has such distinct tones that it's become a joke. AI code isn't better than good human-developed code and is prone to its own unique fault patterns.
It can deliver a lot of mediocrity in a hurry, but how much of that do we really need? I'd hope some of the post-bubble reckoning comes in the form of "if we don't have AI to do it (vendor failures or pricing-to-actual-cost makes it unaffordable), did we really need it in the first place?" I don't need 25 chatbots summarizing things I already read or pleading to "help with my writing" when I know what I want to say.
The issue is that generation of error-prone content is indeed not very valuable. It can be useful in software engineering, but I'd put it way below the infamous 10x increase in productivity.
Summarizing stuff is probably useful, too, but its usefulness depends on you sitting between many different communication channels and being constantly swamped in input. (Is that why CEOs love it?)
Generally, LLMs are great translators with a (very) lossily compressed knowledge DB attached. I think they're great user interfaces, and they can help streamline bureaucracy (instead of getting rid of it), but they will not help bring down the cost of production of tangible items. They won't solve housing.
My best bet is on medicine. Here, all the areas that LLMs excel at meet. A slightly dystopian future cuts the expensive personal doctors and replaces them with (few) nurses and many devices and medicine controlled by a medical agent.
Imagine a trillion dollars (frankly it might be more, we'll see) shoved into clean energy generation and huge upgrades to our distribution grid.
With a bubble burst all we'd be left with is a modern grid and so much clean energy we could accelerate our move off fossil fuels.
Plus a lot of extra compute, though its long-term value is less clear.
Alas.
I think this conflates together a lot of different types of AI investment - the application layer vs the model layer vs the cloud layer vs the chip layer.
It's entirely possible that it's hard to generate an economic profit at the model layer, but that doesn't mean that there can't be great returns from the other layers (and a lot of VC money is focused on the application layer).
One doesn't need tens of billions for them.
I don't know why people always imply that "the bubble will burst" means that "literally all AI will die out and nothing will remain that is of use". The Dotcom bubble didn't kill the internet. But it was a bubble and it burst nonetheless, with ramifications that spanned decades.
All it really means when you believe a bubble will pop is "this asset is over-valued and it will soon, rapidly deflate in value to something more sustainable". And that's a good thing long term, despite the rampant destruction such a crash will cause for the next few years.
I don't expect AGI or Super intelligence to take that long but I do think it'll happen in private labs now. There's an AI business model (pay per token) that folks can use also.
I appreciate the optimism for what would be the biggest achievement (and possibly disaster) in human history. I wish other technologies like curing cancer, Alzheimer's, solving world hunger and peace would have similar timelines.
- take your data
- make a model
- sell it back to you
Eventually all of the available data will have been squeezed for all it's worth, and the only way to differentiate yourself as an AI company will be to propel your users to new heights so that there's new stuff to learn. That growth will be slower, but I think it'll bear more meaningful fruit.
I'm not sure if today's investors are patient enough to see us through to that phase in any kind of a controlled manner, so I expect a bumpy ride in the interim.
When you look at models that were built for a specific purpose, closely intertwined with experts who care about that purpose, they absolutely propel communities to new heights. Consider the impact of AlphaFold: it won a Nobel Prize, and proteomics is forever changed.
The issue is that that's not currently the business model that's aimed at most of us. We have to have a race to the bottom first. We can have nice things later, if we're lucky, once a certain sort of investor goes broke and a different sort takes the helm. It's stupid, but it's a stupidity that predates AI by a long shot.
We know that the model training on the model training on the model leads to model collapse...
As a loan officer in Japan who remembers the 1989 bubble, I see the same pattern. In the traditional "Shinise" world I work with, Cash is Oxygen. You hoard it to survive the inevitable crash. For OpenAI, Cash is Rocket Fuel. They are burning it all to reach "escape velocity" (AGI) before gravity kicks in.
In 1989, we also bet that land prices would outrun gravity forever. But usually, Physics (and Debt) wins in the end. When the railway bubble bursts, only those with "Oxygen" will survive.
To be honest, in 1989, I was just a child. I didn't drink the champagne. But as a banker today, I am the one cleaning up the broken glass. So I can tell you about 1989 from the perspective of a "Survivor's Loan Officer."
I see two realities every day.
One is the "Zombie" companies. Many SMEs here still list Golf Club Memberships on their books at 1989 prices. Today, they are worth maybe 1/20th of that value. Technically, these companies are insolvent, but they keep the "Ghost of 1989" on the books, hoping to one day write it off as a tax loss. It is a lie that has lasted 30 years.
But the real estate is even worse. I often visit apartment buildings built during the bubble. They are decaying, and tenants have fled to newer, modern buildings. The owner cannot sell the land because demolition costs hundreds of thousands of dollars—more than the land is worth.
The owner is now 70 years old. His family has drifted apart. He lives alone in one of the empty units, acting as the caretaker of his own ruin.
The bubble isn't just a graph in a history book. It is an old man trapped in a concrete box he built with "easy money." That is why I fear the "Cash Burn" of AI. When the fuel runs out, the wreckage doesn't just disappear. Someone has to live in it.
But in my experience as a banker, the ones left in the wreckage are rarely the ones who drank the champagne. It is usually the ones who were hired to clean the glasses.
I hope history proves me wrong this time.
For OpenAI, cash is oxygen too; they're burning it all to reach escape velocity. They could use it to weather the upcoming storm, but I don't think they will.
It is a magnificent gamble. If they reach escape velocity (AGI), they own the future. But if they run out of fuel mid-air, gravity is unforgiving.
As a loan officer, I prefer businesses that don't need to leave the atmosphere to survive.
The cost of entry is far beyond extraordinary. You're acting like anybody can gain entry, when the exact opposite is the case. The door is closing right now. Just try to compete with OpenAI, let's see you calculate the price of attempting it. Scale it to 300, 500, 800 million users.
Why aren't there a dozen more Anthropics, given the valuation in question (and potential IPO)? Because it'll cost you tens of billions of dollars just to try to keep up. Nobody will give you that money. You can't get the GPUs, you can't get the engineers, you can't get the dollars, you can't build the datacenters. Hell, you can't even get the RAM these days, nor can you afford it.
Google & Co are capturing the market and will monetize it with advertising. They will generate trillions of dollars in revenue over the coming 10-15 years by doing so.
The barrier to entry is the same one that exists in search: it'll cost you well over one hundred billion dollars to try to be in the game at the level that Gemini will be at circa 2026-2027, for just five years.
Please, inform me of where you plan to get that one hundred billion dollars just to try to keep up. Even Anthropic is going to struggle to stay in the competition when the music (funding bubble) stops.
There are maybe a dozen or so companies in existence that can realistically try to compete with the likes of Gemini or GPT.
Apparently the DeepSeek folks managed that feat. Even with the high initial barriers to entry you're talking about, there will always be ways to compete by specializing in some underserved niche and growing from there. Competition seems to be alive and well.
They only lasted a couple of decades as the main transportation method. I'd say the internal combustion engine was a lot more transformative.
For me, I think the possible winners will be close to fully funded up front, and the losers will be trying to turn debt into profit and failing.
The rest of us self hoster types are hoping for a massive glut of GPUs and RAM to be dumped in a global fire sale. We are patient and have all those free offerings to play with for now to keep us going and even the subs are so far somewhat reasonable but we will flee in droves as soon as you try to ratchet up the price.
It's a bit unfortunate but we are waiting for a lot of large meme companies to die. Soz!
Google was built on the shoulders of a lot of infrastructure tech developed by former search engine giants. Unfortunately the equity markets decided to devalue those giants instead of applauding them for their contributions to society.
Ranking was Google's 5% contribution to it. They stood on the shoulders of people who invented physical server and datacenter infrastructure, Unix/Linux, file systems, databases, error correction, distributed computing, the entire internet infrastructure, modern Ethernet, all kinds of stuff.
Everyone stood on the shoulders of file systems and databases, ethernet (and firewalls and netscreens, ...) Well, maybe a few stood on the shoulder of PHP.
Google did in fact pretty much figure out how to scale large numbers of servers (their racking, datacenters, clustering, global file systems, etc.) before most others did. I believe it was their ability to run the search engine cheaply enough that enabled them to grow while largely retaining profitability early on.
Eventually the curves cross. Eventually the computer you can get for, say, $2000, becomes able to run the best models in existence.
The only way this doesn’t happen is if models do not asymptote or if computers stop getting cheaper per unit compute and storage.
This wouldn’t mean everyone would actually do this. Only sophisticated or privacy conscious people would. But what it would mean is that AI is cheap and commodity and there is no moat in just making or running models or in owning the best infrastructure for them.
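A toy version of that curve-crossing argument, under the loud assumptions that frontier models plateau at a fixed footprint and that capability-per-dollar keeps compounding:

    # When does a $2000 machine fit a plateaued frontier model?
    # All three inputs are illustrative assumptions, not forecasts.
    model_gb = 400      # assumed steady-state frontier model footprint
    machine_gb = 128    # assumed usable GB in a $2000 machine today
    growth = 1.3        # assumed 30%/year more GB per dollar

    years = 0
    while machine_gb < model_gb:
        machine_gb *= growth
        years += 1
    print(f"crossover in ~{years} years")  # ~5 years with these inputs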
It is not a railroad and the railroads did not explode in a bubble (OK a few early engines did explode but that is engineering). I think LLM driven investments in massive DCs is ill advised.
AI feels like a solution looking for a problem. Especially with 90% of consumer facing products. Were people asking for better chatbots, or to quickly deepfake some video scene? I think the bubble popping will re-reveal some incredible backend tools in tech, medical, and (eventually) robotics. But I don't think this is otherwise solving the problems they marketed on.
The problem is increasing profits by replacing paid labor with something "good enough".
> There's no evidence of a technological moat or a competitive advantage in any of these companies.
I disagree based on personal experience. OpenAI is a step above in usefulness. Codex and GPT 5.2 Pro have no peers right now. I'm happy to pay them $200/month. I don't use my Google Pro subscription much. Gemini 3.0 Pro spends 1/10th of the time thinking compared to GPT 5.2 Thinking and outputs a worse answer or ignores my prompt. Similar story with Deepseek.
The public benchmarks tell a different story which is where I believe the sentiment online comes from, but I am going to trust my experience, because my experience can't be benchmaxxed.
I find codex & 5.2 Pro next to useless and nothing holds a candle to Opus 4.5 in terms of utility or quality.
There's probably something in how varied human brains and thought processes are. You and I likely think through problems in some fundamentally different way that leads to us favouring different models that more closely align with ourselves.
No one seems to ever talk about that though and instead we get these black and white statements about how our personally preferred model is the only obvious choice and company XYZ is clearly superior to all the competition.
Personally I find GPT 5.2 to be nearly useless for my use case (which is not coding).
But Gemini will put me in my place. Sometimes I ask my question to Gemini because I don’t trust ChatGPT’s affirmations.
Truthfully I just use both.
1. Glazes me
2. Lists a variety of assumptions (some can be useful / interesting)
3. Answers the question
At least this way I don't spend a day pursuing an idea the wrong way because ChatGPT never pointed out something obvious.
There’s also no real moat with all the major models converging to be “good enough” for nearly all use cases. Far beyond a typical race to the bottom.
Those like Google with other products will just add AI features and everyone else trying to make AI their product will just get completely crushed financially.
We just don't know who will win in which area yet. It doesn't mean there is no moat.
Maybe the new more efficient models made it better for Claude users but that was my experience a couple months ago.
For professional usage though, Claude Code is so much ahead of Antigravity that it didn't even make sense to make a formal comparison. That, even when using the same model (Opus).
OpenAI says they're very profitable on inference.
Great, but they need to burn billions on advertising, freemium, and mostly R&D for new models.
Search was even easier to switch. At least ChatGPT has memory.
Most chat apps are the same as WhatsApp. All of them are free too.
"Ask ChaGPT" is the equivalent to "google it" in 2025.
The comparison with WhatsApp feels like trolling. WhatsApp has a network effect...
Claude has 1% based on this: https://gs.statcounter.com/ai-chatbot-market-share
Consumers overwhelmingly use ChatGPT over Claude. ChatGPT dominance has not wavered.
There is tons of money to be made at the application layer, and VCs will start looking at that once the infrastructure layer collapses.
Here's a blog post I wrote about that: https://parsnip.substack.com/p/models-arent-moats
OpenAI challenging Google search is a winner takes all situation, not to mention the vast amounts of user data.
On the other hand, us lesser mortals can leverage AI like a commoditized service to build applications with it.
The problem is, they can't find the moat, despite searching very hard. Whatever you bake into your AI, your competitors will be able to replicate in a few months. This is why OpenAI is striking a deal with Disney: copyright provides such a moat.
Will they really be able to replicate the quality while spending significantly less in compute investment? If not, then the moat is still how much capital you can acquire to burn on training.
Been saying this since the 2014 Alice case. Apple jumped into content production in 2017. They saw the long-term value of copyright interests.
https://arstechnica.com/information-technology/2017/08/apple...
Alice changed things such that code monkeys' algorithms were not patentable (except in some narrow cases where true runtime novelty can be established). Since the transformers paper, the potential of self-authoring content was obvious to those who can afford to think about things rather than hustle all day.
Apple wants to sell AI in an aluminum box while VCs need to prop up data center agrarianism; they need people to believe their server farms are essential.
Not an Apple fanboy but in this case, am rooting for their "your hardware, your model" aspirations.
Altman, Thiel, the VC model of make the serfs tend their server fields, their control of foundation models, is a gross feeling. It comes with the most religious like sense of fealty to political hierarchy and social structure that only exists as hallucination in the dying generations. The 50+ year old crowd cannot generationally churn fast enough.
Plus, moving all that data about is expensive. Keeping things in the datacenter means it's faster and easier to secure.
But really, so has everyone else. There's two "races" for AI - creating models, and finding a consumer use case for them. Apple just isn't competing in creating models similar to the likes of OpenAI or Google. They also haven't really done much with using AI technology to deliver 'revolutionary' general purpose user-facing features using LLMs, but neither has anyone else beyond chat bots.
I'm not convinced ChatGPT as a consumer product can sustain current valuations, and everyone is still clamouring to find another way to present this tech to consumers.
Good lord, expressing that kind of sentiment does not make for a useful and engaging conversation here on Hacker News.
Studio Ghibli, the Sora app. Go viral, juice the numbers, then turn the knobs down on copyrighted material. Atlas, I believe, was less successful than they would've hoped.
And because of too-frequent version bumps, sometimes released as an answer to Google's launches rather than as meaningful improvements, I believe they're also having a harder time going viral that way.
Overall, OpenAI throws stuff at the wall and sees what sticks. Most of it doesn't and gets (semi-)abandoned. But some of it does, and it makes for a better consumer product than Gemini.
It seems to have worked well so far, though I'm sceptical it will be enough for long
Going viral as a billion dollar company spending upward of 1T is still not sustainable. You can't pay off a trillion dollars on "engagement". The entire advertising industry is "only" worth 1T as is: https://www.investors.com/news/advertising-industry-to-hit-1...
Normal people are already getting tired of AI Slop
(The obvious well-paying market would be erotic / furry / porn, but it's too toxic to publicly touch, at least in the US.)
As for photo/video very large number of people use it for friends and family (turn photo into creative/funny video, change photo, etc.).
Also I would think photoshop-like features are coming more and more to ChatGPT and the like. For example, "take my poorly-lit photo and make it look professional and suitable for a LinkedIn profile".
If Gemini can create or edit an image, ChatGPT needs to be able to do this too. Who wants to copy & paste prompts between AI agents?
Also, if you want more semantics, you add image, video and audio to your model. It gets smarter because of it.
OpenAI is also considerably bigger than Anthropic and is known as a generic 'helper'. Anthropic probably saw the benefit of being more focused on developers, which allows it to stay in the game longer for the amount of money they have.
An AI!
The specialist vs generalist debate is still open. And for complex problems, sure, having a model that runs on a small galaxy may be worth it. But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
> But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.
Not an expert by any means, but wouldn't smaller but highly refined models also output more reproducible results? Intuitively it sounds akin to the Unix model...
I think you are confusing generation with analysis. As far as I am aware, your model does not need to be good at generating images to be able to decode an image.
Now there are all sorts of tricks to get the output of this to be good, and maybe they shouldn't be spending time and resources on this. But the core capability is shared.
I think that hasn't been the case since DeepDream?
I think it's important to OpenAI to support as many use-cases as possible. Right now, the experience that most people have with ChatGPT is through small revenue individual accounts. Individual subscriptions with individual needs, but modest budgets.
The bigger money is in enterprise and corporate accounts. To land these accounts, OpenAI will need to provide coverage across as many use-cases as they can so that they can operate as a one-stop AI provider. If a company needs to use OpenAI for chat, Anthropic for coding, and Google for video, what's the point? If Google's chat and coding is "good enough" and you need to have video generation, then that company is going to go with Google for everything. For the end-game I think OpenAI is playing for, they will need to be competitive in all modalities of AI.
It'll just end up spreading itself too thin and be second or third best at everything.
The 500lb gorilla in the room is Google. They have endless money and maybe even more importantly they have endless hardware. OpenAI are going to have an increasingly hard time competing with them.
That Gemini 3 is crushing it right now isn't the problem. It's Gemini 4 or 5 that will likely leave them in the dust for the general use case, meanwhile specialist models will eat what remains of their lunch.
[1] https://arxiv.org/pdf/2509.20328
[2] https://deepmind.google/blog/genie-3-a-new-frontier-for-worl...
The entertainment industry is by far the easiest way to tap into global discretionary income.
I use it several times a day just to change text in image form into text form so I can search it and the like.
It's built into chrome but they move the hidden icon about regularly to confuse you. This month you click the url and it appears underneath, helpfully labeled "Ask Google about this page" so as to give you little idea it's Google Lens.
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
It is far behind, and GPT hasn't exactly stopped growing either. Weekly active users, monthly visits... Gemini is nowhere near. They're comfortably second, but second is still well below first.
>ai overviews in search are super popular and staggeringly more used than any other ai-based product out there
Is it? How would you even know? It's a forced feature you cannot opt out of or not use. I ignore AI overviews, but would still count as a 'user' to you.
Search Traffic: https://x.com/Similarweb/status/2003078223135990246
Gemini - 1.4b visits - +14.4% MoM
Yeah, ChatGPT is still more popular, but this does not show Gemini struggling exactly.
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they've got the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.
Gemini is built into Android and Google search. People may not be going to gemini.google.com, but that does not mean adoption is low.
If they can achieve that they will cut off a key source of blood supply to MSFT+OAI. There is not much money in the consumer market segment from subscribers and entering the ad-business is going to be a lot tougher than people think.
https://searchengineland.com/nearly-all-chatgpt-users-visit-...
But even more importantly, it obviously isn’t losing money from advertisers to ChatGPT. You can look at their quarterly results.
But you cannot use it with an API key.
If you're on a workspace account, you can't have a normal individual plan.
You have to have the team plan with $100/month or nothing.
Google's product management tier is beyond me.
> With Chrome being the largest browser by market share, that's a powerful de facto default.
Where art thou, anti-trust enforcement... Absolutely no one besides ChromeOS users is forced to use Chrome.
>whereas OpenAI has a clear opportunity with advertising.
Personally, having "a clear opportunity with advertising" feels like a last ditch effort for a company that promised the moon in solving all the hard problems in the world.
1. Google books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all legally.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for AI.
It's just a case of too many companies having skin in OpenAI's game for it to be allowed to fail now.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
I’m sure all these AI labs have extensive data gathering, cleanup, and validation processes for new data they train the model on.
Or at least I hope they don’t just download the current state of the web on the day they need to start training the new model and cross their fingers.
It said: OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.
However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely have still done a lot of fine-tuning, RLHF, alignment and tool-calling improvements. All that stuff is training too. And it is totally fine; just look at the great results they got with Codex-high.
If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.
[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...
I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.
This isn't really accurate.
Firstly, GPT4.5 was a new training run, and it is unclear how many other failed training runs they did.
Secondly "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after gpt4o were all post-trained differently using reinforcement learning. That is a substantial expense.
Finally, it seems like GPT5.2 is a new training run - or at least the training cut off date is different. Even if they didn't do a full run it must have been a very large run.
At the very least they made GPT 4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be but they made a wrong scaling prediction, people simply weren't ready to pay that much money.
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
[1] Again c.f. fraud
Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.
I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.
Doubtful. This would be the very antithesis of the Silicon Valley way.
I use it in conjunction with Claude. I’ve gotten pretty good results using both of them in tandem.
However, as a matter of principle I prefer to self-host. I wonder if an upside of OpenAI imploding wouldn't be basement-level prices on useful chips? Ideally I want to run my LLM and train it on my data.
A lot of people now reach for ChatGPT by default instead of Google, even with the AI summaries. I wonder whether they just prefer the interface of the chat apps to Google that can be a bit cluttered in comparison.
I’m one of those people, and the reason for that is that Google’s AI summaries are awful more times than not. With ChatGPT I can (kind of) set how much “thinking” to do for each query and guide the model into producing better results via prompting.
(Adjacent to this is how crazy it was that Meta were accused of torrenting ebooks. Did they need them for the underlying knowledge? I can't imagine they needed them for natural language examples.)
I know this is the latest catastrophizing meme for AI companies, but what is it even supposed to mean? OpenAI failing wouldn't mean AI disappears and all of their customers go bankrupt, too. It's not like a bank. If OpenAI became insolvent or declared bankruptcy, their intellectual property wouldn't disappear or become useless. Someone would purchase it and run it again under a new company. We also have multiple AI companies, and switching costs are not that high for customers, although some adjustment is necessary when changing models.
I don’t even know what people think this is supposed to mean. The US government gives them money for something to prevent them from filing for bankruptcy? The analogy to bank bailouts doesn’t hold.
If you look at the financial crisis, the US government decided to bail out AIG, after passing on Bear Stearns, because big banks like Goldman Sachs and Morgan Stanley (and even Jack Welch's General Electric) all had huge counterparty risk with AIG.
Someone else put it succinctly:
"When a million-dollar company fails, it's their problem. When a billion-dollar company fails, it's our problem."
In essence, there's so much investment in AI that it's a significant part of US GDP. If AI falters, the entire stock market will feel it, and by extension, all Americans, no matter how detached from tech they are. In other words, the potential for another Great Depression.
In that regard, the government wants to avoid that. So they will at least give a small bailout to lessen the crash. But more likely (as seen with the Great Financial Crisis), they will likely supply billions upon billions to prop up companies that by all business logic deserved to fail. Because the alternative would be too politically damaging to tolerate.
----
That's the theory. These all aren't certain and there are arguments to suggest that a crash in AI wouldn't be as bad as any of the aforementioned crashes. But that's what people mean by "become too big to fail and get bailed out".
The stock market isn't rational; it's a room full of people talking loudly and moving between various tables.
All it takes is someone outside the room to shout something that triggers panic, and most of the people in the room will run for the exit.
And that's ignoring the dominoes of investors pulling out of other AI firms because OpenAI falters.
If they aren't dumb, why are they investing in MSFT now then if it's a bubble that's doomed to fail? And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression. (Keep in mind that we already had a ~20% drawdown in public equities during the interest rate hikes of 2022/2023 and the economy remained pretty robust throughout.)
>And even in the worst case scenario, a 10-15% decline in the S&P 500 won't trigger the next Great Depression
Only if you believe the 10% decline won't domino, and that the S&P 500 is insulated from the rest of the global economy. I wish I shared your optimism.
> and the economy remained pretty robust throughout.
Yeah and we voted the person who orchestrated that out. We don't have the money to pump trillions back in a 2nd time in such a short time. Something's gonna give, and soon.
So your hypothesis is that a 10% decline in the S&P 500 will trigger the next Great Depression, i.e. years of negative GDP growth and unemployment? I agree that it could cause a slight economic slowdown, but I don't think AI and tech stocks are a large enough part of the economy to cause a Great Depression-style catastrophe.
An expected outcome of an AI blowout is uncertainty: everyone holding onto their assets, credit recalls, plus interest rate hikes.
During the Great Depression it wasn't the stock market collapse that caused it as much as it was the credit crunch that followed. Prior to the blowout, people literally bought stocks on credit.
Yup. I won't say it's the only factor, nor the biggest. But I'm focusing on this topic and not on 40+ years of government economic abandonment of the working class. It's the straw that will break the camel's back.
That happened a long time ago! Microsoft already owns the model weights!
Yes, but with all stock growth being in AI companies, it would tank the market, for one. Secondly, all of those dollars they are using are backed by creditors who would face a default. Short of another TARP (likely IMO; the US NEEDS to keep pumping AI to compete with China), it could scare investors off too.
Plus, with the growth in AI affecting the overall makeup of the stock market, something like this hurts every American's 401k.
Citation is needed
It’s going to crash, guaranteed
What a silly calculation.
OpenAI’s customer base is global. Using US population as the customer base is deliberately missing the big picture. The world population is more than 20X larger than the US population.
It’s also obvious that they’re selling heavily to businesses, not consumers. It’s not reasonable to expect consumers to drive demand for these services.
I'd be willing to bet that, like many US websites, OpenAI's users are at least 60% American. Just because there's 20x more people out there doesn't mean they have the same exposure to American products.
For instance, China is an obvious one. So that's 35%+ of the population already mostly out of consideration.
>It’s also obvious that they’re selling heavily to businesses, not consumers.
I don't think a few thousand companies can outspend 200m users paying $200 a month. I won't call it a "mathematical impossibility", but the math also isn't math-ing here.
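Napkin math on why (the enterprise-side figures are assumptions I'm making up):

    # Consumer vs. enterprise annual spend, napkin version.
    consumers = 200e6          # the 200m users figure above
    consumer_monthly = 200     # the (generous) $200/month from the thread
    enterprises = 5_000        # assumed "a few thousand companies"
    contract = 1e6             # assumed $1M/year per enterprise contract

    print(f"consumers:   ${consumers * consumer_monthly * 12 / 1e9:.0f}B/yr")
    print(f"enterprises: ${enterprises * contract / 1e9:.0f}B/yr")
    # ~$480B/yr vs ~$5B/yr: even at much weaker consumer numbers,
    # a few thousand contracts don't close that gap.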
Since when is English everyone's primary language?
If it happens in the next 3 years, tho, and Altman promises enough pork to the man, it could happen.
Not that I have an opinion one way or another regarding whether or not they'd be bailed out, but this particular argument doesn't really seem to fit the current political landscape.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
MS Office has about 345 million active users. Those are paying subscriptions. IMHO that's roughly the total addressable market for OpenAI among non-coding users. Coding users are another 20-30 million.
If OpenAI can convert double-digit percentages of those to $20 and $50 per month subscriptions by delivering AI that works well enough, they should be raking in cash by the billions per month, adding up to something close to the projected 2030 cash burn per year. That would be just subscription revenue. There is also going to be API revenue. And those expensive models used for video and other media creation are going to be indispensable for media and advertising companies, which is yet more revenue.
The Office-sized market at $20/month is worth about $82 billion per year in subscription revenue. Add a few premium tiers at $50/month and $100/month, and the projected $130 billion per year of cash burn in 2030 suddenly seems quite reasonable.
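Checking that arithmetic (user count and prices are from my comment above; these are full-conversion ceilings, so actual revenue scales down with the conversion rate):

    # full-conversion ceilings for the Office-sized market (assumed figures)
    office_users = 345_000_000
    tam_20 = office_users * 20 * 12    # $20/mo tier:  ~$82.8B/yr
    tam_50 = office_users * 50 * 12    # $50/mo tier: ~$207.0B/yr
    print(f"${tam_20 / 1e9:.1f}B/yr at $20/mo, ${tam_50 / 1e9:.1f}B/yr at $50/mo")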
I've been quite impressed with Codex in the last few months. I only pay $20/month for it currently. If that goes up, I won't lose sleep over it, as it's valuable enough to me. Most programmers I know are on some paid subscription to it, Anthropic's Claude, or similar, and quite a few spend quite a bit more than that. My ChatGPT Plus subscription feels like really good value currently.
Agentic tooling for business users is currently severely lacking in capability. Most of the tools are crap. You can get models to generate text, but forget about getting them to format that text correctly in a word processor. I'm constantly fixing bullets, headings, and whatnot in Google Docs for my AI-assisted writing. Gemini is close to ff-ing useless with both the text and the formatting.
But I've seen enough technology demos of what is possible to know that this is mostly a UX and software development problem, not a model quality problem. It seems companies are holding back from fully integrating things mainly for liability reasons (I suspect). But unlocking AI value like that is where the money is: something as useful as Codex but for business usage, with full access to your mail, drive, spreadsheets, slides, word processors, CRMs, and whatever other tools you use, running in YOLO mode (which is how I use Codex in a virtual machine currently, --yolo). That would replace a shit-ton of manual drudgery for me. It would be valuable to me and lots of other users. Valuable as in "please take my money".
Currently, doing stuff like this is very scary because it might make expensive/embarrassing mistakes. I do it for code because I can contain the risk to the VM. It actually seems to be pretty well behaved; the VM is just there to make me feel good. It could do all sorts of crazy shit, but it mostly just does what I ask it to. Clearly the security model around this needs work and instrumentation. That's not a model training problem, though.
Something like this for business usage is going to be the next step in agent-powered utility that people will pay for, at MS Office levels of user numbers and revenue. Google and MS could do it technically, but they have huge legal exposure via their existing SaaS contracts and seem scared shitless of their own lawyers. OpenAI doing something aggressive in this space in the next year or so is what I'm expecting.
Anyway, the bubble predictors seem to be ignoring the revenue potential here. Could it go wrong for OpenAI? Sure, if somebody else shows up and takes most of the revenue. But I think we're past the point where that revenue looks unrealistic. Five years is a long time for them to get to $130 billion per year in revenue; ChatGPT did not exist five years ago. OpenAI can mess this up by letting somebody else take most of that revenue, but who? Google, maybe, but I'm underwhelmed so far. MS seems to want to but is unable to. Apple is flailing. Anthropic seems increasingly like an also-ran.
There is a hardware cost bubble, though. I'm betting OpenAI will get a lot more bang for its buck in hardware terms by 2030. It won't be Nvidia taking most of that revenue; they'll have competition and enter a race to the bottom on hardware cost. If OpenAI is burning $130 billion per year, it will probably be getting a lot more compute for it than currently projected. IMHO that's a reasonable cost level given their total addressable market. They should be raking in hundreds of billions by then.
> There is a hardware cost bubble, though. I'm betting OpenAI will get a lot more bang for its buck in hardware terms by 2030. It won't be Nvidia taking most of that revenue.
Whoever has the most compute will ultimately be the winner. This is why these companies are projecting hundreds of billions in infrastructure spend. With more compute you can train better models, serve them to more users, and serve them faster. The more users, the more compute you can buy. It's a runaway cycle. We're seeing only 3 (4 if you count Meta) frontier LLM providers left in the US market.
Nvidia's margins might come down by 2030; they won't stay in the 70s. But the overall market can expand faster than Nvidia's margins shrink, so Nvidia could be more profitable in 2030 despite a lower margin and a lower market share.
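Toy illustration, where every number is an assumption rather than a forecast:

    # market growth can outpace margin and share compression
    profit = lambda market, share, margin: market * share * margin
    p2025 = profit(200e9, 0.90, 0.70)   # assumed $200B market: $126B profit
    p2030 = profit(600e9, 0.60, 0.45)   # assumed $600B market: $162B profit
    print(p2025 / 1e9, p2030 / 1e9)     # more absolute profit, thinner margins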
They need a better marketing strategy.
Why would you want my money to be used to build a datacenter that won't benefit me? I might use an LLM once a month; many people never use one.
Let the ones who use it pay for it.
No chance they're going to take risks to share that hardware with anyone given what it does.
The scaled-down version of El Capitan, called Tuolumne, is used for non-classified workloads, some of which are proprietary, like drug simulation. Not long ago it was nevertheless still a top-ten supercomputer.
Like OP, I also don't see why a government supercomputer does it better than hyperscalers, CoreWeave, neoclouds, et al., who have put in a ton of capital even compared to the government. For workloads where institutional continuity is extremely important, like weather (and maybe, one day, a public LLM model or three), maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders. This is not a world of juicy margins at all; rather, it's a game of chicken with negative gross margins.
these things constitute public goods that benefit the individual regardless of participation.
Uncanny really.
What is the justification for considering data centers capable of running LLMs to be a public good?
There are many counter examples of things many people use but are still private. Clothing stores, restaurants and grocery stores, farms, home appliance factories, cell phone factories, laundromats and more.
Why not an LLM datacenter if it also offers information? You could say it's the public library of the future maybe.
This is not at all true of generative AI.
OpenAI asks for 1M GPUs for a month, Anthropic asks for 2M, the government data center only has 500,000, and a new startup wants 750,000 as well.
Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.
Now the most successful AI lab is the one that's best at pitching the government for additional resources.
UPDATE: See comment below for the answer to this question: https://news.ycombinator.com/item?id=46438390#46439067
It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.
Edit: I meant to say oversubscribed, not overprovisioned. There are far more jobs in the queue than can be handled at once.
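One such sharing scheme is proportional fair-share: when the queue is oversubscribed, every request gets scaled down by the same factor. A minimal sketch, using the demand numbers from the hypothetical scenario above:

    # proportional fair-share under oversubscription (all numbers hypothetical)
    capacity = 500_000
    requests = {"OpenAI": 1_000_000, "Anthropic": 2_000_000, "startup": 750_000}
    scale = min(1.0, capacity / sum(requests.values()))
    grants = {who: int(want * scale) for who, want in requests.items()}
    print(grants)   # each lab gets ~13% of what it asked for; sum <= capacity

Real allocators (like INCITE below) layer peer review and priorities on top, but the mechanics of rationing a fixed pool are not new.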
https://www.ornl.gov/news/doe-incite-program-seeks-2026-prop...
> The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program has announced the 2026 Call for Proposals, inviting researchers to apply for access to some of the world’s most powerful high-performance computing systems.
> The proposal submission window runs from April 11 to June 16, 2025, offering an opportunity for scientific teams to secure substantial computational resources for large-scale research projects in fields such as scientific modeling, simulation, data analytics and artificial intelligence. [...]
> Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora and Frontier and 100,000 to 250,000 node-hours on Polaris, with the possibility of larger allocations for exceptional proposals. [...]
> The selection process involves a rigorous peer review, assessing both scientific merit and computational readiness. Awards will be announced in November 2025, with access to resources beginning in 2026.
Not sure OpenAI/Anthropic etc. would be OK with a six-month gap between application and getting access to the resources, but this does indeed demonstrate that allocating government supercomputing resources is a previously solved problem.
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The question of public and private distinctions in these various schemes is very interesting and, IMO, underexplored, especially when you consider how these private LLMs are trained on public data.
people have no idea about how big the military and defense budgets worldwide are next to any other example of a public budget.
throw as many pie charts out as you want; people just can't see the astronomical difference in budgets.
I think it's based on how the thing works: a good defense works until it doesn't, while the other systems/budgets in place fail a bit more gracefully. That asymmetry produces an irrationality in people, which in turn produces windfalls of cash availability.
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
The government has better things to do. Geopolitics, trade, transportation, resources, public health, consumer safety, jobs, economy, defense, regulatory activities, etc.
I am not saying OpenAI is Amazon, but I am saying I have seen this before, where the masses are going "oh, business is bad, losses are huge, where is the path to profitability..."
I do know that in the late aughts, people were writing stories about how Amazon was a charity run on behalf of the American consumer by the finance industry.
That being said, if I were Sam Altman I'd also be stocking up on yachts, mansions, and gold-plated toilets while the books are still private. If there's $10bn a year in outgoings, no one's going to notice a million here and there.
2026: US AI companies pump stocks -> market correction -> taxpayer bailout
Mark my words. OpenAI will be bailed out by US taxpayers.
Banks get bailed out because if confidence in the banking system disappears and everyone tries to withdraw their money at once, the whole economy seizes up. And whoever is Treasury Secretary (usually an ex Wall Street person) is happy to do it.
I don't see OpenAI having the same argument about systemic risk or the same deep ties into government.
Banks needed a bailout to keep lending money. The auto industry needed one to keep employing a lot of people. AI doesn't employ that many.
I just don't believe a bailout can happen before it's too late to be effective in saving the market.
The same can happen now on the side of private credit that gradually offloads its junk to insurance companies (again):
As a result, private credit is on the rise as an investment option to compensate for this slowdown in traditional LBO (Figure 2, panel 2), and PE companies are actively growing the private credit side of their business by influencing the companies they control to help finance these operations. Life insurers are among these companies. For instance, KKR’s acquisition of 60 percent of Global Atlantic (a US life insurer) in 2020 cost KKR approximately $3 billion.
https://www.imf.org/en/Publications/global-financial-stabili...
Their cost to serve each request is roughly 3 orders of magnitude higher than that of conventional websites.
While it is clear people see value in the product, we only know they see value at today’s subsidized prices. It is possible that inference prices will continue their rapid decline. Or it is possible that OAI will need to raise prices and consumers will be willing to pay more for the value.
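To make "three orders of magnitude" concrete (both per-request costs here are rough assumptions, not measurements):

    # rough cost-per-request comparison, in micro-dollars
    web_request_cost = 10        # assumed: conventional dynamic web request
    llm_request_cost = 10_000    # assumed: mid-sized LLM completion
    print(llm_request_cost // web_request_cost)   # 1000x, i.e. ~10^3 gap

Whether that gap persists depends on the trajectory of inference prices, which is exactly the open question.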
And the backstop on asset prices at the expense of the currency's purchasing power.
The reason people are so skeptical is that OpenAI is applying the standard startup justification for big spending to a business model where it doesn't seem to apply.
No, inference is really cheap today, and people saying otherwise simply have no idea what they are talking about. Inference is not expensive.
> Even at $200 a month for ChatGPT Pro, the service is struggling to turn a profit, OpenAI CEO Sam Altman lamented on the platform formerly known as Twitter Sunday. "Insane thing: We are currently losing money on OpenAI Pro subscriptions!" he wrote in a post. The problem? Well according to @Sama, "people use it much more than we expected."
Altman also said 4 months ago:
> Most of what we're building out at this point is the inference [...] We're profitable on inference. If we didn't pay for training, we'd be a very profitable company.
https://simonwillison.net/2025/Aug/17/sam-altman/
a spot on the iOS home screen? yes.
infrastructure to serve LLM requests? no.
good LLM answers? no.
the economist can't tell the difference between scarcity and real scarcity.
it is extremely rare to buy a spot on the iOS home screen, and the price for that is only going up - think of the trend of values of tiktok, whatsapp and instagram. that's actually scarce.
that is what openai "owns." you're right, #5 app. you look at someone's home screen, and the things on it are owned by 8 companies, 7 of which are the 7 biggest public companies in the world, and the 8th is openai.
whereas infrastructure does in fact get cheaper. so does energy. they make numerous mistakes - you can't forecast retail prices Azure is "charging" openai for inference. but also, NVIDIA participates in a cartel. GPUs aren't actually scarce, you don't actually need the highest process nodes at TSMC, etc. etc. the law can break up cartels, and people can steal semiconductor process knowledge.
but nobody can just go and "create" more spots on the iOS home screen. do you see?
I see Google doing to OpenAI today what Microsoft did to Netscape back then, using their dominant position across multiple channels (browser, search, Android) to leverage their way ahead of the first mover.
A small anecdote: when ChatGPT went down a few months ago, a lot of young people (especially students) just waited for it to come back up. They didn't even think about using an alternative.
This "moat" that OpenAI has is really weak
GPT5.2 Codex is the best coding model right now in benchmarks. I use it exclusively now.
I'd love to see the rationale that OpenAI (not "AI" everywhere) is more valuable than chocolate globally.
... so crash early 2026?
Even as an enormous chocolate lover (in all three senses) who eats chocolate several times a week, I'd probably choose AI instead.
OpenAI has alternatives, but also I do spend more money on OpenAI than I do on chocolate currently.
Maybe instead of the chocolate market, look at the global washing machine market of $65 billion.
I’d rather give up both AI and chocolate than my washing machine.
Is it necessary to the point you want to make?
You can just point to behavior of a given entity, such as to conclude it's untrustworthy, without the problematic area of armchair psychoanalysis.
But it might mean that LLMs don't really improve much from where they are today, since there won't be the billions of dollars to throw at training for small incremental improvements that consumers mostly don't care to pay anything for.
What's interesting is the strategic positioning. They need to maintain leadership while somehow finding a sustainable business model. The API pricing already feels like it's in a race to the bottom as competition intensifies.
For startups building on top of LLM APIs, this should be a wake-up call about vendor lock-in risks. If OpenAI has to dramatically change their pricing or pivot their business model to survive, a lot of downstream products could be impacted. Diversifying across multiple model providers isn't just good engineering - it's business risk management.
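Something like the following, as a sketch (the interface is hypothetical, not any vendor's actual SDK):

    # hypothetical provider-agnostic wrapper: callers never touch a vendor SDK,
    # so one provider repricing or pivoting doesn't ripple downstream
    from typing import Callable, Dict, List

    Completer = Callable[[str], str]   # prompt -> completion text

    def make_client(providers: Dict[str, Completer], order: List[str]) -> Completer:
        def complete(prompt: str) -> str:
            errors = {}
            for name in order:             # try providers in preference order
                try:
                    return providers[name](prompt)
                except Exception as e:     # outage, quota, price-driven removal...
                    errors[name] = e
            raise RuntimeError(f"all providers failed: {errors}")
        return complete

The real work is normalizing prompts and outputs across models, but even a thin layer like this keeps the switching cost from being a rewrite.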
AI is at 1% of total US GDP right now.
We have 6x more to go.