My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else. There are no network effects for sure, but people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere. It's understandable that it would be hard to get the majority of these free users to pay for anything, and hence advertising seems a good bet. You couldn't think of a more contextual way of plugging a paid product.
I think OpenAI has a better chance of winning on the consumer side than everyone else. Of course, whether that counts for much up against hundreds of billions of dollars in capex remains to be seen.
I might have sessions I revisit over a few weeks, but nothing longer than that. The conversations feel as ephemeral as the code produced. Some tiny fraction of it might persist long term, but most of it is already forgotten and replaced by lunchtime.
Cultural defaults seem unchangeable, but then suddenly everyone knows, and knows that everyone knows, that OpenAI is passé.
OpenAI has a real chance to blow their lead, ending up in a hellish no-man's land by trying to please everyone: Not cool enough for normies, not safe enough for business, not radical enough for techies. Pick a lane or perish.
Not owning their own infrastructure, and being propped up by financial / valuation tricks are more red flags.
Being a first mover doesn't guarantee getting the golden goose: remember MySpace.
Literally every industry has examples of businesses that don't excel at anything and still do well enough to carry on. In fact, in most industries, it's actually hard to see any business that's clearly leading on any specific front because as soon as it becomes an obvious factor in gaining market share the competing businesses focus on that area as well.
MySpace, ICQ, Altavista, Dropbox, Yahoo, BlackBerry, Xerox Alto, Altair 8800, CP/M, WordStar, VisiCalc, the list is very long.
> people have hundreds or thousands of conversations on these apps that can't be easily moved elsewhere.
This obstacle looks familiar. Except these aren't conversations in the traditional sense. Yes, there's the history of prompts and responses exchanged. But the threads don't build on each other: there's no cross-conversational memory of the kind you'd have in a human relationship. Even within a conversation it's mostly stateless, with the full context history sent each time as input.
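The stateless pattern described above can be sketched concretely; the function names and message shape below are illustrative, not any provider's actual API:

```python
# Sketch of the stateless chat pattern: the "conversation" exists only
# client-side, and every request carries the full history as input.
def send_turn(history, user_message, model_fn):
    """Append the user message, send the whole history, append the reply."""
    history = history + [{"role": "user", "content": user_message}]
    reply = model_fn(history)  # model sees everything, remembers nothing
    return history + [{"role": "assistant", "content": reply}]

# Stub model: reports how many messages it was shown, demonstrating that
# it receives the full transcript on each call rather than server state.
def stub_model(history):
    return f"I was shown {len(history)} messages"

h = []
h = send_turn(h, "hello", stub_model)
h = send_turn(h, "what did I say?", stub_model)  # history grows every turn
```

Every turn resends everything, which is why there is no server-side "relationship" to migrate: the client-side transcript is the whole state.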
So there's no real data or network effect moat - the moat is all in model quality (which is an extremely competitive race) and harness quality (same). I just don't think there's any real switching cost here.
I use OpenAI a lot on the paid plan via the UI. It now knows absolutely loads about me and seems to have a massive amount of cross conversational memory. It's really getting very close to what you'd expect from a human conversation in this regard.
Sure the model itself is still stateless, and if you use the API then what you say is true.
But they are doing so much unseen summarisation and longer-context building behind the scenes in the webapp that what you see in the current conversation history is just a fraction of what actually gets sent to the model.
This would feel like a switching cost for people who use the system that way.
I think you're underestimating how fickle consumers are, and how much their choices are based on fashion and emotion. A couple more of these, and OpenAI will find itself relegated to the kids' table with Grok and Perplexity. https://www.technologyreview.com/2025/08/15/1121900/gpt4o-gr...
Google and Apple just need to push their AI assistants hard enough, and most of the moat OpenAI has will be gone.
In comparison, Claude's name is very bad; it just doesn't sound right, and people might mishear me when I say it. I never say "Claude" when talking to others, especially non-technical people, and instead say "ChatGPT" even though I am using Claude exclusively.
Google has another problem: they advertise their models as separate products. There is Gemini, and there is Nano Banana, and also Nano Banana Pro. But they are all somehow under the same product, which is still called Gemini. I understand the distinction, but I am sure many non-technical people find it confusing.
I don't know about elsewhere, but around here ordinary people all say "Chatty" nowadays, and even those writing the proper name quite often misspell the "gpt" part in chat.
They intentionally chose a more bland sounding name, as, I assume, they wanted to emphasise the "safe" nature compared to their competitors.
As more information comes out about OpenAI, people may choose to move for other reasons, such as:
- OpenAI adding ads
- OpenAI's president donating millions to a MAGA PAC
- OpenAI getting closer to the US military while Anthropic stands its ground and rejects them
- OpenAI's recent products not being at the top of the benchmarks
The choice is yours.
Anecdote: My aunt was talking about how she had a conversation with ChatGPT about how bad OpenAI was, and the AI said "we need regulations", and that seemed to satisfy her somehow.
For myself, I use LLMs daily and I would even say a lot on some days and I _did_ pay the 20€/mo subscription for ChatGPT, but with the latest model I cannot justify that anymore.
4o was amazingly good. Even if it had parasocial issues with some people, it actually did what I expect an LLM to do. Now the quality of 5.whatever has gone drastically down: it no longer searches the web for things it doesn't know, but guesses instead.
Even worse is the tone it uses: "Let's look at this calmly" and other repeated sentences are just off-putting and make the conversation feel like the LLM thinks I am constantly about to kill myself, which is not what I want from my LLM.
Don't underestimate advertising. No one pays for Facebook or Google Search. Yet the ad business, with a couple billion users, seems profitable enough to fund frontier LLM research and inference infrastructure as a side gig in these companies. Google only rushed out AI Overviews because they saw ChatGPT eating their market share in information retrieval, and Zuck is literally panicking about the fact that users share more personal details with OpenAI than on his doomscrolling attention sinks.
However, I believe an ad still influences you subconsciously as long as it is in your sight line.
I wouldn't be surprised if there is a lot of investigation into subtly slipping advertising into LLM responses, the way Korean dramas put product placement right in the storyline (Subway, bbq chicken, beverages, makeup, etc.).
You can't click on the Budweiser logo when watching a Super Bowl ad. But if you sit in your ChatGPT window all day, it's probably worth it for advertisers to build familiarity with the brands they advertise.
They could succeed where Alexa failed. A free user can even bring in more than a paid user on some platforms: on Spotify, apparently a large chunk of free users generate more income through ads than they would if they paid.
I was researching CAVA (due to the crazy earnings announcement yesterday) and it was displaying some nice links to the website, all suffixed with ?utm=chatgpt
So, it has begun!
So I'm curious to understand: What are the discussions like that people go back to and would lose if they moved to another platform?
Do you have the memory feature disabled? I have the feeling this in particular is doing absolutely loads behind the scenes, e.g. summarising all conversations and adding additional hidden context to every request.
I can start a new chat in the UI right now, ask it what my job is, what my current project is, how many kids I have, what car I drive etc. It'll know the answer already.
I think it's this conversation history - or maybe better yet if we think of it as this "relationship" - that people are saying is going to make it hard to move.
People used to suggest this about MySpace.
Sure it's 'sticky', at least a little, but it's not a moat. A moat is a showstopper: it means they own you.
Would you?
First I would have to walk 10 miles into town. Then I would have to locate a purveyor of goods that carried Pepsi-Cola products...
Then I reckon we would spend a forty-minute dickering over price.
And finally trudging back home with my Pepsi product in tow.
Why, I'd be lucky to accomplish this herculean task in the very same evening.
Friendster, MySpace, Facebook
Netscape, IE, Chrome
ICQ, AIM, MSN Messenger, a million other chat apps
First mover advantage doesn't last long
Very high chance that the winner in five years is a company that does not yet exist
The legislative angle taken by companies like Anthropic is that they will provide the censorship gatekeeping infrastructure to scan all user-generated content that gets posted online for "appropriateness", guaranteeing AI providers a constant firehose of novel content they can train on and get paid for the free training. AI companies will also get paid to train on videos of everyone's faces and IDs.
As for why Blackburn supports KOSA[3]:
> Asked what conservatives’ top priorities should be right now, Senator Blackburn answered, “protecting minor children from the transgender [sic] in this culture and that influence.” She then talked about how KOSA could address this problem, and named social media platforms as places “where children are being indoctrinated.”
If Anthropic, the PACs it supports and Blackburn get their way with KOSA, the end result will be that anything posted on the internet will be able to be traced back to you. Web platforms will finally be able to sell their userbases as identifiable and monetizable humans to their partners/advertisers/governments/facial recognition systems/etc. AI companies will legally enshrine themselves as the official gatekeepers and censors of the internet, and they will be paid to train on the totality of novel human creativity in real-time.
That will be their moat.
[1] https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-t...
[2] https://publicfirstaction.us/news/public-first-action-and-de...
[3] https://www.them.us/story/kosa-senator-blackburn-censor-tran...
> My wife, for example, uses ChatGPT on a daily basis, but has found no reason to try anything else.
Is she paying for it? Because as we have seen repeatedly in the past, paid products wither and die when Microsoft bundles a default replacement.
You need to provide a really good reason why this time it's different.
For chat apps, good enough is good enough. For something as universally useful and easy to use as ChatGPT, the bar is higher. I won't comment on the financial feasibility, but whatever Microsoft has put out has been a complete flop even when free, making ChatGPT's $8 subscription seem worth it in comparison.
That was my point - a lot of superior products were eaten by poor bundled replacements.
Last I checked, Copilot has more users than ChatGPT, simply because users reach it from within Excel, Word, Outlook and Teams, without even knowing that they are using Copilot. It's bundled into Windows.
Right now, Copilot is more useful to users than ChatGPT because it is embedded into their workflows.
Ads might change that. If we know anything, nobody beats Google with ad based monetization. OAI is absolutely correct to be scared.
I just asked it to build me a searchable, indexed, downloaded version of all my conversations. One shot, one HTML page, everything exported (JSON files).
I’m sure I could ask Claude to import it. I don’t see the moat.
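A rough sketch of what that one-page export might look like; the conversation schema here (a title plus role/content messages) is assumed for illustration, not ChatGPT's actual export format:

```python
import html
import json

def build_index(conversations):
    """Render a list of {"title", "messages": [{"role", "content"}]} dicts
    into one self-contained HTML page, searchable with find-in-page."""
    parts = ["<html><body><h1>Conversation archive</h1>"]
    for conv in conversations:
        parts.append(f"<h2>{html.escape(conv['title'])}</h2>")
        for msg in conv["messages"]:
            parts.append(f"<p><b>{html.escape(msg['role'])}:</b> "
                         f"{html.escape(msg['content'])}</p>")
    parts.append("</body></html>")
    return "\n".join(parts)

# A tiny stand-in for the exported JSON files.
convs = json.loads('[{"title": "Trip planning", "messages":'
                   ' [{"role": "user", "content": "Best time to visit Kyoto?"}]}]')
page = build_index(convs)  # write this string to archive.html and open it
```

Since the whole archive is just JSON plus generated HTML, importing it into any other assistant really is a copy-paste problem, not a migration problem.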
Honest question: I have this issue a lot with AI claims. Nobody verifies the output.
How bad is it if, out of 200+ conversations, a couple are not exported correctly? Not very, honestly. If I verify some of them and they are OK, I see no reason to keep verifying all of them.
It's not useless, although it used to be more useful than it is now.
That's ok, we use ChatGPT only for coding. We should be good, right? Umm, no. They already explicitly expressed the intention to take a percentage of your revenue if you shipped something with ChatGPT, so even the tech guys aren't safe.
"As intelligence moves into scientific research, drug discovery, energy systems, and financial modeling, new economic models will emerge. Licensing, IP-based agreements, and outcome-based pricing will share in the value created. That is how the internet evolved. Intelligence will follow the same path."
"Intelligence will follow the same path."
https://openai.com/index/a-business-that-scales-with-the-val...
So yes, OpenAI has a better chance of winning on the consumer side than anyone else. But that's not necessarily a good thing (and the OpenAI fanboys will hate me for pointing this out).
Wasn't there already a ruling that LLM output is not protected by copyright?
Agentic development and claw style personal assistants are where the dough is at.
OpenAI has by far the strongest brand and user base. It's not even close.
And when it comes to the product, they've been locked in these last few months, it seems. The coding models are no longer behind Anthropic's, and their general-use chat offering has always been up there at the top.
But why would you want to?
You can just leave them there and slowly start new conversations on another platform.
My mum, and probably nearly a billion other users, could probably imagine step 1 but not connect to step 2 beyond copy-paste. Most people are still out here sending screenshots of their phones instead of just copying a link or hitting "share" on the image.
Take Ozempic as an example. The word is already part of the culture, but the company is losing badly to Lilly. Novo Nordisk is projecting revenue DECLINE while Eli Lilly is still growing massively. I am not even sure people know GLP-1 drugs other than Ozempic. I don't even remember the name of Lilly's drug.
I think people should not underestimate the market. It's a dynamic game where engineering intuition might not be enough
The problem with a moat in the consumer space is it depends on brand and marketing. OpenAI came into this world as a tech novelty, then an amazing tech tool, then a household name.
But… can they compete with massive consumer companies like Apple, Google, etc? In the long run?
There’s no technical reason they can’t. The question is whether they have consumer marketing in their blood. The space doesn’t have a lot of network effects, so it’s not like early Facebook where you had to be on it because everyone was.
Not saying they’ll fail, just saying it would be a significant challenge to be a hybrid frontier model / consumer product company.
I wonder what percentage of its users know what the GPT stands for, or even thought about it for a second?
ChatGPT is generic (as in, no prior meaning attached, except for the few people in the world who know what GPT stands for). It's simple: even a non-English speaker can say it easily, and you don't need to be a native speaker to know how to pronounce it (a difficult concept for a native English speaker to grok).
These features make for a good name.
So I argue that ChatGPT is indeed a good name (as good as Google was).
I personally prefer Claude models for all my work. If I were them I would be very worried. They are never giving us AGI, and I am skeptical they are worth $0.5 trillion. Their cash burn is insane. Once ads and price hikes come, people will migrate to companies that can still afford to subsidize (like Google).
Plus I heard they lowered projections recently? Sam honestly comes off as a grifter.
But I have noticed that everyone seems to be using ChatGPT as the generic term for AI. They will google something and then refer to the Gemini summary as "ChatGPT says...". I tried to find out what model/version one of my friends was using when he was talking about ChatGPT and it was "the free one that comes with Android"... So Gemini.
It turned out the only reason for ChatGPT was that it is free for small enough volume usage. My suggestion to see what Claude had to say instead was met with "huh, you have to pay for it?". It's not like these are people who can't afford $20 per month for a subscription, but it might be that these assistants aren't even worth that for typical "normie" use cases.
I would guess OAI has no moat or stickiness beyond what governments and private companies will do to keep it afloat through equity and circular financing. Good enough AI is all most need, and they need it at the cheapest cost basis possible with the most convenient access.
Google will probably win on most of these fronts unless a coalition is formed to actively fight Google at the business/government level. But, absent that, it will win out over OAI, and OAI will probably bleed to death trying to become profitable... whenever that happens. You'll likely see their talent and corresponding salaries shrink massively along this journey.
Even in the context of the original quote the price is only "irrational" in the eyes of the person trying (and failing) to play the market. "But you can't do that, that doesn't make any sense!" spoken by a person who has failed to fully grasp the situation.
But you can bet there was more economic foresight going on at Google than OpenAI.
ChatGPT has become the AI verb, and in the consumer space it is not getting dethroned.
Gemini is the only real competitor to OpenAI in the consumer space: they already have the consumer eyes on their products and they have the financials to operate at a loss for years.
They are well positioned to fight for the market
OpenAI has the stickiness of MSN News or MS Teams. Your wife uses ChatGPT on a daily basis, but is she paying for it? If they charge her $0.99/mo, will she not look at alternatives? If she gets two or three bad responses from ChatGPT in a row, will she not explore alternatives to see if there is something better? Does she not use Google? If she does, she is already interacting with Gemini every day via their AI Overviews.
OpenAI has a first-to-market advantage, not a moat as you think. They can absolutely dominate the market if they stay on top of their game. eBay was the main online shopping network; they had that advantage; they were even the ones that made PayPal a thing! But they're relatively little used now: better alternatives crushed them.
Amazon was the first-to-market with cloud services, they didn't get worse in any significant way, but their market share is not as great as it used to be, Azure has gained decent ground on them. 10 years ago the market share break down was 31/7/4, now it is 28/21/14 for AWS/Azure/GCP respectively.
For OpenAI to survive it needs most of the market share; if it gets only a third, for example, the AI industry on its own needs to be a $1T+ industry. Over the past 10 years, total revenue (not profit) for AWS has been $620B, and it just made $128B in revenue (its highest) last year. OpenAI needs to make in profits (not revenue) what AWS made last year in revenue by 2029 just to break even. And if it only manages to break even by then, it needs profits exceeding the revenue AWS attained over its entire lifetime so far. It's far easier to switch LLM models than cloud providers, too!
Their only remote way of survival, I hate to say it, is going the way of Palantir and doing dirty things for governments and militaries. They need a cash-cow client like that, one no competitor can take away. And even then, being US-based, I don't think any military outside the US is insane enough to use OpenAI at all, due to geopolitics. Even in sectors like education, Google (via Chromebooks) is more likely to create dependence than Microsoft via OpenAI, since somehow they're more open to arbitrary apps due to historical anti-trust suits.
I can see a somewhat far-fetched argument being made for their survival, but only on thin threads and excellent execution. I can't see how they actually survive competition. They're using the Azure strategy for market share, banking on AI being so ubiquitous that the existing vendor-lock-in mindset will serve as a moat. They'll need to be much more profitable than AWS in about 1/5th of the time. Their product is comparable to (and in Azure, literally is) one of many cloud service offerings, as opposed to an entire cloud provider, and their costs are huge, needing-their-own-data-centers huge. They need to overcome those costs and, on top of that, reach >$125B in revenue in about 2 years!!
My hunch is that in five years we'll look back and see current OpenAI as something like a 1970's VAX system. Once PCs could do most of what they could, nobody wanted a VAX anymore. I have a hard time imagining that all the big players today will survive that shift. (And if that particular shift doesn't materialize, it's so early in the game; some other equally disruptive thing will.)
* Even if an open-weight model appears on Hugging Face today exceeding SOTA, given my extensive experience with a wide variety of model sizes, I would find it highly surprising if the "99% of use cases" could be served by a <100B-parameter model.
* Meanwhile: I pulled up Claude to look into consumer GPU VRAM growth rates; median consumer VRAM went from 1-2GB in 2015 to ~8GB in 2026, roughly doubling every 5 years; the top end isn't much better, just ahead by about 2 cycles.
* Putting aside current RAM sourcing issues, it seems very unlikely even high-end prosumers will routinely have >100GB VRAM (= the ability to run a quantized SOTA 100B model) before ~2035-2040.
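Taking the comment's own figures at face value (8GB median in 2026, doubling every 5 years, top end about 2 cycles ahead), the projection can be checked with a few lines; the numbers are the comment's assumptions, not measured data:

```python
def year_reaching(start_gb, start_year, target_gb, doubling_years=5):
    """Year when capacity first reaches the target, on a clean doubling cadence."""
    gb, year = start_gb, start_year
    while gb < target_gb:
        gb *= 2
        year += doubling_years
    return year

median_year = year_reaching(8, 2026, 100)   # median consumer VRAM >= 100GB
top_end_year = median_year - 2 * 5          # top end runs ~2 cycles ahead
```

On these assumptions the median hits 100GB around 2046 and the top end around 2036, consistent with the ~2035-2040 estimate for high-end prosumers.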
I almost wonder if we need some sort of co-op for training and another for hosted inference
Given that a lot of the R&D in China is state sponsored that also seems to be a good pawn in US-China relations.
phi4-mini-reasoning took the same prompt and bailed out because (at least according to its trace) it interpreted it as meaning "can't have a, e, i, o, or u in the name".
Local is the only inference paradigm I'm interested in, but these things have a way to go.
These kinds of parlor tricks aren't interesting. Just because a model can or can't list animals with or without some letters in their names doesn't mean anything, especially since the model doesn't "think" in English: it just gives you the answer after translating it to English.
These are funny, like the weird stuff you can do with JavaScript by combining special characters, but it doesn't really mean anything in the grand scheme of things. Like JavaScript, these models, despite their specific flaws, still continue to deliver value to the people using them.
Lots of local AI use cases, I think, are similarly solvable once local models get good at tool use and have the proper harness.
cat /usr/share/dict/words | print_if_mammal | grep -v 'e'
but I don't know of a good way to incorporate an LLM into a pipeline like that (I know there's a Python API). What I'm actually interested in is "is this the name of a mammal?", but I don't know of the equivalent of a quiet "batch mode", at least for ollama (and of course there's the question of performance).
I guess ultimately I would want to say "write a shell utility that accepts a line from standard input and prints it to standard output if that is the name of a mammal", and then use that utility in that pipeline. Or really to have an llmfilter utility that lets you do something like
cat /usr/share/dict/words | llmfilter "is this a mammal?" | grep -v "e"
and now that I've said that I think I'll try to make one.
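A minimal llmfilter along those lines might look like this; the yes/no predicate is stubbed with a word list here, but in practice it would shell out to a local model (e.g. via `ollama run`) and check the reply for a "yes":

```python
def llmfilter(lines, question, ask):
    """Keep each input line for which ask(question, line) answers yes.
    `ask` abstracts the model call, so the filter itself stays a plain
    line-in/line-out building block for shell-style pipelines."""
    return [line for line in lines if ask(question, line)]

# Stub predicate standing in for a local-model yes/no call.
KNOWN_MAMMALS = {"aardvark", "bat", "cow", "yak"}
def stub_ask(question, line):
    return line.strip().lower() in KNOWN_MAMMALS

lines = ["aardvark\n", "carrot\n", "yak\n"]
kept = llmfilter(lines, "Is this the name of a mammal?", stub_ask)
```

Wrapped in a script that reads sys.stdin and writes sys.stdout, this slots straight into the pipeline above; batching many words per model call would help with the performance concern.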
Rather, they use tokens that are usually combinations of 2-8 characters. You can play around with how text gets tokenized here: https://platform.openai.com/tokenizer
_____
For example, the above text I wrote has 504 characters, but 103 tokens.
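To see why the character count runs roughly 5x the token count, here is a toy sketch of the pair-merging idea behind such tokenizers; it is a deliberately minimal illustration, not the actual BPE training procedure or vocabulary:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """One BPE-style step: find the most common adjacent token pair."""
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge(tokens, pair):
    """Replace every occurrence of `pair` with a single fused token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("the theme of the thesis")   # start from single characters
for _ in range(3):                         # a few merges fuse frequent pairs
    tokens = merge(tokens, most_frequent_pair(tokens))
# frequent fragments like "the" now travel as one token, so the token
# count ends up well below the character count
```

Repeating this many thousands of times over a large corpus is what yields tokens of 2-8 characters, and also why models see token IDs rather than the individual letters people quiz them about.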
(Aside, it's interesting how perceptions of these things have changed in one year: a whole article on OpenAI's future that makes no mention of AGI/ASI)
Many people say we’re at AGI already and I’m wondering why everyone hasn’t died yet.
This matters a lot to me, as I use AI as something of an ongoing project organizer, and not purely for specific prompts.
So at least for me, it would be a huge hassle to move to another platform, on par with moving from one note-taking software to another (e.g., Evernote to IA Writer.)
Everyone, it turns out. Same with Google. Same with YouTube. Same with Instagram, and the rest of the web.
Once people become dependent on ChatGPT (as they already are) watching a 30 second ad in the middle of a session will become second nature.
Google and YouTube are preinstalled everywhere. Instagram is like 10 minutes old and has a major competitor in TikTok, which they had to have eliminated/captured by the US government.
People wouldn't stay with Netflix if there were a cheap, legal alternative with the same content library.
I'm just so surprised they'd use ChatGPT to do this, when it's just as easy (and perhaps faster) to use Google Translate.
1) the opportunities for vertical integration are huge. Anthropic originally said they didn’t want to build IDEs, then realized the pivot to Claude Code was available to them. Likewise when one of these companies can gobble up Legal, Medical, etc why would they let companies like Harvey capture the margins?
2) OSS models are 6-12 months behind the frontier because of distillation. If labs close their models, the gap will widen. Once vertical integration kicks off, the distillation cost becomes higher and the benefit of opening up generic APIs becomes lower.
I can imagine worlds where things don’t turn out this way, but I think folks are generally underrating the possibilities here.
For code generation specifically, the performance level of this is going to be more than enough for this customer base. What does Anthropic do then to justify the $200/mo sticker price? A better model? How much better? Better tools? A single company can't compete with the tooling the entire OSS ecosystem can produce.
I would be unable to sleep if I was running OAI / Anthropic.
If METR task times double twice into the multi-day range in 12 months, then it’s plausible to me that Anthropic can charge $1k/mo or more by automating large chunks of the SWE role. (They have 10x’d their revenue every year, perhaps “value of enterprise contracts” is a better way of intuiting their growth rather than “$/seat” since each seat gets way more productive in this world-branch.)
It's ironic: if the promise of AGI were realized, all knowledge companies, including AI companies, would become worthless.
The only things that have seen a massive boost are harnesses around AI. And AI companies are behind here compared to OSS.
So they (or their wholly owned subsidiary) can sell accounting services cheaper than anyone on the outside.
Regarding the diffusion/distillation time, I assume it gets harder to distill in the world where frontier labs don’t give API access to their newest models.
And so this goes back to my theory that OpenAI's strategy is basically to get itself into a position where the market cannot afford to have it implode. Basically, it wants, or needs, to be too big to fail. We're already seeing the politicization, if you will, of the rocket race between two superpowers or large powers on the AI front, and I think that might be a viable strategy.
As margins collapse capex will collapse. Unfortunately valuations have become so tied to AI hype any reduction in capex will signal maybe the hype has gotten ahead of itself, meaning valuations have gotten ahead of themselves. So capex keeps escalating.
None of this takes into account the hoarding effects at play with regards to GPU acquisition. It's really a dangerous situation the industry is caught in.
Companies used to hoard talent. Now they are hoarding compute, RAM, and GPUs.
Deepseek showed that there are possibly less expensive ways to train, meaning the future eye watering expenses may not happen.
Bigger models may not scale. The future may be federations of smaller expert models. ChatGPT-X doesn't need to know everything about mental health; it just needs to recognize that the Sigmund von Shrink mental-health model should answer some of my questions.
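That routing idea can be sketched in a few lines; the expert model names and the keyword classifier below are made up for illustration (a real router would itself be a small model):

```python
# Federation-of-experts sketch: a front-end router only has to pick the
# right specialist model; it never needs the specialist's own knowledge.
EXPERTS = {
    "mental_health": "sigmund-von-shrink-7b",  # hypothetical specialist
    "tax": "form-filler-3b",                   # hypothetical specialist
    "general": "generalist-8b",                # hypothetical fallback
}

def classify_topic(query):
    """Stub classifier; in practice a small model would do this step."""
    if "anxious" in query or "therapy" in query:
        return "mental_health"
    if "deduction" in query or "1040" in query:
        return "tax"
    return "general"

def route(query):
    """Return the name of the expert model that should answer the query."""
    return EXPERTS[classify_topic(query)]
```

The economic point is that only the cheap router runs on every request; the big, expensive knowledge lives in specialists that are loaded only when needed.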
Very dangerous if you think about it that the product itself is the raw building block for itself.
OpenAI spends $1B on a model, releases it, and instantly it gets scraped by a million bots so some country or company can build their own model.
From humanity's perspective, this doom is actually optimistic: it says that the LLMs currently disrupting the platforms cannot themselves be the next platforms.
Maybe no one will have 'the ability to make people do something that they don't want to do' sort of power with this next stage in computing.
Sounds good to me.
From what I can see Anthropic's big bet is that they will solve computer use and be able to act as an autonomous agent. Not so sure how fast they will progress on that. OpenAI on the other hand - I have no idea what they are planning - all I'm reading is AI porn and ads.
Google seems to be lackluster at executing with Gemini but they are in the best position to win this whole thing - they have so much data (index of the web, youtube, maps) and so many ways to capitalize on the models - it's honestly shocking how bad they are at creating/monetizing AI products.
What is the network effect of Google Search?
Other factors that favor Google at scale:
- Sites often allow only the biggest search engine crawlers and block every other bot to prevent scraping. This has been going on for more than a decade and is especially true now with AI crawlers going around.
- Google Search earns more per search than competitors due to their more mature ad network, which they can put lots of engineers on to keep improving ad revenue. They can also simply serve more relevant ads since their ad network is bigger.
- Google can simply share costs (e.g. index maintenance) among many more users.
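The crawler allowlisting in the first point is typically expressed in robots.txt (backed by user-agent blocking at the CDN for bots that ignore it); a hypothetical example:

```text
# Hypothetical robots.txt: welcome the dominant crawlers only
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Everyone else, including most AI crawlers, is refused
User-agent: *
Disallow: /
```

A new search engine or AI lab starts out in the `User-agent: *` bucket by default, which is exactly the scale advantage being described.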
I would love to dunk on this or something, but the lesson is that it's all about distribution.
Sama is really good at that, and also... gotta give props for a lot of forward thinking, like the Orb, which now makes a lot of sense to me as non-Apple/Google proof of personhood.
I think this is clearly wrong. Users provide lots of data useful for making the models better and that is already being leveraged today. It seems like network effects are likely in the future too. And they have several ways to get stickiness including memory.
I would argue chatgpt is in the top 10 products of all time with regard to product market fit.
Like, why do I STILL have to do taxes and accounting with external tools? Why doesn't OpenAI have their own tax filing service for the people?
OpenAI should just drop their API service and build everything themselves. It's exactly what they did with ChatGPT. Build thousands of things, not just a few.
Legal liability.
I hear this, but every time I look the platforms have captured another use case that the startup ecosystem built (eg images, knowledge summarization, coding, music).
The sector is already littered with the corpses of the innovators that got swallowed by the platforms’ aggressiveness to do it all.
Demo: https://chatjimmy.ai/
Many pundits think it's just a matter of scraping the internet and having a few ML scientists run ablation experiments to tune hyperparameters. That hasn't been true for over a year. The current requirements are more org-scale, more payoff from scale, more moat. The main legitimate competitive threat is adversarial distillation.
Many pundits also think that consumers don't want to pay a premium for small differences on the margin. That is very wrong-headed. I pay $200/month to a frontier lab because, even though it's only a few % higher in benchmark scores, it is 5x more useful on the margin.
Going from 85% to 90% accuracy cuts the error rate from 15% to 10%, which is a third fewer errors, or even more depending on the distribution of work you're doing.
My view is that OpenAI, Anthropic and Google have a good moat. It's now an oligopolistic market with extreme barriers to entry due to needed scale. The moat will keep growing as the payoffs from scale keep growing. They have internal scale and scope economies as the breadth of synthetic data expands. The small differences between the labs now are the initial conditions that will magnify the differences later.
It wouldn't be surprising to also see consolidation of the industry in the next 2 years which makes it even more difficult to compete, as 2 or 3 winners gobble up everyone and solidify their leads.
When people worry about frontier labs' moat, they point to open-weights models, which is really a commentary that these models have zero cost to replicate (like all software). But I think the era of open-weights competition can't be sustained; it's a temporary phenomenon tied to the middle-ground scale we're in, where labs can still afford to release weights. The endgame will be nation-state-backed competition.
Personally I only see Google (Gemini), X (Grok) and the Chinese models having a chance to still be alive in 1-2 years.
Also, I liked Anthropic because they were focused a lot on safety, but after the Pentagon stuff, it seems like they dropped their focus on safety.
Big customers may buy but won't give them logos, and people who are offended by Musk's worldview won't pay them either. You don't do well with a toxic brand: just look at Ye having to buy full-page apology ads to try and sell a record.
For me, the choice is ChatGPT, not for its Codex or other fancy tooling, just for the chat. That's not to say Claude Code or Cowork matters less, or that I prefer Codex over Claude Code.
First off, none of this open publishing stuff. Everything would have been trade secrets.
Next, no interoperable JSON APIs; instead, binary APIs that are hard to integrate with and therefore sticky. Once you'd spent 3 or 4 months getting your MCP server set up, no way would you ever try to change to a different vendor!
The number of investors was much smaller so odds are you wouldn't have seen these crazy high salaries and you wouldn't have people running off to different companies left and right. (I know, .com boom, but the .com boom never saw 500k cash salaries...)
Imagine if Google hadn't published any papers about transformers, or if the attention paper had been an internal memo, or heck, if word2vec had been only an internal library.
It has all been a net good for technological progress but not that good for the companies involved.
Obviously the costs have come down, but if IBM had felt like burning $100 billion in 2012 I'm pretty sure they could have had a similarly impressive chat bot. Just not sure how they would have ever recouped the investment.
Though with some types of models (specifically voice) it has been discovered that a smaller high quality dataset is better than a giant dataset filled with errors.
The WH has said it hasn't approved any sales, but it's not clear China is buying, and it seems they are making good progress on their Huawei Ascend chips. If China is basically at parity on the full stack (silicon, framework, training, model), and it starts open-weighting frontier models at $0.xx/M tokens, then yeah, moat issues all around, one would imagine? Not surprised to see Anthropic complaining like this: https://www.anthropic.com/news/detecting-and-preventing-dist... - but I don't know how you go back from it at this point?
I've never believed in Nvidia's moat, and it seems OpenAI's moat (research) has gone and surprisingly is no longer a priority for them.
To me it seems like the most obvious thing to do. More efficient models both make up for whatever you lost by using cheaper hardware and let you do more with the hardware you have than the competition can. By comparison the ever-growing-model strategy is a dead end.
> it seems they are making good progress on their Huawei Ascend chips
This is interesting to me. I thought the reason for DeepSeek's delay was the insistence (by the politicians) on using Huawei chips [0]. But that was last August. Has anything changed in between?
[0]: https://www.reuters.com/world/china/deepseeks-launch-new-ai-...
(^edit: I don't know for certain this is entirely accurate. Edit again: found a Chinese source saying their image model is end-to-end Ascend, or at least domestic: https://zhuanlan.zhihu.com/p/1994775762516080044 & https://www.guancha.cn/economy/2026_02_12_806895.shtml)
They've already found a better route. Buy it elsewhere e.g. in Singapore. Train their models there using Nvidia hardware.
Ship the result and fine tune back in China.
So "China" is and has always been buying it. No difference. The politics can keep raging.
That being said...
> The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, they're only ‘weekly active’ users: the vast majority even of people who already know what this is and know how to use it have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day.
This really props up the whole argument, because the author goes on to say that OpenAI's users are not really engaged. But is "only" 5% of users paying out of an 8-900M user base really so inconsequential? What percentage of Meta's users are paying? Google's? I would be curious to see the author dig deeper here, because I am skeptical that this is really as bad as the author suggests.
Moving on to another section:
> If the next step is those new experiences, who does that, and why would it be OpenAI? The entire tech industry is trying to invent the second step of generative AI experiences - how can you plan for it to be you? How do you compete with this chart - with every entrepreneur in Silicon Valley?
Er, are any of these startups training foundation models? No? Then maybe that is how you compete? I suppose the author would say that the foundation model isn't doing much for OpenAI's engagement metrics (and therefore revenue), but I am not sure I agree there.
Still, really good article. I think it really crystallizes the anti-OpenAI argument and it gives me a lot of interesting things to think about.
The advertiser-based business model of those companies makes your question/thought process here problematic for me. Historically speaking, Google and "Meta" (Facebook) were primarily advertising companies. They provided billboards (space and time on the web page in front of an end-user) to people willing to buy that space and time on the billboard. The "free access" end-users would always end up seeing said billboards, which is how they ended up "paying" for the service.
So most of Meta/Google end-users were "paying" users. They were being subsidised by the advertising customers paying for the end-users (who were forced to view adverts). The end-users paid with interruptions to the service by adverts. [0]
In that context it feels a little like you're comparing apples to Dave's left foot, as OpenAI hasn't had that with advertising... historically [1].
--
[0]: yes ad-blockers, yes more diverse revenue income streams over the years like with phones, yes this is simplified yadayada
[1]: excluding government etc. ~bailouts~ investments as not the same as advertising subsidies, but you could argue it's doing the same thing
But honestly, if OpenAI can't figure out ads given all their data and ability, they deserve to fail. :P
What I'm uncertain about is how much the ability of Google to set defaults matters.
Setting Gemini as the "AI" on phones, automatically integrated with all "daily" services could matter a lot. They have a platform ready to go and are pushing hard to make themselves really attractive. All while being very profitable.
Apple on the other hand will be in a strong position to negotiate a good deal with competitors to OAI and my suspicion is that "good enough AI" is all most people need.
And of course there is the financial reality that OpenAI does not only need profits, but profits on an enormous scale. Just being successful would mean they missed the mark.
My personal guess is that Microsoft will fully buy them at some point in the future, but I'm not confident enough to bet any money on it.
The difference is in the unit economics. OpenAI has to spend massively per free user it serves. The others you mentioned have SaaS economics where the marginal cost of onboarding and serving each non-paying user is essentially zero while also gaining money from these free users via advertising. Hence, the free users are actually a net positive rather than an endless money sink.
Keep also in mind that AI has always been, and will always be, a commodity. The moment you start forcing people to convert into paying customers is the moment they jump ship at scale.
Just something to keep in mind.
Anthropic is in favor with developers and generally tech people, while OpenAI / Gemini are more commonly used by regular folks. And Grok, well, you know…
We have yet to see who’s winning in the “creative space”, probably OpenAI.
As these positionings crystallize, each company is likely to double down on its users' communities, like Apple did when specifically targeting creative/artsy people, instead of cranking out general models that aren't significantly better at anything.