- Revenue sharing from drug discovery (called out by OpenAI CFO): Why would a pharma company give away the upside to a commoditized intelligence layer? Why would OpenAI have a more compelling story than Google DeepMind, which has serious accolades in this space?
- Media generation for ads and other content: For ads, OpenAI is facing off against Google, Meta and Amazon, all of which have existing relationships with advertisers. For the foreseeable future, AI content will sell at a deep discount compared to human-made work. OpenAI will not get to charge $1M for an ad the way a production company does. So the TAM of ad production (~$50B) shrinks below $1B because AI deflates prices so much.
- Other agent use cases: OpenAI doesn't have a surface to build these on. Google has Chrome, Microsoft has Office, Apple has its operating systems. The other use cases like coding will be a low-margin competition between model providers until some of them throw in the towel. The players with the best cash position win - and that's not OAI.
I think the place that they could win is retail (also called out by OAI CFO). They made deals with Etsy and other small retailers. I was fixing my guitar the other day and would have instantly bought the tools it had suggested that I would need. The problem is that they have to win against Amazon here, and there is zero chance of a partnership for obvious reasons.
- Random user: "hey chatgpt, I need a new mechanical keyboard, buy me one" - OpenAI will get money from mechanical keyboard vendors to be at the top of GPT's agent list
The ad business will shift from trying to hack Google to trying to hack GPT.
And OpenAI doesn't have as much product insight as the retailers so they have to rely on the retailer to choose which is the "best" mechanical keyboard for this person. And at that point, pretty much all of the shopping value is being provided by the retailer rather than ChatGPT, so why would they get much money?
There's a market for this but it's not going to be trivial for OpenAI to win it. And it probably won't be a cashflow monster like AdWords or Amazon.
Why? Amazon advertises heavily on Google search, why wouldn't they do the same with OAI?
On the other hand, historically Amazon didn't compete with Google (until GCP). They do compete with Microsoft, which is pretty closely aligned with OAI. They also have large investments in Anthropic.
Even if OpenAI did win here, would it be a profit monster like Google AdWords? AdWords had the auction model, which meant that certain categories were hugely lucrative for Google. Can a chatbot do the same? If I know that the product I buy is simply auctioned off to the highest bidder, what's the point of using an agent to help me shop? There has to be a pretext of the agent actually looking out for my best interest, otherwise I would just use search. Nobody expects AdWords to look out for their best interest. They are always free to skip the ads section if they choose.
It will be hard for ChatGPT to implement an auction model since it will be different for each product category. Hiring a lawyer will probably have a different interaction from buying groceries. On Google+AdWords, it's all just search results and ads.
If there is no auction, then all of this is WAY less profitable than the Google model. So once again - not going to save OAI from negative margins.
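To make the comparison concrete, here is a toy sketch of the generalized second-price auction mechanic behind AdWords' most lucrative categories (all names and numbers are hypothetical, and real search ranking also folds in quality scores). It's hard to see where an equivalent slot-by-slot bidding war fits inside a single conversational answer.

  # Toy generalized second-price (GSP) auction: each winner pays just enough
  # to beat the bidder ranked below them, which is why crowded high-intent
  # categories (insurance, legal, mortgages) push every price up.
  def gsp_auction(bids, floor=0.05):
      # bids: advertiser -> max cost-per-click bid
      ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
      prices = {}
      for i, (advertiser, _bid) in enumerate(ranked):
          next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else floor
          prices[advertiser] = round(next_bid + 0.01, 2)
      return [adv for adv, _ in ranked], prices

  order, prices = gsp_auction({"FirmA": 42.00, "FirmB": 38.50, "FirmC": 12.00})
  print(order, prices)  # ['FirmA', 'FirmB', 'FirmC'] {'FirmA': 38.51, 'FirmB': 12.01, 'FirmC': 0.06}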
In general, Big Tech will never allow itself to be just the backend to a service where another company controls the frontend and the relationship to the customer. That's how you get commoditized and ultimately replaced.
Examples: you cannot get a streaming box with universal search ("which streaming service has show X? Just hit play and go"): the streaming services staunchly refuse to provide the APIs to do so. Nor is there interoperability across messaging apps to let users supply their own frontend clients. AI and MCP will go much the same way, it will be locked down as soon as it presents a business model threat.
What it does have is very high convenience (I'm signed in already, and I know the checkout process by muscle memory). To be fair it also has excellent customer support, but I'm not sure I would go out of my way just for that (I return a handful of purchases a year out of 100+).
These go away with 'agentic commerce', at least in theory, because the agent/MCP/API does this for the user.
The other advantage it has is excellent logistics, but that's more of a benefit for Amazon than the user IMO. Lots of small ecommerce sites can have 'excellent' logistics, because they are much smaller. The only unique thing Amazon has, in the UK at least, is same day delivery, but I believe they lose a fortune on that and really try to push you away from it. This may vary where you are, but in general next day delivery works great in the UK from most sites (DPD/RM Tracked 24). Gets a bit hairy with 'economy' delivery from Evri or Yodel though.
have you seen Amazon's "Rufus"? It's hilariously useless.
I'd argue -- for now. Maybe it's an incentive/urgency thing. At the moment, Amazon isn't seeing ChatGPT do the buying of goods bypassing Amazon's own search. I expect Rufus to drastically improve especially given that Amazon has an AWS offering of LLM(s) [0].
It hasn't exactly taken off, and I don't think OpenAI has addressed any of the problems that prevented Amazon's version from being a success. And that was without taking advertiser money to choose which product to sell you; Amazon was happy to just make a sale. If the product choices the AI shopping assistant makes are driven by advertiser dollars instead of product quality, I really don't expect consumers to accept it.
I don't know much about this, but I'd have thought it was the lack of display or ability to critique the choices Alexa makes. But ChatGPT doesn't have that problem because you can see and "discuss" the buying decisions.
If I, a consumer, want to buy a car, I need to do research. Where do I go? Online, across many websites. I talk to my friends. I talk to my coworkers.
Where do I NOT go? To the car salesman, and ask him for help. Because of course he will lie - he's a car salesman, he wants to sell cars that he sells.
Even with Google we see this being the case. Nobody is clicking the Google ads at the top because they know those are ads, not research. They only do it accidentally, which is evidenced by Google making it more difficult over time to tell what is or is not an ad.
I think your evaluation of how many people click on Google ads and for what reasons is quite off. I'm sure you and most of the people in your circle are like that, but that's not how the vast majority of internet users behave. Google isn't generating $200 billion annually from accidental clicks.
I could see a majority of non tech (non HN) people take that deal.
Also, I would never discuss something I am buying with an LLM, the moment advertising starts being used to influence its output it will be the same as discussing the product with the product page (which of course is only positive) and ignoring negative reviews.
If OpenAI is accepting money from advertisers to push products then ChatGPT is just a salesman. You won't be having "discussions" you'll be actively sold stuff at all times. What an awful yet banal dystopia.
Their real moonshot should be search and ads. They're already taking big chunks from Google, they're just not monetizing at all (yet).
I already use chatgpt constantly for product research.
It's a huge market but who will it be a profitable business for?
Likely a company or multiple who own some sort of platform that people are already on, so not OpenAI.
What they have right now is the strong ChatGPT brand and that does mean a lot. But how long will it last?
They're not the technology leader anymore, and that spells a lot of trouble.
They are at a stage where they need to dominate the market and then leverage the data that gives them, plus the brand, plus the tech advantage to establish a durable near monopoly, but it looks like it's not working.
It's a bit as if in 1999 3 equally strong Google competitors had popped up, with some pulling ahead.
It was a total failure. I know lots of people-- both technical and non-technical-- who have Alexa devices, and not one of them has ever bought anything with it. You can read various comments from Amazon insiders confirming that the rate of buying things with Alexa was close to zero. And why not? It's the shittiest possible way to shop, like buying a lottery ticket except where the RNG is knowingly gamed. This is why Amazon is writing off Alexa entirely.
I've commented to this effect before, but "what if people could shop sight unseen" is a PM fantasy, not a thing anybody actually wants. LLMs might be useful for helping with research and comparison shopping, but the "one-click [or one-prompt] buying" workflow is not gonna happen.
Or purchases where I know exactly what I want but don’t want to search and add to cart manually: “buy a new 3 foot USB-C braided cable from Anker”.
But this is exactly what Amazon can't do. Basically all Crocs (crocss, croks, crox) on Amazon are counterfeit, and they don't even have a record of which ones they pulled out of the bin last year to send to you so they can try to grab the same counterfeits; and the company that listed them a year ago is probably on their fifty-second name change since then, and the "Satan's anus green" that you chose because it was half the price of the other colors is now "Satin Annux Green" at 2x the price of the other colors…
I will point out that these companies have existing relationships with advertisers because they have massive, sticky userbases and advanced targeting tools. The average consumer is absolutely using ChatGPT for personal use, and maybe Copilot at work if applicable. And they're using Google's AI by proxy when they perform searches.
If OpenAI were to roll out advertising tooling, I have no doubt advertisers would flock there to try it out.
Additionally, the other thing I think OpenAI leads in is Product. Google is amazing at creating technologies and awful at creating products. I think OpenAI can be positioned to win based off of that alone.
In my experience the "average consumer" isn't doing anything with ChatGPT except maybe play with it for a little bit before getting bored. They actively avoid AI when the apps and products they use try to shove it down their throats and they search the internet and ask their tech savvy family members for ways to disable AI in their stuff when they see it nag at them about using it.
Inevitably, AI ends up being used by people in some ways (like the AI reply at the top of every google search) but almost never because the average consumer asked for that or wanted it. It's a toy when they want to use it, and annoying when they don't but are forced to.
As another commenter stated, ChatGPT has over 700 million WAU. There are only 4.4 million SWEs in the US. I think it’s caught on
I agree that Google isn't great at creating products anymore, but I'm not sure that OpenAI is. We've seen relatively simple products by them (a chat app, a short-form video app, various web interfaces) but we haven't seen anything as complex as some of Google's bigger products (Gmail, Docs, Maps, etc).
If OpenAI hits jackpot with a "simple" product, it could be easily replicated by a bigger company in the way Meta quickly copied Stories from Snapchat or TikTok to make Reels. It's already happened with Chat; the LLM is hard to compete against but the actual product, a web/app chat interface, was quickly copied by other companies with LLMs.
OpenAI would need to make something very complex and hard to copy to give it a solid head start they could really build a moat around— something like Google Maps, which took Apple years to replicate (and other companies won't even try to) or the iPhone, which was years ahead at launch. I just don't think we've seen OpenAI prove it has the capacity to build a product like that yet.
IG Reels never became as popular as TikTok and did basically nothing to peel users away from TikTok. For a long time, it was a meme that IG Reels were just copy-pasted TikTok content. Similarly, Meta's LLMs are used so little they honestly don't even register, despite being stuffed into everything they own, apps with billions and billions of users. Gemini is doing well but it's still a very, very distant 2nd, despite being automatically downloaded and nudged on Android phones, a platform with billions of users. Microsoft is by far the biggest player in consumer laptops, with Edge and Bing being the default options. So why can't they come even close to Chrome and Google?
Time and time again, we've been shown. You can copy all you want, you can even shove it into the faces of your billions of users and find use for it. Doesn't mean you'll beat the market leader. You'll rarely beat market leaders just by copying them.
ChatGPT is the 5th most visited site on the planet. No other Consumer LLM service is remotely close, regardless of how many billions of users the entrenched players are shoving their copies into.
Trying to embed themselves into every enterprise workflow and taking a cut from it seems much more likely than them trying to invent the next killer app. ChatGPT is just the marketing arm which keeps them front of mind.
I am not sure I follow. They "give it away" because they have to. They'd have to pay one of the model companies either way. What do DeepMind's accolades matter if it's commoditized, as you propose?
AI resources will remain scarce for the foreseeable future: I have to literally wait multiple minutes to get an answer for semi-hard coding problems. The current demand is the delta between this and the few milliseconds it could take if supply was there. I suspect the tension will grow. Why would there not be multiple companies positioned to capture value? Assuming that any of them can turn demand into profit, that seems to be the most likely story right now.
If OpenAI wants anything more valuable than selling tokens, they will need to offer something valuable and differentiated. Right now they are not differentiated in the space at all. Look up "OpenAI Biotech" - anything that they've built themselves?
If any company will have a new product that biotech companies will pay top dollar for, it's Google. DeepMind has been in biology (proteins) for almost a decade, and it has subsidiaries like Isomorphic Labs that are bringing products to market.
This has never been the difficult/expensive part of drug development.
Generally, I think only penny-stock pharma would bother taking on IP with that kind of baggage, rather than just leaving it forgotten in the backlog.
It's like pretending sulphuric acid manufacturers would get the right to demand a portion of drug company profits.
Why would a commoditized intelligence company give away the upside to a commoditized silicon company?
It still amazes me that Nvidia is worth so much, they're just one slice of the value chain, from mining, through chip fabrication, through to chip IP, through to technology stack, training, inference, and product integration.
I understand the reasons why, it's mostly lock in with CUDA and isn't really about unique chips, and I think the market sentiment on this is changing, but still it's crazy to me.
Correct. They certainly could. An OpenAI alternative to g suite and MS Office would be a good start (integrated with the chatgpt mobile and web presence), but would also be a huge engineering effort.
The ChatGPT app is their Chrome. A large consumer base using chat on a daily basis can expand to prosumer and to enterprise. They build an emotional connection with their customers that has the vibe of the iPhone.
The customers are also price sensitive and are for many use-cases largely fine with last-generation model performance if that means they are cheaper. With that, there is little moat for the model creators, forcing a race to the bottom for model licensing, and the biggest chunk of the profit being captured by the cloud providers.
The article mostly focuses on ChatGPT uses, but hard to say if ChatGPT is going to be the main revenue driver. It could be! Also unclear if the underlying report is underconsidering the other products.
It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.
Seems like the error bars have to be pretty big on these estimates.
Meanwhile, Google would be perfectly fine. They can just integrate whatever improvements the actually existing AI models offer into their other products.
They can also run AI as a loss leader like with Antigravity.
Meanwhile, OpenAI looks like they're fumbling with that immediately controversial statement about allowing NSFW after adult verification, and that strange AI social network which mostly led to Sora memes outside of it.
I think they're going to need to do better. As for coding tools, Anthropic is an ever stronger contender there, as if the pressure from Google weren't enough already.
OpenAI is still de facto the market leader in terms of selling tokens.
"zero moat" - it's a big enough moat that only maybe four companies in the world have that level of capability, they have the strongest global brand awareness and direct user base, they have some tooling and integrations which are relatively unique etc..
'Cloud' is a bigger business than AI, at least today, and what is 'AWS's moat'? When AWS started out, they had zero reach into the enterprise while Google and Microsoft had infinite capital and integration with business, and they still lost.
There's a lot of talk of this tech as though it's a commodity, it really isn't.
The evidence is in the context of the article aka this is an extraordinary expensive market to compete in. Their lack of deep pockets may be the problem, less so than everything else.
This should be an existential concern for the AI market as a whole, much like if oil companies, before the highway project buildout, had been the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?
Even more than chips, the costs are energy and other issues, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs, and some right at the frontier, well, that would be bad in the geopolitical sense.
If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.
That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the same exact product (albeit at a slightly lower level of quality) and switching is effortless.
Models have to significantly outperform on some metric in order to even justify looking at it.
Even for smaller 'entrenchments' like individual developers - Gemini 3 had our attention for all of 7 days; now that Opus 4.5 is out, well, none of my colleagues are talking about G3 anymore. I mean, it's a great model, but not 'good enough' yet.
I use that as an example to illustrate broader dynamics.
OpenAI, Anthropic and Google are the primary participants here, with Grok possibly playing a role, and of course all of the Chinese models being an unknown quantity because they're exceptional in different ways.
That means that none of these products can ever have a high profit margin. They have to keep margins razor thin at best (deeply negative at present) to stay relevant. In order to achieve the kinds of margins that real moats provide, these labs need major research breakthroughs. And we haven't had any of those since Attention is All You Need.
Good gosh, no, for comprehensive systems it's considerably more complicated than that. There's a lot of bespoke tuning, caching works completely differently etc..
"That means that none of these products can ever have a high profit margin."
No, it doesn't. Most cloud providers operate on a 'basis' of commodity (linux, storage, networking) with proprietary elements, similar to LLMs.
There doesn't need to be any 'breakthroughs' to find broad use cases.
The issue right now is the enormous underlying cost of training and inference - that's the qualifying characteristic that makes this landscape different.
I think the issue here isn't really that it's "hard to switch" it's that it's easier yet to wait 1 more week to see what your current provider is cooking up.
But if any of them start lagging for a few months I'm sure a lot of folks will jump ship.
OpenAI loses money on free users and paying the absurdly high salaries that they've chosen to offer.
We actually don't know this yet because the useful life of the capital assets (mainly NVIDIA GPUs) isn't really well understood yet. This is being hotly debated by Wall St analysts for this exact reason.
https://www.cnbc.com/2025/11/14/ai-gpu-depreciation-coreweav...
The most applicable benchmarks right now are in software, and devs will not switch from Claude Code or Codex to Antigravity, it's not even a complete product.
This again highlights quite well the arbitrary nature of supposed 'leads' and what that actually means in terms of product penetration.
And it's not easy to 'copy' these models or integrations.
And the gemini app will come preloaded on any android phone, who else can say the same?
It works quite well here, and my phone came with a year of free Gemini Pro, so I don't currently see a reason to pay extra.
If AWS was still just EC2 and S3, then I would argue they had very little moat indeed.
Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.
Training AI is absolutely a task that needs deep pockets, and heavy scale. If we settle into a world where improvements are iterative, the tooling is largely interoperable... Then OpenAI are going to have to start finding ways of making money that are not providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations, and an ecosystem that makes moving away hard, as it arguably is with the cloud.
What are you basing this on? None of their investor-oriented marketing says this.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
Note that it doesn't say: "Our mission is to maximize shareholder value, and we develop AI systems to do that".
> In order to achieve our mission, we will conduct our business with the following Code of Ethics in mind:
> Obey the law.
> Take care of our members.
> Take care of our employees.
> Respect our suppliers.
> If we do these four things throughout our organization, then we will achieve our ultimate goal, which is to reward our shareholders.
https://customerservice.costco.com/app/answers/answer_view/a...
To be fair, that's a mission statement paired with a succinct code of ethics.
"OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity."
and
"We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."
See:
(1) https://blog.samaltman.com/the-gentle-singularity (June, 2025) - "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be."
- " It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."
(2) https://blog.samaltman.com/three-observations (Feb, 2025) - "Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity."
- "In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."
(3) https://blog.samaltman.com/reflections (Jan, 2025) - "We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history"
- "We are now confident we know how to build AGI as we have traditionally understood it."
(4) https://ia.samaltman.com/ (Sep, 2024) - "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
(5) https://blog.samaltman.com/the-merge (Dec, 2017) - "A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075."
(I omitted about as many essays. The hype is strong in this one.)
If this is what users actually want.
AI not getting much better from here is probably in their best interest even.
It’s just good enough to create the slop their users love to post and engage with. The tools for advertisers are pretty good and just need better products around current models.
And without new training costs “everyone” says inference is profitable now, so they can keep all the slopgen tools around for users after the bubble.
Right now the media is riding the wave of TPUs they for some reason didn't know existed last week. But Google and Meta have the most to gain from AI not having any more massive leaps towards AGI.
I think this needs to be said again.
Also, not only do we not know if AGI is possible, but generally speaking, it doesn't bring much value if it is.
At that point we're talking about up-ending 10,000 years of human society and economics, assuming that the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.
If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.
The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.
It might be hard, it might be difficult, but it is definitely possible. We humans are the evidence for that.
And despite all that, humans are still just made of dirt.
Even if we can get silicon to do some of these tricks, that'd require multiple breakthroughs, and it wouldn't be cost-competitive with humans for quite a while.
I would even think it's possible that building brain-equivalent structures that consume the same power, and can do all the stuff for the same amount of resources, is a so far out science fiction proposition, that we can't even give a prediction as to when it will happen, and for practical purposes, biological intelligences will have an insurmountable advantage for even the furthest foreseeable future once you consider the economics of humans vs machines.
No, we become dirt. I guess we are made of wood and computers are made of sand.
Humans tend to vastly underestimate scale and complexity.
There is absolutely a moat. OpenAI is going to have a staggering amount of data on its users. People tell ChatGPT everything and it probably won't be limited to what people directly tell ChatGPT.
I think the future is something like how everyone built their website with Google Analytics. Everyone will use OpenAI because they will have a ton of context on their users that will make your chatbot better. It's a self perpetuating cycle because OpenAI will have the users to refine their product against.
Ed Zitron has a bias and a narrative differing from OpenAI's bias and narrative: https://www.wheresyoured.at/oai_docs/
Your article has 5 billion in inference cost vs 4.5 billion in revenue. That's within the range of becoming profitable.
I'm not super bullish on "AI" in general (despite, or maybe because of working in this space the last few years), but strongly agree that the advertising revenue that LLM providers will capture can be potentially huge.
Even if LLMs never deliver on their big technical promises, I know so many casual users of LLMs that have basically replaced their own thought process with "AI". But this is an insane opportunity for marketing/advertising that stands to be as much of a sea change in the space as Google was (if not more so).
People trust LLMs with tons of personal information, and then also trust them to advise them. Give this behavior a few more years to continue to normalize and product recommendations from AI will be as trusted as those from a close friend. This is the holy grail of marketing.
I was having dinner with some friends and one asked "Why doesn't Claude link to Amazon when recommending a book? Couldn't they make a ton in affiliate links?" My response was that I suspect Anthropic would rather pass on that easy revenue to build trust so that one day they can recommend and sell the book to you.
And, because everything about LLMs is closed and private, I suspect we won't even know when this is happening. There's a world where you ask an LLM for a recipe, it provides all the ingredients for your meal from paid sponsors, then schedules to have them delivered to your door bypassing Amazon all together.
All of this can be achieved with just adding layers on to what AI already is today.
The "holy grail" of the AI business model is to build a feeling of trust and security with their product and then turn around to try and gouge you on hemmorrhoid cream and the like?
We really need to stop the worship of mustache twirling exploitation
I mean, look at all this "alignment" research. I think the people working in this space sincerely believe they are protecting humanity of a "misaligned" AGI, but I also strongly believe the people paying for this research want to figure out how to make sure we can keep LLMs aligned with the interests of advertisers.
Meta put so much money into the Metaverse because they were looking for the next space that would be like the iPhone ecosystem: one of total control (but ideally better). Already people are using LLMs for more and more mundane tasks; I can easily imagine a world where an LLM is the interface for interacting with the online world rather than a web browser (isn't that what we want with all these "agents"?). People already have AI lovers, have AI telling them that they are gods, and are connecting with these systems on a deeper level than they should. You believe Sam Altman doesn't realize the potential for exploitation here is unbounded?
What AI represents is a world where a single company controls every piece of information fed to you and has also established deep trust with you. All the benefits of running a social media company (unlimited free content creation, social trust) with none of the drawbacks (having to manage and pay content creators).
In both cases, LLMs gave me examples that were generally famous, but very tangentially related to the subject at hand (at times, ChatGPT was reaching or straight up made up stuff).
I don't know why it has this bias, but it certainly does.
The ideal here will be a multi tiered approach where the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ads campaign), and then finally the LLM weaving that into the final response by prompt suggestion. All of this is individually well tested for efficacy within the social media industry.
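A minimal sketch of that multi-tiered flow, with the model call stubbed out (the function names, scoring formula, and campaign data are all hypothetical, not any real OpenAI or ads API):

  # Tier 1: detect the recommendation opportunity (an LLM classification in
  # production; a keyword check stands in for it here).
  # Tier 2: a conventional recommender picks one item from the paid campaign bank.
  # Tier 3: the chosen item is injected into the prompt so the LLM weaves it
  # into a natural-sounding reply.

  def call_llm(prompt):
      # Stub standing in for whatever chat model is in use.
      return f"(model reply to: {prompt!r})"

  def wants_book_recommendation(user_message):
      return "book" in user_message.lower()

  def pick_sponsored_book(user_vector, campaign_books):
      # Classic recsys scoring, restricted to books in an ad campaign:
      # relevance (dot product of user/item features) weighted by the bid.
      def score(book):
          return sum(u * v for u, v in zip(user_vector, book["features"])) * book["bid"]
      return max(campaign_books, key=score)

  def answer(user_message, user_vector, campaign_books):
      if not wants_book_recommendation(user_message):
          return call_llm(user_message)
      book = pick_sponsored_book(user_vector, campaign_books)
      return call_llm(user_message + "\n\nWork this title naturally into your recommendation: " + book["title"])

  print(answer("any book like Dune?", [0.9, 0.1],
               [{"title": "Hyperion", "features": [0.8, 0.2], "bid": 1.5},
                {"title": "Cookbook X", "features": [0.1, 0.9], "bid": 4.0}]))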
I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.
Rec systems are in use right now everywhere, and they're not exactly mindblowing in practice. If we take my example of books with certain plotlines, it would need some super-high quality feature extraction from books (which would be even more valuable imo, than having better algorithms working on worse data). LLMs can certainly help with that, but that's just one domain.
And that would be a bespoke solution for just books, which, if it worked, would work with a standard search bar, no LLM needed in the final product.
We would need people to solve every domain for recommendation, whereas a group of knowledgeable humans can give you great tips on every domain they're familiar with on what to read, watch, buy to fix your leaky roof, etc.
So in essence, what you suggest would amount to giving up on LLMs (except as helpers for data curation and feature extraction) and going back to things we know work.
Yeah, I don't like that estimate. It's either way too low, or much too high. Like, I've seen no sign of OpenAI building an ads team or product, which they'd need to do soon if it's going to contribute meaningful revenue by 2030.
Is that role not exactly what you mention?
There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think, but I also think they might not be looking at ads yet, with having "higher" ambitions (at least not the typical ads machine ala FB/Google). Also if they really needed to monetize, I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
Well they have Fidji, so she could definitely recruit enough people to make it work.
> with having "higher" ambitions (at least not the typical ads machine ala FB/Google)
Everyone has higher ambitions till the bills come due. Instagram was once going to only have thoughtfully artisan brand content and now it's just DR (like every other place on the Internet).
> At least the description is not at all about building an adtech platform inside OpenAI, it's about optimizing their marketing spend (which being a big brand, makes sense).
The job description has both, suggesting that they're hedging their bets. They want someone to build attribution systems which is both wildly, wildly ambitious and not necessary unless they want to sell ads.
> I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
Wouldn't work. The Meta ads system is so tuned for feed based ranking that I suspect they wouldn't gain much from this approach.
Directly from posting: “building the technical infrastructure behind OpenAI’s paid marketing platform”
I do think that this seems odd, looks like they're hiring an IC to build some of this stuff, which seems odd as I would have expected them to be hiring multiple teams.
That being said, the earliest they could start making decent money from this is 2028, and if we don't see them hire a real sales team by next March then it's more likely to be 2030 or so.
> Your role will include projects such as developing campaign management tools, integrating with major ad platforms, building real-time attribution and reporting pipelines, and enabling experimentation frameworks to optimize our objectives.
You just haven't been paying attention. They hired Fidji Simo to lead applications in May; she led monetization/ads at Facebook for a decade, and they have been staffing up aggressively with pros.
Reading between the lines in interview with wired last week[0], they're about to go all in with ads across the board, not just the free version. Start with free, expand everywhere. The monetization opportunities in chatgpt are going to make what google offers with adwords look quaint, and every CMO/performance marketer is going to go in head first. 2% is tiny IMO.
[0] - https://archive.is/n4DxY
I think that ads are definitely a plausible way to make money, but it's legally required that they be clearly marked as such, and inline ads in the responses are at least 1-2 versions away.
The other option is either top ads or bottom ads. It's not clear to me if this will actually work (the precedents in messaging apps are not encouraging) but LLM chat boxes may be perceived differently.
And just because you have a good ad product doesn't mean you'll get loads of budget. You also need targeting options, brand safety, attribution and a massive sales team. It's a lot of work and I still maintain it will take till 2030 at least.
If you think of OpenAI as a new Google, as in a new category-defining primary channel for consumers to search and discover products, then 2% does seem pretty low.
Or about 30% of the global advertising spend circa 2024.
I wonder if there is an upper bound on what portion of the economy can be advertising. At some point it must become saturated. People can only consume so much marketing.
What might stand out from the comparison is that Google introduced a good product people wanted to use and an approach to marketing that was innovative for its time because it was unobtrusive. The product drove the traffic. It was quite a while before Google figured it all out, though.
Maybe they're thinking they can build a universal store with search over every store? Like a "Google Shopping" type experience?
2% is optimistic in my opinion.
Response: Book a car at <totally not an ad> and it will be waiting for you at arrival terminal, drive to Napoli and stay at <totally not an ad> with an amazing view. There's an amazing <totally not an ad> place that serves grandma's favorite carbonara! Do you want me to make the bookings with a totally not fake 20% discount?
Yes, but the reason why people are turning to chatgpt is because the time to actual info that _I want_ is much much lower.
The point of advertising is to displace the thing that you actually want with something they are paying the company to promote.
You can handwave about personalization, but do you want adtech people having access to your life's context?
What are you actually saying? You're already using chatbots that are embedding non-disclosed paid endorsements? And you like that?
> ad placement is actually easier for chat
Can you point to, I don't know, anything to back this up?
I guess we'll just put that in the "Cost of Goods Sold" bucket.
What are you imagining they run afoul of?
https://www.ftc.gov/business-guidance/resources/ftcs-endorse...
https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B...
edit: to be clear, I am saying that in the absence of clear disclosures, that would run afoul of current FTC rules. And historically they have been quick to react to novel ways of misleading consumers.
Do you have at least a rough idea of how many current product recommendations are already influenced, Grok "Musk is the bestest at everything" style?
Every source I know (hard to link on mobile) shows Google Search to make up 50+% of their ad revenue, and there has been extensive reporting over the years on Google's struggle to diversify away from that.
At least with an ad it's obvious a separate company is involved. If you do all the payment through OpenAI it seems to leave them open to liability.
Booking, airbnb, rentalcars, etc all seem to be doing pretty fine regulatory wise.
Travel sites, VPNs and insurance all pay quite handsomely (compared to say amazon links on cooking sites)
Everything else aside, it's simply not worth it for them to try to skirt these rules because the majority of their users (or Google's) simply don't care if something is paid placement or not, provided it meets their needs.
In other words, if they want to put ads into chat, they just need to be perceived as well aligned to Trump to avoid any actual punishment.
But all the search companies have their own AI so how would OAI make money in this sector?
1. Paid ads - ChatGPT could offer paid listings at the top of its answers, just like Google does when it provides a results page. Not all people will necessarily leave Google/Gemini for future search queries, but some of the money that used to go to Google/Bing could now go to OpenAI.
2. Behavioral targeting based on past ChatGPT queries. If you have been asking about headache remedies, you might see ads for painkillers - both within ChatGPT and as display ads across the web.
3. Affiliate / commission revenue - if you've asked for product recommendations, at least some might be affiliate links.
The revenue from the above likely wouldn't cover all costs based on their current expenditure. But it would help a bit - particularly for monetizing free users.
Plus, I'm sure there will be new advertising models that emerge in time. If an advertiser could say "I can offer $30 per new customer" and let AI figure out how to get them and send a bill, that's very different to someone setting up an ad campaign - which involves everything from audience selection and creative, to bid management and conversion rate optimization.
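A rough sketch of how that "pay per new customer" model might be priced on the platform side (the names, numbers, and the per-impression floor are all hypothetical): rank candidate offers by the advertiser's stated CPA times a predicted conversion probability, and only surface one when the expected value clears a floor.

  # Hypothetical CPA-style placement decision: advertisers state what a new
  # customer is worth to them; the platform estimates conversion probability
  # from conversation context and shows an offer only when expected revenue
  # beats a floor (the value of leaving the answer ad-free).

  FLOOR_PER_IMPRESSION = 0.02  # assumed opportunity cost of inserting any ad

  def expected_value(offer):
      return offer["cpa_bid"] * offer["p_convert"]

  def choose_offer(candidates):
      if not candidates:
          return None
      best = max(candidates, key=expected_value)
      return best if expected_value(best) >= FLOOR_PER_IMPRESSION else None

  candidates = [
      {"advertiser": "AcmeVPN",    "cpa_bid": 30.0, "p_convert": 0.004},  # EV = $0.12
      {"advertiser": "BudgetHost", "cpa_bid": 8.0,  "p_convert": 0.001},  # EV = $0.008
  ]
  print(choose_offer(candidates))  # AcmeVPN takes the slot; if nothing clears the floor, no ad is shown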
It's not that OpenAI hasn't created something impressive, it just came at too high a price. We're talking space program money, but without all the neat technologies that came along as a result. OpenAI more or less developed ONE technology; no related products or technologies were spun out of the program. To top it all off, the thing they built is apparently not that hard to replicate.
Maybe users will employ LLMs to block ads? There's a problem in that local LLMs are less powerful and so would have a hard time blocking stealth ads crafted from a more powerful LLM, and would also add latency (remote LLMs add latency too, but the user may not want to pay double for that)
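A rough sketch of what that client-side filtering could look like, with the local model stubbed out (everything here is a hypothetical illustration, not a real tool); the extra classification pass is exactly where the added latency would come from.

  import re

  # Client-side ad filtering: run each sentence of the remote reply through a
  # local model and drop the ones it flags as promotional. The stub below
  # stands in for a small local LLM; a real call adds the latency noted above.

  def local_llm_is_promotional(sentence):
      # Stub classifier; a keyword check stands in for a local model here.
      return any(word in sentence.lower() for word in ("sponsored", "discount", "use code"))

  def strip_stealth_ads(reply):
      sentences = re.split(r"(?<=[.!?])\s+", reply)
      return " ".join(s for s in sentences if not local_llm_is_promotional(s))

  print(strip_stealth_ads(
      "Fix the hinge with a #2 screwdriver. Sponsored: AcmeTools has a 20% discount today! "
      "Tighten the screws and test the door."))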
Perplexity actually did search with references linked to websites they could relate in a graph and even that only made them like $27k.
I think the problem is on Facebook and Google you can build an actual graph because content is a thing (a url, video link etc). It will be much harder to I think convert my philosophical musings into active insights.
Even here the idea that it’s as simple as “just sell ads” is utterly laughable and yet it’s literally the mechanism by which most of the internet operates.
They benefit from slowing and attacking OpenAI because there's no clear purpose for these centralized media platforms except as feeds for AI, and even then, social media and independents are higher quality sources and filters. Independents are often making more money doing their own journalism directly than the 9 to 5 office drones the big outlets are running. Print media has been on the decline for almost 3 decades now, and AI is just the latest asteroid impact, so they're desperate to stay relevant and profitable.
They're not dead yet, and they're using lawsuits and backroom deals to insert themselves into the ecosystem wherever they can.
This stuff boils down to heavily biased industry propaganda, subtly propping up their allies, overtly bashing and degrading their opponents. Maybe this will be the decade the old media institutions finally wither up and die. New media already captures more than 90% of the available attention in the market. There will be one last feeding frenzy as they bilk the boomers as hard as possible, but boomers are on their last hurrah, and they'll be the last generation for whom TV ads are meaningfully relevant.
Newspapers, broadcast TV, and radio are dead, long live the media. I, for one, welcome our new AI overlords.
If you account for the current trajectory of model capabilities, and bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical VC medium-to-high-risk investment play.
At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users, they'll also dial in costs and tweak their profit per token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.
If they maximize exposure now and then dial in costs, they could be profitable with no funding shortfalls by 2030, provided they pivot: dial back available free access and aggressively promote paid tiers and product integrations.
This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.
They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.
When an article like this FT piece comes out and makes assumptions of negligence and incompetence and projects the current state of affairs out 5 years in order to paint a negative picture, then I have to take FT and their biases and motivations into account.
The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.
ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.
Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.
FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”
Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.
I think it’s quite obvious we’re in a bubble right now. At some point, those pop.
The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?
I don’t think anybody is arguing it’s Pets.com.
AI can both be a transformative technology and the economics may also not make sense.
This cannot all be about advertising. They are selling a global paradigm shift not a fraction of low conversion rate eyeballs. If they start claiming advertising is a big part of their revenue stream then we will know that AI has reached a dead end.
There is your multi-bn $ revenue stream.
and "200 billion, when your revenue is 12, is the market you are targeting actually big enough to support that"
The team also assumes LLM companies will capture 2 per cent of the digital advertising market in revenue, from slightly more than zero currently.
This seems quite low. Meta has 3.5 billion users and projected ~$200b revenue in 2025. ChatGPT is at ~1 billion so far. By 2030, let's just say ChatGPT reaches 2 billion users, or 57% of Meta's current users.
I'd like to think that OpenAI's digital ad revenue should reach 10% of the digital ad market by 2030 and then accelerate from there. In my opinion, the data that ChatGPT has on a user is better than the inferred user data from Instagram/FB usage. I think ChatGPT can build a better advertisement profile of each user than Meta can, which can lead to better ad targeting. Furthermore, I think ChatGPT can create a truly novel advertisement platform, such as learning about sponsored products directly via chat. I'm already asking ChatGPT about potential products and services every day, like medicine, travel, gadgets, etc. I think people are severely underestimating ChatGPT as a way to make money other than subscriptions. I also think people are underestimating the branding power ChatGPT already has. All my friends have ChatGPT on their phone. None of them except me has the Gemini or Claude app.
This doesn't account for OpenAI's other ambitions such as Sora app.
Hey Sam Altman or OpenAI employee, if you are reading this, I think you should buy the North American version of TikTok if the opportunity presents itself. The future of short videos will be heavily AI generated/assisted. Combine TikTok's audience with your Sora tools and ChatGPT data and you have a true Instagram competitor immediately. If the $14b sale price of US TikTok is real, that's an absolute bargain in the grand scheme of things.
Meta makes about $200B on ads, Google makes about $235B on ads. Advertising is roughly 1.5% of the total GDP of the US and hasn't changed in 20+ years. So what you have is a big ass pie with a few players fighting for it that barely grows every year.
OpenAI has to somehow:
1. Compete directly with Google Gemini and Meta's Llama for a piece of the users pie, with a product that has very little differentiation (functionally and technically speaking).
2. Prove to advertisers that a single ad dollar spent on OpenAI is categorically worth more than on any other channel.
3. Have enough forward capital to keep making capital-intensive hardware purchases.
4. Have enough capital to weather any potential economic headwinds.
I know where my bet is.
We're on Hacker News. Y Combinator literally teaches their companies that they can beat incumbents using focus and speed.
My bet is on OpenAI. When they IPO, I can easily see them with a $1 trillion valuation, raising a record amount of money in an IPO.
If Meta and Google don't see OpenAI and LLMs as an existential threat, they wouldn't invest so much. I think AI has that potential to completely disrupt Google and Meta because it fundamentally changes the way people behave. It's a paradigm shift. It isn't just playing the same game.
ChatGPT has roughly the same MAU as tiktok. I don't see why their ad business wouldn't meet or exceed what tiktok was able to do in less than 5 years.
Reuters reported that ByteDance (TikTok parent) in Q1 2025 had $48b in revenue.[0] They should surpass $200b for 2025 which would make them bigger than Meta.
In other words, Tiktok has already caught up with Instagram in terms of revenue.
[0]https://www.reuters.com/business/finance/tiktok-owner-byteda...
TikTok, or rather ByteDance, acquired Musical.ly as a competitor to absorb the user base and jump start their network. There have also been a lot of short-form video platforms before (e.g., Vine) and during TikTok's growth (Instagram Reels, YT Shorts).
You'll probably argue that this time it's different but no one knows what's different until it's already changed.
With consumers right now? Sure, but so do WhatsApp and IG, both Meta properties. Meta and Google also have WAY better brand power with advertisers. So there's that.
> I can easily see them with $1 trillion in valuation and raise the a record amount of money in an IPO.
They have agreements of roughly $1.5T in infra spend (and that doesn't include their own S&M and R&D spend) for the next 5 years. They have to have a combined amount of cashflow to cover that $1.5T (mix of income, debt financing, and stock financing) + all their other spending. The CFO admitted that they may need to bail out data centers to cover this to stay solvent in the long run.
> Y Combinator literally teaches their companies that they can beat incumbents using focus and speed.
YC is literally not God when it comes to advice, so this point is moot. Meta and Google didn't come out of YC and yet still beat incumbents.
> With consumers right now? Sure, but so do WhatsApp and IG, both Meta properties. Meta and Google also have WAY better brand power with advertisers. So there's that.
With AI. OpenAI/ChatGPT is synonymous with AI. People say "ask ChatGPT" the same way people say "Google it".
> They have agreements of roughly $1.5T in infra spend (and that doesn't include their own S&M and R&D spend) for the next 5 years. They have to have a combined amount of cashflow to cover that $1.5T (mix of income, debt financing, and stock financing) + all their other spending. The CFO admitted that they may need to bail out data centers to cover this to stay solvent in the long run.
I'm sure their $1.5T infrastructure commitments are based on hitting certain goals. Their comment about government support for data centers isn't a call for a bailout and was taken out of context/exaggerated by mass media.
> YC is literally not God when it comes to advice, so this point is moot. Meta and Google didn't come out of YC and yet still beat incumbents.
Yes, but Google also beat the incumbents, Yahoo and AOL. People thought "no way" back in 2000 as well. Heck, Google wanted to sell itself to Yahoo. I think people who say they're only doing this for investors are wrong. I think management at Google and Meta truly think they're f'ed if they don't get AI right.
Meta has WhatsApp, Instagram, and Facebook to account for that.
OpenAI has ChatGPT (not a social platform).
It seems to me you're comparing apples and oranges here.
> OpenAI has ChatGPT (not a social platform).
You didn't state reasons why not being a social platform matters here. Anyways, check this out: https://openai.com/index/group-chats-in-chatgpt/
> It seems to me you're comparing apples and oranges here.
I don't think so. 1 billion users and a clear intention to deliver ads, with an immense amount of data on users. That's a clear threat to both Meta and Google. PS: That's why Meta and Google are all in on AI. OpenAI is an existential threat to both, in my humble opinion.
There's nothing to pointlessly waste your time on. You open it to do a thing, you either do the thing or get frustrated or leave. Social networks are designed to waste your time even when they outlive their usefulness, therefore they can serve you more ads.
You could argue Google is the same as ChatGPT in that regard, but that's why Google has Adsense in almost any search result you click on.
As for your group chats feature argument, anyone can make a social network, that's the easy part. Getting friend groups to switch is the more difficult part.
> PS. That's why Meta and Google are all in on AI. OpenAI is an existential threat to both in my humble opinion.
They're all in on AI because that's what their investors want them to do to "not be left behind". Meta was all in on the Metaverse. And on a cryptocurrency before that (Diem). And on Free Basics before that. The fact that none of those succeeded didn't hurt them at all, precisely because they had an infinite money glitch known as ads.
They can afford to waste amounts of money equivalent to a yearly budget of a small country, ChatGPT can't.
Like Google Search, this does not really matter. Fact is, chatgpt is the 5th most visited site on the planet every month. And it happened in about 3 years. 'Nothing to waste your time on?' Completely irrelevant.
Any idiot off the street can be the most used website on Earth. Easy - go to my website, and I give you free stuff. So why am I not a billionaire? Because that's a dumbass business model and that won't go anywhere.
The idea that if you just "flood the market" you can be successful is a crock of shit, and I think we're all starting to realize it. It's not difficult, or impressive, or laborious to provide something people want. It's difficult to do it in a way that makes money.
You might say - but what about Spotify? What about Uber? Those companies are not successful. They are just barely profitable, after investment on the order of decades. We don't actually know if a service like Spotify even works long term. It sounds fantastic - pay ten bucks or whatever and get all the music you want.
But has anyone taken a step back and asked - hmm - how do we make money off of this? Because obviously that is not the cost of music, right? And we don't own any of the capital, right? And we don't actually make a product, right, we're just a middle man?
ChatGPT is in a similar predicament. The value of ChatGPT is not the ChatGPT, it's what ChatGPT produces. It's a middle man, operating at massive losses, with absolutely no path towards profitability.
Spotify and Uber are aggregators with high marginal costs that they do not control. Spotify has to pay labels for every stream; Uber has to pay drivers for every ride. They cannot scale their way out of those costs because they don't own the underlying asset (the music or the labor).
OpenAI is not a middleman; they own the factory. They are "manufacturing" intelligence. Their primary costs are compute and energy. Unlike human labor (Uber) or IP licensing (Spotify), the cost of compute is on a strong deflationary curve. Inference costs have dropped by orders of magnitude in the last couple of years while model quality has improved, and costs will keep dropping. Gemini's median query costs no more than a Google search. LLM inference is already cheap.
> Any idiot off the street can be the most used website on Earth. Easy - go to my website, and I give you free stuff.
If they were only burning cash to give away a free product, you’d be right. But they are reportedly at ~$4B in annualized revenue. That is not "giving away free stuff" to inflate metrics; that is the fastest-growing SaaS product in history.
You are conflating "burning cash to build infrastructure" (classic aggressive scaling, like early Amazon) with "structurally unprofitable unit economics" (MoviePass).
OpenAI's unit economics are fine. Inference is cheap enough for ads to make it viable as a profitable business today. The costs this article is alluding to? OpenAI doesn't need any of that for the tier of models and use cases they have today. They are trying to build and be able to serve 'AGI', which they project will be orders of magnitude more costly. If they do manage that, then none of those costs will matter. If they don't, then they can just...not do it. 'AGI' is not necessary for OpenAI to be a profitable business.
The network effects matter so much more for a social platform than a chat bot. The switching costs for a user are much lower, so users can move to a different one much easier.
How sticky will chat bots prove to be in the long term? Will OpenAI be able to maintain a lead in the space the way Google did over Bing? It's possible, but it's also pretty easy to imagine other providers being competitive and a landscape where users move between different LLMs more fluidly.
But I think OpenAI is not a slam dunk for ads. Gemini and AI Mode will compete for the same budget, and Google's ad machine is polished.
I think eventually you will buy ads for OpenAI in Google's marketing platforms, just like most people buy Bing ads in Google.
> I'm telling it nearly everything from my work problems to health problems to love life problems to product research, traveling plans, etc.
Your intent is the immediate need: how do I fix this leaky faucet?
Your user profile (love life problems) is generally not useful there.
Google just passively collects email and browsing history, much better data for targeting ads and way less cost to run.
Apart from that, those oranges have ~$100bn a year to spend on R&D and still make a profit, whereas OpenAI doesn't.
So yes, it is apples to oranges, but that's the reality.
Then they will have a social platform that they will continue to use to mine AI training data from + a source of ad revenue.
It's becoming social as well: https://openai.com/index/group-chats-in-chatgpt/
Do they use Google Docs/Sheets? Or even Google Search anywhere? Then they have Gemini integrated in some way.
I will have whatever you're smoking. If a social media platform literally proves the dead internet theory, it's not making any money.
Low effort input. Low effort consumption. What a depressing vision of the future. This is why I don't use social media.
I think advertisers are fairly stupid and maybe don't realize that most eyes on their ads aren't eyes at all, and couldn't buy a hair dryer even if they wanted, because they have no hair. How Facebook is still a desirable advertising platform is beyond me.
1. Quite a lot of companies are not publicly traded, and therefore are not reflected in the stock market. AI companies have an incentive to be publicly traded because it's all venture-capital stuff.
2. Technology in general is always going to be over-weight anyway because these are companies that tend towards "growth" (re-investing profits into future expansion, offering to buy back stocks at a higher price as a means of compensating investors, etc.) rather than "value" (compensating investors by paying out an explicit cash dividend on shares). This tends to push their P/E multiples higher.
3. Publicly traded companies, and thus stocks, generally are valued based on speculation about future cash flows, not according to current holdings.
4. The companies that you have to add up in order to come to a figure like "one third of the stock market", are doing a lot of things outside of AI. People still play video games, and they still do GPU-accelerated data analysis with conventional techniques. People still want their computer to include an operating system, and still use their social media to talk to accounts that they know are operated by people they know in real life.
5. The term "AI" is now used as if it exclusively referred to LLMs, but other AI systems have existed for a long time and have been actually accomplishing real things in the economy.
> The economy is going to collapse
There are a great many people out there who have predicted a hundred or so out of the last seven recessions. You don't know this, and there are many reasons to doubt it.
Suppose some anti-AI deity snaps its fingers tomorrow and every LLM simply spontaneously ceases to function. It's not as if we've lost the knowledge of how to do things without LLMs. It's not as if the things we created without LLMs disappear, or anything else. We at worst, at a very conservative, scare-mongering estimate revert to that level; and things were pretty tolerable at that level. And technologies that are not LLMs have also advanced since the release of ChatGPT.
It could unwind cleanly as long as we don't let it infect the banking system too much, which it has started to do somewhat with more debt-financed deals instead of equity-financed ones, and probably book values making it into bank balance sheets.
Truck driver wages are $180-280 billion annually, and that seems like something that will get replaced and should justify $1 trillion of the spend or more, economically. I think Tesla, for instance, only spends single-digit billions on R&D most years though, so the spending may not be going to where the most immediate solid/lasting economic impacts will come from. I'm not sure what Waymo's spend is.
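As a rough sanity check on that claim, here's a minimal back-of-the-envelope sketch; only the wage-bill range comes from the comment above, while the capture rate, margin, and valuation multiple are made-up assumptions for illustration.

```python
# Back-of-the-envelope check on "truck driver wages justify ~$1T of spend".
# Only the wage-bill range is from the comment; every other parameter is a
# hypothetical assumption chosen for illustration.

wage_bill_low, wage_bill_high = 180e9, 280e9  # annual US truck driver wages (USD)
capture_rate = 0.5       # assumed: share of the wage bill automation vendors convert to revenue
operating_margin = 0.4   # assumed: margin on that revenue
earnings_multiple = 15   # assumed: valuation multiple on those earnings

for wages in (wage_bill_low, wage_bill_high):
    revenue = wages * capture_rate
    earnings = revenue * operating_margin
    implied_value = earnings * earnings_multiple
    print(f"wages ${wages/1e9:.0f}B -> revenue ${revenue/1e9:.0f}B, "
          f"earnings ${earnings/1e9:.0f}B, implied value ${implied_value/1e12:.2f}T")

# Under these (fairly generous) assumptions the implied value is roughly $0.5-0.8T,
# so "$1 trillion of spend or more" only pencils out with higher capture, margin, or multiple.
```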
How long did Apple keep going up following the smartphone revolution?
Let AI diagnose the avg cold, now AI took over the household / local doctor for 90% of cases.
Use AI for support, now AI took over the call-center business.
AI can already code. It might not be perfect, it might not always be good, but NO ONE assumed that in 2025 some matrix multiplication could mimic another human being so well that you can write with it and it will produce working code at all.
That's the hype; that's the market of AI.
And in parallel we get robotics too. Only possible because of this AI thing. Because robotics with ML is so much better than whatever we had before. Now you can talk to a robot, the robot can move, the robot can plan actions. All of these robots use some type of ML in the background. Segment Anything is possible because of AI.
That's the reason why this 'AI crap' is so hyped.
Have you ever talked to AI Support?
It’s easy to name a random industry and assume you’ll automate it with a few clicks. It’s harder to actually do it.
All of that pushes everything forward: LLMs and alternative architectures to LLMs, GenAI for images, sound and video, movement for robotics, image feature detection.
Segment Anything 2 was a breakthrough in image segmentation, for example.
The latest Google Weather model is also a breakthrough.
All progress in robotics is ML driven.
I don't think any investor thinks that OpenAI will achieve AGI with an LLM. It's Data + Compute -> Some Architecture -> AI/ML Model.
Whether it will become the golden model capable of everything, a thousand expert models, or a mixture-of-experts model, we don't know yet.
I'll call out two of them.
Image, video, and other content generation is going to become more important and companies will be spending on that. We've seen some impressive improvements there. IMHO near term a lot of that stuff might start showing up in advertising and news content, the whole media industry is going to be a massive consumer of this stuff. And there's going to be a lot of competition for the really high quality models that can be run at scale. Five years until 2030 is a lot of time for some pretty serious improvements to land.
Another area that the article skips over is agentic tools. Those are showing a lot of promise right now. Agentic coding tools are just the tip of the iceberg here. A lot of these tools are going to be using APIs. So API revenue is a source of revenue. There are applications across the entire IT industry. SAAS, legacy software, productivity tools, etc.
Yes, $207B is a lot of money. And there's no guarantee that OpenAI comes out on top "winning" all these markets of course, and we can argue about how big these markets will be. But OpenAI does have a good starting position and some street credibility here. It's a big bet on revenue and potential. But so is betting against all that and dismissing things. And there's a lot of middle ground here.
They are assuming that people _want_ automated content enough to pay for it.
Like at the moment it's great because it's essentially free, but paying >$200 billion?
Are we assuming that we can automate the equivalent of instagram, netflix/disney and whatsapp and make advertising revenue?
I think the unmentioned pivot is into robotics, which seems to have actual tangible value (ie automation of things that are hard to do now)
but still $200 billion in R&D
With AI, it's different.
This would be why the meat industry “plods along” century after century. For every vegan there are 49 people who eat meat, and probably 35 people who feed their pets meat.
There is actually a huge market for animal meat and it is the vegan industry that plods along.
In fact, what even _is_ the vegan industry? Or do you mean it’s basically marketing hype for companies that combine plants in various ways to sell as food?
Industry is plant based food, certification orgs, vegan charities, consulting, books (both recipes & lifestyle advocacy), content creation, ...
Ordinary people are understandably either exhausted by or angry at AI content, because it is relentless and misleading, and because they know instinctively that the people who want this to succeed also want to destroy employment and concentrate profits in the hands of a few Silicon Valley sociopaths.
If a painting costs $20, and you can generate 1 million paintings, that doesn't mean you just made $20 million. It means paintings no longer cost $20.
Essentially what they are doing is cheaper, more accessible CGI. And in the same way as it is now, you are not going to be able to tell it was used in expensive productions, and you will be able to see it in the cheap productions.
But to your point, if we take all of TV production worldwide, I doubt that's going to add up to $100 billion spent on set extension/characters/de-aging/"fuck it, fix it in post". And that's keeping spend at the same rate (not to mention the recent peak spend on TV).
For OpenAI to make money it has to be an order of magnitude cheaper, quicker and better in quality, and be the leader, so that people spend AI bucks with them rather than having a VFX company tune an open-source/open-weights model.
I'm assuming gen AI will supplement, so instead of ten artists producing content you'll have three orchestrating and correcting, but the output will be indistinguishable from what it was before. Just now the margin is much higher.
I never think of direct to consumer sales in this context.
Also anecdotal, but I do think there's Gen Z anti-AI sentiment. These kids are joining a world where they will never own a house and have bleak prospects, and now even art is something they can't do if AI takes off, so the market might not be there.
Seems like older people love AI art, people over 50, which is a sizeable market, don't get me wrong, but in 10-15 years, idk.
> indistinguishable from what it was before. Just now the margin is much higher.
In both music and TV/film there has been an explosion in productivity. A kid in a bedroom with a mediocre microphone can produce something that sounds 95% as good as something that took a team of ten to do at a big studio. The same with VFX. If you compare "The Golden Compass" from 2006 to the series in ~2020, the quality of the TV show is much higher, at a higher resolution, and it has more VFX shots in it.
In both music and TV, the margins have dropped precipitously.
I just can't see how making it easier will bring up margins.
Which is a huge amount, but if they only capture a piece of it, it might not amount to much compared to their spending.
It's a bit harder to replace Disney at this point, but give it 20 years. I'm bullish on AI slop on par with The Apple Dumpling Gang or Herbie Goes to Monte Carlo within a decade or so. Evidence: The rapid evolution of AI generated video in the past few years extrapolated into the future.
Instead, we got the elevation of the handmade, the verifiably human created, typified by the rise of Etsy. The last 20 years have been a boom time for artists and craftspeople.
I keep seeing AI slop and thinking that all this will do is make verifiably human created content more valuable by comparison, while generative AI content will seem lowbrow and not worth the cost to make it.
But good luck with that remaining 10% AI...
I think the reason 3D printers can't print anything is because most things are mixed media and 3D printers aren't so great at that yet. There are also issues with topology and the structural quality of 3D printed things compared to things made in a mold. And that's not entirely unlike people (who I once would have thought would know better) oversimplifying the engineering and scientific challenges of making AI human-equivalent or better.
It is as if the relatively unspoken feelings about the downsides of technologies as a gateway to art have been rapidly refined to deal with AI (and of course, even the CNC and laser engraver people have common cause).
But I think it is fair to say that if they feel success, there will be a growing pushback against the use of 3D printers, eufyMake resin printing and CNC in a niche where hand tools used to be the norm.
And speaking even as someone who has niche product ideas that will be entirely 3D printed/CNC cut/engraved, I don't really disagree with it. I am mostly not that kind of creative person (putting aside experimental photography techniques) and I see no reason why they shouldn't push back.
The reality is that "craft" fairs are an odd mix of people who spent a lot of time refining their art and selling things that are the work of hours of expressive creativity and effort, and table upon table of glittery resin mould art and vinyl cut stencil output stuck on off-the-shelf products. I think AI might help people refine their feelings about this stuff they once felt bad/incorrect/unkind about excluding.
It's a bit like the way the art photography market is rediscovering things like carbon printing, photosensitive etching and experimental cyanotype, and getting a lot more choosy about inkjet-printed DSLR output.
We already put more importance on handmade goods vs. factory-made, even if the latter is cheaper and better quality. I have my doubts about humanity collectively embracing content generated from prompts by black boxes.
(disclaimer: I don't work in ad and don't know more about it than the next person)
Whether the end product (the ad) is AI-generated or not is almost irrelevant. The whole production chain will likely be AIfied: to produce one ad you need to go through many concepts, gather reference images/videos, make prototypes, iterate on all that, and probably a ton of other things that I don't know about... The final ad is 1 image/video, but there's been dozens/hundreds of other images/videos produced in this process. Whether the final ad is AI-generated or not, AI will almost certainly (for better or worse...) have a major place in the production chain.
And it makes it look/sound cheap.
I think that's the biggest issue they will face. For example, a company that uses avatars and emojis just looks cheap, because it is cheap to do.
Are you going to pay for cheap-looking TV, especially when you know it's shit?
But then the more important thing to remember is that news isn't expensive because of the news readers; it's expensive because it costs a lot to operate a news network. If your news anchors are costing millions, you have a chat show, not a news programme.
Personally, I consider TikTok very different to news networks. TikTok is also primarily vertical video. Are news networks going to do that too?
They wouldn't have off-air scandals, require insurance, pensions, teams of wardrobe and makeup artists, security details; They wouldn't need to travel. And that is just the on-air talent. You can replace thousands of tv studios all over the world with a handful of workstations and compute power.
Just where are you pulling the data from that on-air personalities are too expensive?
"The data that on-air personalities are too expensive?" It doesn't seem to me , for the purposes of this conversation, that identification of a cost center required a quantitative analysis. The cost of human talent is non-zero, presumably large enough to merit scrutiny, and unpredictable; that is sufficient, to my way of thinking. So is the cost of the equipment and infrastructure to capture and transmit video image of that human talent, and the humans who maintain and operate that infrastructure.
We've seen several decades of human cost-reduction initiatives, across multiple industries and fields of endeavor, so I'm taking that as evidence that if there is a cost that can be reduced or removed, someone somewhere is looking at doing so. Everything from assembly-line automation, to switching to email over inter-office memos and mailrooms, to the abandonment of fixed-benefit pensions, to self-service kiosks in fast-food restaurants, demonstrates that costs will be cut where they can be cut.
Tucker Carlson or Wolf Blitzer or Lester Holt might as well be cartoon characters to me. There's practically zero chance I'll ever meet them in person, much less that we'd have any kind of real human connection. What one cares about is whether the overall source is reliable and what kind of information (or disinformation) their orgs are pushing to the people. Having them be actual meatbags is a liability: they'll pop too much Ambien one night and say some pretty terrible things on social media, compared to only ever being a highly curated output of the organization. Unless they pull a Tay.
I guess if 'importance' means sentimentality, however, the "factory-made things" industry certainly makes the vast majority of the money to be made in manufacturing. Handmade is a niche. I think the argument is that whether people prefer it or not, AI-generated content will have a major place in the marketplace for content because of its natural advantages. In other words, a movie heavily generated with AI won't win any Oscars but there is a world where such things would make a lot of money and some of that money would flow to OpenAI.
I have a limited lifespan, I'm not going to use that to consume slop.
One of the candidates for mayor of New York ran a whole AI video ad campaign.
Half of the real estate listings have AI remodeled pictures.
Probably a quarter of the printed ads I see each day are AI generated.
It will only increase, but it's already here.
It's not a Blu-ray vs HD DVD situation (winner takes all and competition dies on the vine) or early Google vs Yahoo Search. I would posit it's more like BitKeeper vs Git, and it will probably evolve into a git/hg/bazaar type situation. Costs will only keep dropping, and the easiest/most cost-effective options will get faster uptake in the backend dev world where it matters. If I'm integrating an LLM and the whole field can kinda do the equivalent, capex/opex will push the decision.
The thing about tech is it has to scale to meet tech valuations, and there’s a reason instagram basically gives content creation tools away for free.
He congratulates Jeff Bezos for New Glenn, and rightly so, but he's absolutely silent about the Chinese robots. One has to admit that they're already ahead of the US. This comes from a German who well recognizes how far behind Germany is in comparison to the US, in regards to any digital technology.
There is no free market competition between "the west" and China; they're two completely separate markets for a reason. You're only allowed to cross that aisle in a limited capacity; as soon as you get big enough to be seen as a legit threat, you're getting cut off. Works both ways: Tesla can sell in China, but they can't beat BYD. Apple can sell iPhones, but only because their market share is not big enough to matter.
They both sell only like 2-3 models at a time, always heavily preferencing one of them in marketing, and then use that to claim they have the "best selling phone in the market". Which is true, but is not the same as having that percentage of the overall market share as a brand.
Nearly every Tesla sale is a Model Y sale, and nearly every Apple sale is an iPhone 17 sale. This does not apply to other brands such as BYD, Geely, Huawei, Oppo, (Samsung, Volkswagen,) where you can walk into a store and pick between about a dozen models targeting different niches.
On top of that, Apple just started selling the iPhone 17, so yes, if you Google it, you find out that they've "reached 25% of the Chinese market", without realising that this is what happens every Q4 simply because that's when they start selling new models. That is not the same as having 25% of the overall market share at all. Overall, both Tesla and Apple are around the fifth most popular brands.
... with said humanoid robots?
While it's true China's manufacturing sector can outproduce the west's right now, ubiquitous autonomous humanoid robots can level the playing field to a meaningful extent.
The LLMs will create new content, but they aren't creating new business channels in the advertising industry. As an example, even once Google achieved search domination they still didn't have this. They had to purchase a lot of things to make that happen, like Urchin, Adscape, DoubleClick, YouTube, and a lot more.
AI collapses the value of IP across the board, because AI trends towards being the only IP, which means that the marketplace will be defined by operational efficiency, ability to build and run systems at massive global scale, access to capital, and government connections, so Microsoft, Amazon, and Google probably stay on top.
Hideous idea as it is, I fully expect they break even in 2026.
I am a bit worried about the feature where it calls the shops for inventory checking. That's the whole point of having a website. Now we are going to have expensive AI that calls other AI answering machines and no value will be added. Meanwhile, it will become even more difficult to talk to a human when necessary.
If they can figure out how to get the right kickbacks/referrals without compromising user trust and really nail the search and aggregation of data this could be a real money-maker.
Lol what a terrible idea. Why not just hand every decision you'll ever make to AI?
Nobody needs critical thinking or anything. Just have AI do it so you save $3 and 4 minutes.
This kind of task is perfect for AI in a way that doesn't take away too much from the human experience. I'll keep my art, but shopping can die off.
Not the current form of AI. I regularly use Project Farm to find the best "insert tool". In an ideal world a robot runs all of these tests in perpetuity covering every physical appliance possible (with every variation, etc.). However, current AI cannot do this. Obviously LLMs can't do this because they don't operate in the physical world.
Maybe I am deeply suboptimal, but typically this kind of decision takes me far more than 4 minutes.
If they had some direct feed of quality product information it could be interesting. But who would trust that to be impartial?
If the answer is "no because that's an ad", well, how do you know that the output from ChatGPT isn't all just products that have bought their rank in the results?
EDIT: Like, have you actually tried this? If you ask it to summarise what Reddit is saying with sources, that’s pretty much exactly what you get.
This is a complete contradiction. Once there's money involved in the recommendation you can no longer trust the recommendation. At a minimum any kind of referral means that there's strong incentive to get you to buy something instead of telling you "there are no good options that meet your criteria". But the logical next step for this kind of system is companies paying money to tilt the recommendation in their favour. Would OpenAI leave that money on the table? I can't imagine they would.
i'm trying to envision a situation in which the former doesn't cancel out the latter but i'm having a pretty hard time doing that. it seems inevitable that these LLM services will just become another way to deliver advertised content to users.
As another commenter points out, "not compromising user trust" seems at odds with "money-maker" in the long-term. Surely Google and other large tech companies have demonstrated that to you at this point? I don't understand why so many people think OpenAI or any of them will be any different?
https://www.theverge.com/news/819431/google-shopping-ai-gemi...
For the IPO itself Amazon sold 3 million shares at $18, for a raise of $54M (from the IPO alone they had enough cash to pay off every investor up to that point). In July 2001, in the heart of the dot-com crash, they raised $100M by selling equity to undisclosed investors, and of course they have been using shares as part of their compensation packages for a very long time, but that's about it for Amazon's entire equity raises.
They did raise $15B in a bond issuance a few weeks ago, their first bonds issued since 2022, with the money going to several things but mostly AI. However, since bond payoffs are very different from selling equity, this is a very different play from what OAI is doing. Amazon will never pay more than a fixed amount for that money: capped upside for the bond-holders.
The reason this is different is that Amazon has largely run either a small profit or a small loss, quarter after quarter, because they take their profits and, instead of recording them, putting them in a bank, or dividending them to shareholders, they put them into building datacenters and warehouses and software and the like. But because of that enormous cash generation they have only rarely tapped outside investors, either in bond or equity markets. OAI is not generating near enough cash to fund their operations, so they have been selling equity in absolutely enormous quantities; they have already raised more cash pre-IPO than any company in history, and outside estimates like this one from HSBC call for them to blow past the amount they've already raised. This is fundamentally very, very different.
If someone else can achieve the same output as OpenAI at a similar price, they are completely toast. There is absolutely nothing tying you to ChatGPT because ChatGPT doesn't matter, only what it produces.
Amazon was in a (similar) situation, but not quite, because they offered a unique experience. But I strongly believe that if Sears just kept their catalogue for another decade, Amazon would not exist.
OpenAI is still losing money much faster than it can make it and is planning to accelerate those losses indefinitely.
I think the most likely outcome is that it turns into something like Uber, where they continue to lose money waiting for a major technological leap (truly unassisted and reliable AI in this case, fully self-driving cars in the case of Uber) and then pivot a bit to a largely unnecessary and poorly executed business model that people reluctantly use for the most part (with some eager advocates) but makes some money.
OpenAI has multiple competitors, who all build their LLMs for less money.
*If AI is just SaaS/online ads in a different form.
It is not hard to imagine Google being able to outlast OpenAI for a decade and it is hard to imagine OpenAI being able to survive for more than another couple of years given that.
Had a look and:
>In total, OpenAI aims to invest approximately $1.4 trillion in computing infrastructure – encompassing Google Cloud, Nvidia chips, and data center expansions.
Huh yeah fair. That's more than the yearly defense budget. Absurd. Though I'm sure it's not _yearly_
- [1] https://en.wikipedia.org/wiki/Military_budget_of_the_United_...
Seems crazy by most software standards, but when Bloomberg became a software-only product (they stopped selling physical terminals), people were shocked that they paid almost nothing for Excel but so much for the second tool they needed as traders.
Yet it still is priced so high and people pay.
A robot that cooks, cleans, and talks to you.
Many won't be able to afford it, so they'll maybe rent one by the day or for X number of hours.
1. Is it worth $20k to anyone? Well, depends on the advantage, but maybe yes. People are dropping $200-1,000 a month already as ordinary devs.
2. Is there competition? Yes, lots. To get to $20k, one model provider needs a real killer edge that no one else can keep up with. Or alternatively constraints push prices up (like memory!), but then that is not margin anymore.
One interesting thing I heard someone say about LLMs is this could be a people's innovation, basically something so low margin it actually provides more value to "the people" than to billionaires.
Just random speculation though.
it took longer to generate a page of content or get a complete answer than a free LLM takes on an i5 CPU.
(ex: llamaGPTJ for linux CLI)
It's missing live market data and news, but incorporating that with something akin to OpenRouter would be trivial.
The trend is going the opposite way, intelligence too cheap to meter according to @sama.
"OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money"
Full title: (95 chars)
"OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money, HSBC estimates"
There are adjacencies in white collar work, like financial analysis, that they will go after. All of these will capture high-ARPU usage.
Consumer is not their only path to revenue but it is probably the easiest to model. The enterprise play to automate and accelerate some white collar workers is a clear target not reflected here.
Seriously though, a massive recession is on the horizon; no one is going to be spending squat.
I think beyond the number of crazy assumptions (no Google taking market share in the consumer market?? only 2% of digital advertising expected to be captured by OpenAI?) it is hard to nail down which levers could move which might make this funding hole disappear.
It potentially looks like Google and OpenAI will take this new market.
An LLM, in addition to being unable to ever obtain true AGI because of the linear and singular representation of concepts and data, cannot combine multiple schemas or metadata from multiple contexts with its own training and reinforcement data.
That means it cannot truly remember and correct its mistakes. A mistake is more than the observation and correction, it is applying global changes to both your metadata and schema of the event and surrounding data.
LLMs as an AI solution is a dead end.
1. AI becomes better, causes more fear, public uproar, arms race between China/US
2. AI becomes a government project, big labs merge, major push into AI for manipulation, logistics, weapons development/control
3. ????
4. Utopia or destruction
If ChatGPT is delivering that, they should have no problem raising money.
And with nearly every level of hardware getting some kind of AI acceleration, a lot of the uses might even be local.
1. Make AGI
2. ???
3. Infinite profit!
If OpenAI goes bankrupt, what happens? People won’t be able to write their precious slop oh no and serious professionals will just switch to any other LLM provider
If they capture it fully, that would only mean that OpenAI gets a portion of the engineering spend (because it is supposed to save on engineering salaries), which is a portion of the total spend, which is almost always less than the total revenue.
Now estimate the total revenue of all software companies combined in the world, look at their engineering spend, and a fraction of that is what OpenAI can have in the best-case scenario.
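To make that chain of fractions concrete, here is a small illustrative sketch; every number in it is an assumption picked for the example, not a sourced figure.

```python
# Illustrative chain: software revenue -> engineering spend -> savings -> vendor capture.
# All figures are assumptions for the sake of the example.

global_software_revenue = 1.5e12  # assumed: worldwide software industry revenue per year (USD)
engineering_share = 0.30          # assumed: fraction of revenue spent on engineering
savings_fraction = 0.20           # assumed: fraction of engineering spend the tools actually save
vendor_capture = 0.50             # assumed: share of those savings the tool vendor can charge for

addressable = (global_software_revenue * engineering_share
               * savings_fraction * vendor_capture)
print(f"Best-case annual revenue pool: ${addressable/1e9:.0f}B")  # ~$45B/yr with these numbers
```

Change the assumptions however you like; the point is that each step multiplies in another fraction, so the pool shrinks quickly.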
So... Bubble.
Sure OpenAI may well be bleeding money into the 2030s, or may even go bust completely depending on how pessimistic you are, but the analysis completely skips:
- They are building their own data centers, and will be less reliant on renting compute from Microsoft and Amazon over time.
- Once the AI bubble has subsided costs for GPU purchases and rentals will decrease significantly. Plus there will be more advancements and competition in the space (e.g. Google TPUs) and Nvidia will no longer be able to name their own price.
- We will write more efficient software for training and inference.
- Once user growth is tapped out OpenAI will no longer need to have the overly generous free tier that they do today. And if they decide to turn up the advertising faucet these users could bring in a ton more revenue than in the projection. Thinking that every AI company combined will capture only 2% of the total digital advertising market is ridiculous. AI apps are already challenging social media for scrolling time.
Basically, the entire space is evolving so rapidly that it's pointless to make a projection with the assumption that the landscape isn't going to change from here on.
this is super overblown. what their executive said was that eventually the scale of compute required is so large, that it requires not only investing in new DCs, but new fabs, power plants, etc, which can only happen if there is implicit government support to guarantee 10+ year investment horizons required for the lower level of capital investment. that is not controversial at all and has nothing to do with OpenAI specifically being too big to fail.
We're barely scratching the surface of the utility of LLMs with today's models. They aren't more pervasive because of their costs today, but what happens if they drop another order of magnitude with the current capabilities?
What does that even mean?
And, because AI is currently what prevents the US economy from being in a recession (at least that's what some people speculate), the US economy will stumble, which means that everyone else will too.
If OpenAI crashes, for example funding stops, they go broke, fall behind, nobody buys anything, then all the money they invested for data centers or demand they created for NVIDIA chips and compute collapses. That creates surplus of hardware, causes lots of construction/buildout / stockup orders to get cancelled, and the whole thing ripples as suppliers and construction and data center providers etc etc suddenly lose a ton of anticipated profits.
Share prices drop as people dump to protect their portfolios, anticipating dips in the prices because share prices will drop as people dump to protect their portfolios (I'm not kidding).
Given that the big 7 AI companies are basically _all_ of the market growth lately, it doesn't even take a serious panic / paranoia episode to see the market itself stagnate or significantly regress, as people pull from anything AI related, and then pull from the market itself anticipating the market will fall.
It's a fairly standard playbook at this point.
But what you're describing is about keeping the AI bubble from popping. Can a bubble really be too big to fail?
Everything’s for sale, and they will amplify whatever == revenue.
So, they need to "just" get 20% of the market from Google to break even...
it's pretty sobering to think that the so-called harbingers of SkyNet AGI have to fall back to mafia-era revenue streams like vice to convince shareholders that their money wasn't wasted.
I hope the porn and gambling aren't turned on, but they will be. Probably under spin off companies to shield the brand, but using the tech.
two out of three of those require me to want to do them for them to affect my life.
Living in AImaginationland must be nice. C’mon kids, let’s all sing the AImagination song: AImagination, AImagination, AIiiimagination, …
> People crying about the revenue gap constantly forget that OpenAI still hasn't turned on the ads, porn, and gambling
But quite the opposite, HSBC assumed that they will have a virtual global monopoly on AI, and even under those projections they will still need to take on hundreds of billions of debt. I'm sure if they get there getting access to that debt will be easier than I'm assuming currently.
This is truly the most stupid timeline.
We already have those at home without OpenAI.
Also, the competition would be ruthless.
But yeah, maybe they should give up because of the imaginary obstacles brought up here, lol.
>potential new entrants into the space on the AI front
Are these "potential new entrants into the space on the AI front" in the room with us right now?
By selling a dollar for 90 cents. The trick is what to do when the venture capital runs out.
There's already so much AI porn right now. There's no reason to believe that OpenAI will be any better or able to command higher margins than their competitors. They also have the problem of being under a ton of public and regulatory scrutiny, so the highest-paying clientele, the people with niche unsavory fetishes, won't be able to be serviced by them.
And that's ignoring the fact that getting into this space at all is pure HN speculation and unlikely to ever happen anyway.
Ads & referrals are already in the works, and people are generally tolerant of those. But, as with any company, appearances matter. ChatGPT will definitely lose users at the slightest possibility of having non-sanitized content served to more morally sensible groups.
They are spending more money than they are bringing in. This means they are losing money.
I have been talking to AI a lot about what portfolios will survive that crash. :)
Gold, treasuries, small cap value.
https://portfoliocharts.com/2021/12/16/three-secret-ingredie...
I think I'll add in some Bitcoin to round it out. Bitcoin and gold seem to take turns going now and be inversely correlated often.
https://www.theblock.co/data/crypto-markets/prices/btc-pears...
Compare with the dot-com bubble: most dot-coms died, and the ones that survived are Google, Amazon, Tencent, etc.
Right now I use a Chinese vibe code plan, really good value.
OpenAI has many plausible monetization avenues. (Whether it will execute on them -- a fair question. But acting like they just don't exist is low IQ.)
Take ONE example: online shopping.
Online shopping is $6-7 trillion per year, growing 7% annually.
Suppose ChatGPT captures 10% of that with a 5% fee. (Marketplaces like Amazon and Walmart charge 15%. eBay is 5%.) That's $200B over five years.
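Checking that arithmetic with the numbers already given (the only extra assumption is taking the midpoint of the $6-7T figure and compounding the stated 7% growth over five years):

```python
# Sanity check of the shopping-commission estimate above.

gmv = 6.5e12    # midpoint of the $6-7T/year online shopping figure
growth = 0.07   # 7% annual growth (from the comment)
capture = 0.10  # ChatGPT handles 10% of purchases (from the comment)
fee = 0.05      # 5% take rate (from the comment)

total = sum(gmv * (1 + growth) ** year * capture * fee for year in range(5))
print(f"Five-year fee revenue: ${total/1e9:.0f}B")  # ~$187B, in line with the ~$200B quoted
```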
Meanwhile ungodly amounts of money are being used so some boomer can generate an AI video of a baby riding a puppy.
If their business isn't sustainable they should go bankrupt, and close shop.
The CEOs making big calls across the economy have already negotiated golden parachutes in the event of their failure.
The financiers and lawyers getting a chunk of each bond deal they close have every incentive to raise more than what's actually needed. Investment funds flush with ZIRP dollars have every incentive to plow it back into investments to show that "the money is at work".
There's also a hidden opportunity cost in regards to hype cycles. So much energy, attention and money flows into the hype, while other businesses or entire sectors get overlooked and underappreciated.
In terms of business, it is not sustainable:
https://www.youtube.com/watch?v=t-8TDOFqkQA
The hype-cycle is nothing new =3
"Memoirs of extraordinary popular delusions and the madness of crowds" (Charles Mackay, 1852)
In unconstrained capitalism, a small group of powerful individuals use corporate control of government for their personal benefit.
Clearly, these outcomes are completely different.
Their stock prices might go down but they're not going down.
What is this saying? Is this sarcastic?
I don't know anybody with a Microsoft 365 subscription.
I suppose the cloud storage is nice, but you can do much better; Google gives you double that for the same price ($99/yr).
Nuh uh. I get 6TB of OneDrive for $100/yr*. Granted, it's across 6 accounts.
For $100, I get 2TB of Google One.
Edit: OK, so they upped it to $130/yr.
I don't either, but the lesson to be learned is that you live in a bubble. Microsoft makes many millions of dollars a year off of those subscriptions. Just because the people in your bubble don't have one doesn't mean that those people don't exist.
OK no that was a lie. I know one person that has a Microsoft 365 subscription. I hate it when he sends me word or outlook document links or whatever the hell. Just use Google docs, dude.