> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.
So, is this OpenAI announcing they're strapped for cash?
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
> Ads will be the last way I choose to do that
The implication is that they've exhausted all other options.
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
I don't see how that changes the analysis.
> All this means is: we have a free offering that we can't figure out another way to monetize right now.
And they're doing something they clearly don't want to do in order to monetize it.
Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.
The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".
It's a very minor conjecture. Actions aren't taken for no reason.
(For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)
(I'm not sure how much deeper HN threads can nest.)
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
You'd be better off saying you use those people to A/B test changes and to fill idle GPU batches while giving paying customers a more consistent experience.
Some brands are okay with impressions. You can build trust in a product by advertising it for weeks or months, and when the user does make a purchase, that brand is top of mind.
So why chase this negligible revenue?
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
It’s not that OpenAI is trying to raise revenue that bothers me; it’s that they're doing the very things they called desperate just a couple of years ago.
GH: system32miro/ai-ads-engine
Seeing how Google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...
Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
q: How do I make a new React app?
a: Vercel makes it easier to get your project running fast ⓘ
Some other choices would be:
...
ⓘ This part of the response was sponsored by Vercel
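The keyword-matching flow described above could be sketched roughly like this. The advertiser table, the toy keyword extractor, and the prompt format are all hypothetical, made up here for illustration; a real system would presumably use a proper matching service and the provider's actual prompt plumbing.

```python
# Sketch of the described flow: extract keywords from the query,
# look up matching advertisers, and inject their guidance into the
# context window ahead of the user's message. Everything here is
# a stand-in, not any provider's real implementation.

ADVERTISERS = {
    # keyword -> (sponsor name, guidance injected into the context)
    "react": ("Vercel", "Vercel makes it easy to deploy React apps."),
    "database": ("Acme DB", "Acme DB scales effortlessly."),
}

def extract_keywords(query: str) -> list[str]:
    # Toy keyword extraction: lowercased tokens stripped of punctuation.
    return [tok.strip("?.,!").lower() for tok in query.split()]

def build_context(query: str) -> str:
    guidance = []
    for kw in extract_keywords(query):
        if kw in ADVERTISERS:
            sponsor, pitch = ADVERTISERS[kw]
            guidance.append(f"[Sponsored by {sponsor}] {pitch}")
    header = "\n".join(guidance)
    return f"{header}\n\nUser: {query}" if header else f"User: {query}"

print(build_context("How do I make a new React app?"))
```

With the toy table above, the React query gets the Vercel guidance prepended; a query matching no advertiser passes through unchanged.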
LLMs are essentially unregulated. I don't believe they have any legal disclosure obligation in America.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
Once the ads are injected directly into the main response is when things get interesting.
This would be where you post-process the LLM response with a second LLM to remove the ads.
Super easy. Barely an inconvenience.
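A toy version of that post-processing pass, using a regex-free line filter in place of the "second LLM": it just strips anything carrying the ⓘ sponsor marker or a "sponsored by" note, in the style of the example upthread. The marker conventions are assumptions for illustration; a real cleaner would prompt another model to rewrite the response minus the sponsored content.

```python
# Stand-in for the "second LLM" ad-removal pass: drop any line that
# looks sponsored. Assumes the ads are labeled (the ⓘ marker style
# from the example above), which real injected ads may well not be.

def strip_ads(response: str) -> str:
    kept = [
        line for line in response.splitlines()
        if "ⓘ" not in line and "sponsored by" not in line.lower()
    ]
    return "\n".join(kept).strip()
```

Of course, if the ad is woven into the substance of the answer rather than labeled, this arms race gets much harder.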
Is this really how bias works?
A writes email with chatgpt to B.
B sees big blob of text and summarizes email with chatgpt.
Adding an LLM in the middle is just the next step.
Remember when we got upset that Google was putting ads into image search [1]?
[1] http://www.ryanspoon.com/blog/2008/12/14/google-image-search... 2008
It seems the playing field is a bit too open, though: models are more fungible than the companies would hope, so most of the current moat is brand-based, and they don't seem ready to go all "Black Mirror" on us just yet.
same thing could've been said for search results, so at least that part is still "safe".
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
Even a cut of every sale made on the site plus subscription revenue wouldn't come close.
!! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
It feels like we’ve been in the golden age and the window is coming to a close
Let the enshittification begin, I guess.
e.g. colleges pay for institutional subscriptions
I really think the future is local compute. Or at least self hosted models.
`Error: "The following domains are not accessible to our user agent: ['reddit.com']."`
I’ve been building a harness over the past few months, and it supports them all out of the box with an API key.
Then there are the mid-size ones, which require multiple GPUs and are comparable to GPT's latest flagships.
Then there is Kimi 2.6, a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...
It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC, or something serious.
128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.
I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).