I think this is a disconnect between people who think that large companies are static entities with established products vs. large companies that still operate like a startup and are trying to grow. When you're building your business from $0 in revenue, you don't know what will work! You try different things, you [launch over and over again](https://www.ycombinator.com/library/6i-how-to-launch-again-a...)...all in hopes of something that works, sticks, and starts to grow.
In every example here, I see OpenAI trying something new, hoping it will grow, and shutting it down when it doesn't. Sora is the pre-eminent example of this. Those shutdowns make news, but nobody talks about the launches that successfully grow!
OpenAI isn't shutting down Codex or ChatGPT, because those launches actually worked! When you look at the tweets and communication from OpenAI employees when ChatGPT launched, nobody was sure it would work. But it did. And if they hadn't launched, we would never have known how valuable it was.
All that is to say...you don't know what will work until you launch. Most things fail, and it's correct to shut them down. Focusing on the products that haven't worked instead of the ones that have may get more clicks, but it actually depresses innovation by making future launches less likely.
We get it. They say that stuff to raise money, make sales and keep the party going. But don't expect too much sympathy when the strategy falters a bit.
OpenAI also burned a lot of goodwill by pretending to be a nonprofit foundation focused on the betterment of mankind and then executing one of the most spectacular rugpulls in modern history. So yeah, people will be giving them a hard time even if it turns out that the valuation is justified.
Why is this on the list? Like... what? How about including GPT 3.5 and GPT 2 here too?
Nothing similar happened when the earlier, presumably worse versions were discontinued.
He seems to be taking almost a "venture studio" approach by throwing shit at the wall. The problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and who are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup rather than work on a quasi-startup inside a large org, so there's some selection bias as well.
If anything, 4.5 being abandoned so they could sell India a $3-a-month subscription was the first crack in The Box.
For a brief moment I regretted spending any of my life on anything but ML research. But I guess the bigger they come…
The AI industry increasingly looks to be in scramble mode, trying to keep the hype going as the storm clouds of financial and business reality get darker and darker on the horizon.
The thing that isn't normal is the degree of experimentation relative to company valuation. Normally, once a company reaches a $700B+ valuation, they've figured out their product and monetization strategy. ChatGPT is clearly still iterating heavily on that - not normal for a company that size.
The Apple II went on sale on June 10th, 1977. VisiCalc went on sale October 17th, 1979 - 860 days separate the two. ChatGPT was opened to the public on November 30th, 2022, which was 1219 days ago - roughly 40% more time has elapsed than between the Apple II and VisiCalc.
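The day counts above can be sanity-checked with a quick date calculation (a sketch; the exact figures shift by a day depending on inclusive vs. exclusive counting):

```python
from datetime import date

# Apple II launch to VisiCalc launch
apple_ii = date(1977, 6, 10)
visicalc = date(1979, 10, 17)
gap = visicalc - apple_ii
print(gap.days)  # 859 days (~860 counted inclusively)

# Projecting the same gap forward from ChatGPT's public launch
# gives the "VisiCalc moment" equivalent date:
chatgpt = date(2022, 11, 30)
print(chatgpt + gap)  # 2025-04-07
```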
If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best result I can extrapolate from what I've seen so far), then it's really hard to imagine how OpenAI can return a profit to their investors.
1: https://en.wikipedia.org/wiki/VisiCalc#Killer_app is pretty much the normal narrative on Visicalc and its importance to the Personal Computer.
If we take this at face value, and say the absolute best-case scenario is that there are literally no other uses for AI but helping programmers program faster: given 4.4 million software devs with an average cost to the company of $200,000 (working off the US here; including benefits/levels/whatever, that should be close), a 20% productivity gain across those 4.4 million devs would save roughly 176 billion dollars a year.
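The back-of-the-envelope math works out; all inputs here are the comment's own assumptions, not measured data:

```python
devs = 4_400_000          # assumed count of US software developers
cost_per_dev = 200_000    # assumed fully-loaded annual cost, USD
productivity_gain = 0.20  # assumed 20% uplift

annual_savings = devs * cost_per_dev * productivity_gain
print(f"${annual_savings / 1e9:.0f}B per year")  # $176B per year
```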
Some companies will cut jobs, some will expand features, but that's the gist. And it's hard not to see the magnitude of improvement that's come in just 3 years, though whether that leads to a 'moat' is yet to be seen.
Thinking... Thinking... Tim Berners-Lee proposing HTTP in 1989 is kinda like the original Attention is All You Need paper, I guess? Netscape 1.0 release in December 1994 is ChatGPT 1.0? And then Amazon.com opened up to the public in July 1995 and then IPO'd in May 1997 (after raising less than 10 million dollars in two funding rounds). But once again we have the business side of these previous cycles moving much faster than this one.
This is really the first meteoric rise in tech I've seen / am experiencing first hand.
Amazon is perhaps a counter-example to your point, though, to be fair. It seems to me they did a lot of spaghetti throwing while making accounting losses for a good number of years. Granted, they did it on OpenAI's dining budget.
AI is so many orders of magnitude more complex that the comparison is not really useful.
It is entirely plausible to me that there are great technologies that are impossible to reach via the normal means of VC/investor-financed capitalism. I have certainly encountered market failures requiring extremely patient money (usually in the form of government subsidies) to produce a useful product that eventually does have market value. That has worked many times in the past. But so far generative AI has not had that, and looking at my non-technology friends, I very much doubt there would be much support among them for government subsidies of AI companies. AI companies have made too many people unhappy, and served as too much of a punching bag, to be in a good political position for that.
More and more companies will start operating on the correct reward/risk curve or else get crushed by firms that do. OpenAI has forced Google, Apple, and Meta out of their comfort zones because they know OpenAI will eat their lunch.
I suppose Meta's recent comfort zone was simply a stupid bet on VR, so sure, maybe one part of the comment isn't confusing.
I don't understand what you think you're seeing.
However all of the major privately held AI players are struggling to paint a business and financial picture that doesn’t look “terrible” at best and “verge of market moving implosion” at worst.
For now the only thing keeping this all alive is more and more irrational cash being thrown on the pile in the faint hope that something stops the implosion from happening.
Correct. As compared to other AI companies. Tangible product, specific market segment and stable user base.
But whether it is worth a trillion dollars (like some of the peers are pretending to be) is yet to be seen. A lot of companies are using Anthropic products, but whether the spend is worth it is also yet to be seen. A more realistic end state for Anthropic would be having enterprise customers with limited but steady spend, once Anthropic finally has to stop subsidizing tokens, and a valuation of around $200-350B.
But between their token curtailment and time of day restrictions, and some of the clues in the code leak (regex for sentiment, telling the public client to be "brief") it seems like they are facing some capacity issues.
I'm guessing that the accountants at all the AI incumbents drink heavily.
There's a lot more money in being Google -> consumer ads, or Amazon -> consumer ads, or Meta -> consumer ads, than there is in being Anthropic -> enterprise.
Just take a look at enterprise software: Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
And of course everybody knows the Google & Meta ad monsters.
The only question remaining is who is going to extract all those LLM ad dollars, and how that will break out. Right now it's Gemini and GPT in the obvious lead, with Anthropic in third, and Meta & Grok nowhere to be found (a permanent situation for those).
This seems like ... not the situation we are in. LLMs are great for coding now but their text generation capabilities aren't exactly capturing the masses or replacing their jobs yet. People are already tired of the deluge of fake content on the internet, it's not going to drive a second revolution in web ads.
The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
What's interesting to me as well: as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced across a few companies, so it's not entirely clear that B2B can make them profitable yet either.
If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
It’s not like “best” has won any other b2b arms race in the past.
Gemini is the best deal too. For $20 you get multiple quotas per day across the products (web, CLI, Antigravity, AI Studio), 2 TB of cloud storage, and you can family-share the plan.
Further they have their own TPUs, datacenters, etc on which to run their models.
Plus existing data they've squirreled away over the preceding 30 years from books, web, etc.
Just seems like a lot of efficiencies if it's going to come down to cost.
And in that reality one can’t just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn’t returning value. So “token limits” and cost controls on B2B are entirely expected here.
I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.
I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end of a SWE's role is only improved by some much smaller Y%.
If a CTO sets $10k/year spend limits on $500k SWEs.. they must not believe any of the hype.
Expert systems were amazing. They were not cost effective.
There might be another bitter lesson to be had here, and unless the accountants start talking we're not gonna know any time soon.
Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.
Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.
We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.
The latest "Thinking" version gets it reliably right, but spent about 3 minutes coming up with an answer that 10 seconds of googling provides.
So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.
And what do you think this'll do for future LLM models that need to train on new content if web page traffic collapses?
I think Google has several AI products with search features?
Which one in your experience "seems correct"?
I'm fascinated because I've never found any LLM to be particularly error free at search.
Google could do it in 2000 because their search was legitimately so much better, and also because their ads were comparatively more relevant and unobtrusive than modern ads. In comparison, LLMs are relatively similar in performance unless you're picky enough that you're probably already paying and thus wouldn't be in the ad-supported tier.
That said, I wonder if ads are even lucrative enough to move the needle relative to how much training costs are increasing with each generation.
The first AI to insert blatant ads will be dumped for some other model overnight. Look at the Copilot "backlash" over their "product announcements".
And yet every attempt to extract even minimal ad revenue has been canned to date as something nobody wants with AI providers retreating in failure.
I don’t doubt that there’s “some” ad revenue to be had but there’s little evidence that ads are going to save the day here.
GoTo.com -> Google -> $$$
The rules are simple: if you have Xbn or XXXm users on your system, you will make big bank in ads eventually.
Basically all their revenue is ad revenue, and it's not too bad.
The masses will have no say in the matter. Just as they had no say in the matter with Google's ads getting ever more intrusive, or cable prices previously, or streaming prices going perpetually higher in the present, or YouTube ads, or anything else. Consumers will have no say in the matter, they'll take it and that's that.
With only three relevant competitors (maybe Mistral in Europe), there will be nowhere to flee the deployment of ads.
You could say the same about AWS, and then it proves the B2B case instead of the ad case as well.
Google's ad business remains far larger and more profitable than AWS. And the advertising segment is drastically larger than the segment AWS is in. Just Google + Meta = nearing $600 billion in ad sales. Amazon will soon have their own $100 billion in ad sales.
At some point someone needs to add value to the real economy, not just take an ad tax off the top.
Billions in projected revenue is nothing but hype/cope. Google and Meta got their edge because their product was offered for "free" to the masses.
If they want to out-ad those companies to the tune of billions, I'll go with the least annoying. OpenAI hasn't earned any loyalty.
I'm just a user, and in my experience Claude has been consistently crap compared to ChatGPT/Codex.
I use both side-by-side, and have paid for a ChatGPT subscription every month for around 1 year, but only 2 months for Claude; once last year, and again since last month.
Everything from the sign up, the sign in, the payment, the UI, the UX, gosh, just sucks on Claude.
And the AI itself: SO. MUCH. "OoPs you're right! I was mistaken" BACKTRACKING! It's downright DANGEROUS to listen to it! God I can post screenshots of working on the same project and the same prompts with both agents and prove how worse Claude is.
Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
Sure, it couldn't possibly be that others have had a different experience. It couldn't even be that some people think OpenAI is nearly as gross as Palantir. It's that they're shills.
High-end analysis.
OpenAI has stagnated technologically, and is a financial zombie, but that's not true for every part of the industry. Once these early movers flame out, there will be more stability with Google, Microsoft, and AWS.
Welcome to dot com 2.0
the silicon valley shuffle, tried & true
Forbes "sites" are just the bastard love child of LinkedIn, Medium, and Substack, and should be treated with the respect that deserves.
Company "experiments" are typically hush-hush, not blasted on every corporate media channel as a means to boost your company's holdings.
For some reason, he does not look like a man whom I would trust with my money, but it appears that there are enough rich investors who disagree.
I mean, even Andreessen Horowitz was taking NFTs seriously, as though they weren't a scam, only a few years ago (https://a16z.com/the-nft-starter-pack-tools-for-anyone-to-an...).
These people are also looking at (and funding) quantum computing companies as though quantum computing is right around the corner after AGI.
They need to cool their jets. AI is certainly a worthwhile and super important development, but it's still possible to go overboard with it.
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make America anti-corporate-weasel again.
To be very clear, I think it's completely stupid.
One of the challenges here is that a lot of folks simply weren’t around then and haven’t seen what happens when everything implodes overnight. Those that have experienced it know what that looks like and know it will happen again.
Whereas this is a very weird bubble: it creates big pumps in some equity prices, but apart from a tiny number of people directly involved in AI research etc., it hasn't created any jobs; in fact, by creating uncertainty it has probably caused fewer jobs to be created.
What that means for labour market dynamics when it pops, I really don't know.
In fact "pure" bubbles where the focus item is of literally no value (tulips, NFTs) are quite rare. Much more common are the bubbles based on an actual real transformative innovation (canals, railways, radio, internet, LLMs).
Railways did absolutely transform how travel worked in the UK, while simultaneously almost everyone who invested in them lost their shirts
Now imagine an entire economy working like that. Say LLMs are good enough to run entire companies, but you don't get to run a company because you are good at it. LLMs can perfectly manage employee schedules, but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries, since those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc. But that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search engine placements. Altman is really the perfect guy for the job no one wants. ha-ha
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.