Assertion assertion assertion wishful thinking assertion.
Show, don't tell. Show us that we're wrong and this isn't a VC black hole. The CEO of Enron as late as September 2001 could've called every critic a sad dark loser with nobody challenging him publicly. Jim Cramer famously yelled that anyone pulling their money from Bear Stearns in 2008 was being "silly, do not be silly" exactly 8 days before its collapse and a -92% stock drop. And of course with COVID, calling everyone paranoid and sensationalist about some mythical new flu was popular in December 2019 and gone by March 2020.
Based on what? The RAM requirements alone are extraordinary.
No, running large models on shared, dedicated hosted hardware at full utilization is going to be vastly more cost-efficient for the foreseeable future.
I must say that the largest dedicated hosted hardware providers now, like Amazon or Google, to a large extent do not produce the software they are offering as a hosted solution (like Linux, Postgres, Redis, Python, Node, etc). Similarly I'm not sure if the producers of the frontier models are going to keep their lead as the service providers for the most widely used models. They would need to have quite a bit of an edge above open-weights models.
Also, models are given very sensitive data to process. For large organizations, the shared dedicated hardware may look like a few (dozens of) racks in a datacenter, rented by a particular company and not shared with any other tenants.
I wish this was true but it is not. And I am working on open source models so if anything, I would have a bias towards agreeing with you.
Frontier closed models (GPT/Claude) are pulling away from everybody else. Even Google, once the king.
Your claim is a meme coming from benchmark results, and sadly a lot of models are benchmaxxed: Llama 4, and most notably the Grok 3 drama with a lot of layoffs. And Chinese big tech... well, they have some cultural issues.
"Qwen's base models live in a very exam-heavy basin - distinct from other base models like llama/gemma. Shown below are the embeddings from randomly sampled rollouts from ambiguous initial words like "The" and "A":"
https://xcancel.com/N8Programs/status/2044408755790508113
---
But thank god at least we have DeepSeek. They keep releasing good models in spite of being so seriously resource constrained, punching well above their weight. But they are not just 6 months behind, either.
I've got a 128GB Strix Halo staying warm at home; it has nothing on the top models with big budgets. It's a good supplement to low-end plans for offloading grunt work / initial triage.
Thanks for the suggestion tho, a tool by antirez is always going to pique interest; I'll check it out when I'm finally home again.
Tho it says Metal / CUDA, so it doesn't seem friendly to a Linux AMD system.
They’re still pricey, the world is still scaling up memory production, and a lot of code isn’t yet built for AMD, but we went from the Wright brothers’ first airplane to jet engines in 27 years.
I’m not sure “it’s only a few years away” but we are sure moving there fast.
Nitpick: more like 36 years, from Wright Flyer in 1903 to Heinkel 178 in 1939. Still quite impressive.
Cynically: it’s become an executive-level GPU measuring contest. If you’re not making huge commitments on data centers, you can’t be a serious player.
Realistically: It’s a mix of the two. The recent Claude caps for agentic usage suggest that demand exceeded their immediate compute supply. That they can alleviate it with additional capacity from the existing and small-ish xAI facility suggests that either demand may not be rising quite as fast as anticipated, that they’re okay in the short term until more capacity comes online, or a mix of both.
Open questions:
1. At what price point does demand fall, and are the frontier providers overall profitable before that price point?
2. At what price/performance point do on-prem local models make more sense than cloud models?
The print shop can’t replicate the practicality of local printing and I can’t replicate their scale of investment. Both coexist perfectly.
That is only true right now because hundreds of billions of dollars are being burned by these AI companies to try to win market share. If you paid what it actually cost, your comment would likely be very different.
They have to keep getting better to stay ahead of each other and open weight.
Which means it's the opposite of a timebomb, the article has it completely backwards, tokens at current level of reasoning will continue to get cheaper.
I'm not sure 'local' will be the end state, as hardware needs are high. But certainly competitive forces tend to push profit margins toward zero.
Extended discussion on this topic:
https://corecursive.com/the-pre-training-wall-and-the-treadm...
I seriously doubt it. Scaling is already strained (don't buy into the "exponential" hype). And, in any case, the competition will be against the frontier models that will exist in two years.
The big question I'd be asking if I were investing in one of the big players is whether those changes are "it can do 99% instead of 97% of the tasks a user will throw at it" (at which point going local and taking back cost control/ownership makes a lot of sense, especially for companies) OR "it will fully replace a human with better output".
I already don't need Opus for a lot of my tasks and choose instead faster/cheaper ones.
The former is a company that's gonna be trying to sell mainframes against the PC. The latter is a company that is in potentially huge demand, assuming the replaced humans end up with other ways of getting money to still be able to buy stuff in the first place. ;)
But even if scaling plateaus for the frontier models, maybe distillation will improve to the point where smaller more manageable models can reach the same plateau. That would be great for local.
We are only 2-4 years away from consumer grade immutable-weight ASICs.
The issue is the very large amount of DRAM and high memory bandwidth these models require.
You might be interested in the Tiny Tapeout project, which guides you through the process of getting your own design etched on silicon. If you only need larger features and not the next-gen single-digit-nanometer stuff, you may not be so supply constrained.
Also, how many companies will just buy an M6/M7 MacBook Pro with 32GB+ of RAM in a couple of years and get “free” AI along with the workstation they were going to buy anyway?
Boss is happy, very happy. We're rolling it out more widely now.
But this is the future.
Local deployments never reach the utilization that cloud providers achieve (80%+), and cloud is always going to be much more cost-efficient than local models for this reason.
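A hedged sketch of why utilization dominates the economics (the capex, lifetime, and utilization figures below are illustrative assumptions, not real prices; power, cooling, and ops staff are ignored):

```python
# Amortized capex per GPU-hour actually used, as a function of utilization.
# Illustrative numbers only: a $250k server over a 4-year life.
capex_usd = 250_000
lifetime_hours = 4 * 365 * 24  # 4 years of wall-clock hours

def cost_per_used_hour(utilization: float) -> float:
    """Capex spread over the hours the hardware is actually busy."""
    return capex_usd / (lifetime_hours * utilization)

cloud = cost_per_used_hour(0.80)  # shared fleet, high utilization
local = cost_per_used_hour(0.05)  # one team's box, mostly idle
print(round(cloud, 2), round(local, 2), round(local / cloud, 1))
```

The ratio of effective costs is simply the ratio of utilizations, so a box that sits idle 95% of the time is 16x more expensive per useful hour than a fleet kept 80% busy, under these assumed numbers.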
It’s not unreasonable to suppose that in 2 years’ time an Opus 5 quality model will be etched into silicon for high-performance local inference. Then you just upgrade your model every 2-3 years by upgrading your hardware.
It's a measure of a very thin sort of "value/$" that excludes a lot of other things that could be of value to a business, like control, predictability, and availability.
Thin clients have been going away for a long time. The trend has been to continue to push higher levels of compute into ever-smaller and ever-more-portable devices.
Unless there is some important breakthrough in hardware production or in model architecture, it's quite the opposite: bigger, more expensive and more energy-intensive hardware is needed today compared to 1 or 2 years ago.
And how many tokens would that buy?
Eventually, we'll see. Frontier models still need some pretty serious hardware which will slowly come down in cost. Smaller models are becoming more capable, which will presumably continue to improve.
I think there's still a pretty big gap, though. Claude estimates Opus 4.6 and GLM-5 need about 1.5 TiB of VRAM. It puts gpt-5.5 around 3-6 TiB of VRAM.
That's 8x Nvidia H200 @ ~$30k USD each. Still need some big efficiency improvements and big hardware cost reductions.
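As a rough sanity check on those numbers, a back-of-envelope sketch (the ~1.5 TB weight footprint, 141 GB per H200, and ~$30k per card are all assumptions from this thread, not vendor quotes):

```python
import math

# Back-of-envelope GPU count and cost for holding model weights in VRAM.
# All inputs are assumptions from the discussion, not measured values.
vram_needed_gb = 1500        # ~1.5 TB weight footprint (units treated loosely)
vram_per_gpu_gb = 141        # H200: 141 GB of HBM3e
price_per_gpu_usd = 30_000   # rough per-card price from the comment above

gpus = math.ceil(vram_needed_gb / vram_per_gpu_gb)
total_usd = gpus * price_per_gpu_usd
print(gpus, total_usd)
```

By this arithmetic, 8 cards cover only ~1.1 TB, so the "8x" figure is a bit optimistic; either way, the order of magnitude (hundreds of thousands of dollars) is the point.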
It would cost me $300 in normal deepseek v4 pricing (non discounted) PER DAY, but I get it all for $500 worth of subscriptions.
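For scale, the subsidy implied by those numbers (the $300/day and $500/month figures are the commenter's; the 22 working days per month is my assumption):

```python
# Ratio of API-equivalent usage to flat-rate subscription spend.
api_cost_per_day_usd = 300  # commenter's non-discounted DeepSeek estimate
subscription_usd = 500      # monthly flat-rate subscription spend
working_days = 22           # assumed coding days per month

monthly_api_equivalent_usd = api_cost_per_day_usd * working_days
subsidy_ratio = monthly_api_equivalent_usd / subscription_usd
print(monthly_api_equivalent_usd, subsidy_ratio)
```

That is roughly $6,600 of API-equivalent usage absorbed by $500 of subscriptions, a 13x gap under these assumptions.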
Not even when that site calls itself a "market" to create plausible deniability.
> The subsidy era is not winding down gracefully. It is showing cracks everywhere. … the question is not whether they got a good deal. The question is how long that deal survives. … A developer running three or four concurrent coding agents is not consuming 3x or 4x the tokens of a chat conversation. It is an order of magnitude more … These are not experiments anymore. They are load-bearing workflows. … That is not a rounding error. That is a line item that needs its own budget code.
One can at least hope.
It's just "intellectual" botox.
Could be just ESL; it's hard to close the proficient-to-native gap.
Maybe it's different if you are doing technical/commercial writing, but for social media where you are writing for fun, and to express yourself, it'd be odd to let AI be your voice unless you realize your own writing is very poor.
A lot of people post for clout, so something that can skip the difficult process of becoming a good writer (and original thinker) is more than enough. They can churn out think pieces about any topic at an unlimited pace, basically.
It doesn’t add much to the world, but they get a lot of traction (which I cannot understand, given the quality of content.) And that’s what matters to them.
I think if you gave most people the choice between (a) being a thoughtful and original writer and (b) being seen as a thoughtful and original writer, the vast majority would choose (b). Especially when it is zero effort.
Now they write "competent" blog posts on LinkedIn that seem 100% AI slop. Some are employed at AWS, too.
I'm not a native English speaker as I'm sure my writing shows. My point is that I'd rather read genuine posts full of grammar errors instead of slop.
I'm not sure that free tier will necessarily continue forever though, unless there is a way to monetize it (presumably by advertising, or by selling data they've gleaned about the user), or perhaps if there is no privacy and the provider is treating you as a source of free data. Right now we're still in the market-share grabbing "never mind the profits, count the users" stage.
Github Copilot moves to usage-based billing in two weeks.[1]
1. https://github.blog/news-insights/company-news/github-copilo...
It’s as jarring as getting halfway into a well written article, clicking a link to a source, and getting rickrolled.
It’s all you can do to not let it distract you from the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer's table.
“Load-bearing” is a new one for me though, yuck.
LLMs are just parroting relevant documents they've assimilated.
In the first few years after electric motors became a thing, one could have said the same thing. We would have just gone back to steam. If you tried to "do without them" now, society would collapse.
So the question is not if we can do without them now, it's if we can do without them in 5 to 10 years (or however long it takes for them to be fully integrated)
Just how "early stage" is that, and how much more integrated does this "new technology" need to be?
Based on the way Claude has felt the last few weeks, I'd say we're about 3-6 months away from full AGI. At that point we can start truly replacing white collar workers in earnest and begin deep integration.
They may be running at a loss after all the salaries and stock comp, but tokens are in profit now.
Yes, sure, right now it is ... but that's NOT how it got here.
There are trillions invested to recoup and at most billions in sales. It doesn't add up to tokens making a profit any time soon.
But if all the AI companies stopped training new models, they would all instantly become profitable (and stick around).
The thing that makes them unprofitable, is having to compete (which means training models). If / when enough companies exit the market, the cost to compete goes down and you end up in an equilibrium
Eh, the AI companies still have lots of datacentres. For the guys who funded with equity, they could collapse down to just running those as utilities. (For the guys who funded with debt, they'd have to restructure.)
From the customer's perspective, this situation shouldn't result in a cost spike. (Consolidation, on the other hand, would. But that's a separate argument from the one the article attempts to make.)
But if there's no more competition, there's no more incentive to keep prices low, which will also be reflected in pricing.
But this isn't "a ticking time bomb for enterprise." It's an issue for the AI companies' investors.
But within that big pie, the "IT-related" investments grew 15.7% whereas non-IT actually shrank 2.0%.
It's like selling dope: once they're addicted, a dealer can turn the screws on them.
If things don't end up working out a lot of people have already been (and in the future will be) paid. It's the investors that will lose out, not the subscriber.
Obviously I, like basically everyone else here, don't have access to OpenAI or Anthropic books, so it's just guessing based on publicly available evidence, but "tokens aren't being sold at a loss" does not imply there is any profit.
And, even if there is some profit, it needs to be big enough to at least pay back the capex spending and finance the next model iteration.
How many times bigger could Opus be than GLM or Kimi? It’s certainly not proportional to the price.
It’s highly unlikely OpenAI/Anthropic are not making decent amounts of money from inference.
Based on what? Why are we all whispering about how profitable all this is? It is the absolute last thing these firms would keep secret.

Nobody is whispering about anything. Everyone is loudly assuming what's convenient for their thesis. Even if you have access to the books, the accounting isn't straightforward; there is not yet sufficient data for a meaningful answer.
> It is the absolute last thing these firms would keep secret
If you find an optimisation strategy that you don't think your competitors have, you absolutely keep your margins secret for as long as possible. Knowing something is possible is the first step to making it so.
It’s unlikely that Claude is proportionally that much bigger and more expensive to serve, so profit margins on inference must be pretty decent.
Even if they are “profitable”: how many Uber drivers are “profitable” only because they aren’t correctly calculating asset depreciation? Maybe these guys are doing the same thing.
Maybe it’s a lot of people who already had GPUs for crypto mining, and they’ve moved over to this, so that if they need to grow and buy new GPUs the costs would dramatically grow.
To an extent maybe, but that market is almost entirely commoditized already. Besides Cerebras and maybe Groq (which already charge a slight premium), all the other providers are more or less interchangeable.
> Maybe it’s a lot of people who already had GPUs for crypto mining
I’m not sure the type of GPUs that were most popular for crypto are at all useful for LLMs?
They might be sold at-compute-cost, but that of course ignores training, salaries, and everything else.
So the frontier model companies might have crazy valuations and they might never reach that. But that might not mean they are actually unprofitable.
You can also do everything metered. There are multiple ways to buy.
What happened two months ago?
Meanwhile datacenters put out more pollution and use more electricity than all the plane rides Bill Gates took with Epstein combined, for business meetings of course.
Those same companies are getting sweetheart deals with the frontier AI labs in the hope that infrastructure costs go down enough in the future to invert profitability, but it's still a risky position for them to be in. (Having their own infrastructure gives the bigcos huge leverage, even if it's only 80% as good as frontier.)
And some parts of most publicly traded ones.
If it’s not a bootstrapped company with a single offering, it’s highly likely that something they’re doing is at a loss in the name of growth (and even there, the loss might come in the form of deferred compensation).
Perhaps OpenRouter can be used as a benchmark for commodity cost to serve AI. I keep hearing it's better value than Claude, which suggests to me that either Anthropic is especially inefficient for some reason, or they're turning a profit on inference. They could be losing money on training, but I suspect that's just part of the cost of staying a leading lab. If any single one goes under due to debt etc. then companies can just switch?
It's clearly LLM-spew in its mannerisms, making me wonder if there were any nuggets of wisdom at its core, or if it is, in its entirety, part of some LLM-driven blog spam project.
- You lose control over their "salary"
- You lose control over their "schedule"
- Your company becomes reliant on another party that does not share your interests or values, and can stop working for you on a whim for any reason
But AI is definitely good and trade unions are definitely bad, apparently...
That's the same as human workers. In both cases there are contracts/money to help align interests
Many companies using models deployed on Azure/Bedrock etc. are already paying based on usage (often with discounts).
Remember that enthusiasts leaning on API keys and large enterprises are the exception, not the norm, and even some large customers may lean on subscriptions for at-scale adoption and wait for teams to report hitting usage caps before buying more token buckets. Subscriptions are predictable, reliable, and above all else a contractable way to acquire service.
Truth be told, this has been my red flag in orgs and with peers elsewhere for several years, now. Those orgs leaning on subscriptions are in for a nasty surprise within a year or two (like the author, I predict sooner than later), especially if those subscriptions power internal processes instead of AI buckets.
Hell, this is why I think there’s a sudden focus on the “Forward Deployed Engineer” nonsense role: helping organizations migrate from subscriptions to token buckets for processes so the bill shock doesn’t send them running away screaming.
GitHub Copilot has been doing this with business and enterprise seats, but that will be coming to a head very soon. I expect a fast follow after June, when they re-align consumer Pro and Pro+ accounts.
OpenAI seems to be trying to throw tokens at clients to get lock-in. So I'd be most worried about the rug pull that will come from OpenAI post-IPO. Anthropic is already acting responsibly in this area, and GitHub Copilot is attempting to remediate their insane subsidies in the next several months.
1. Training is expensive. Not just compute but getting the data, researcher salaries, etc.
2. You have to keep producing new models to ensure people use your inference, and there seems to be no end to this. So they have to pour in more billions to keep the cycle going.
3. Salaries and other admin costs are not that high compared to 1 and 2.
The article's point is that if you're relying on flat fee subscriptions, a rude awakening may be coming. That seems plausible to me. Issues around token quotas are a frequent topic on HN.
Nobody is going to charge "inference price" for model usage.
If you increase the price, the value is still astronomical in comparison.
Companies need to find a way to leverage local models in tandem with frontier models to offset the costs.
It’s all about targeting specific workloads with the appropriate AI. These tools are not sentient beings; they are tools that need to be properly configured to match the job at hand.
The software world is, by and large, no longer about making products with a focus on the long-term, whether that's about the customer's well being or even the company's own long-term functioning. It's about trapping people, siphoning their money, then running away after setting the building on fire. Founder McBuilder will throw away his entire userbase and tell them "lol idk good luck" about their usage needs if it means he can make an extra dollar.
This is as true for enterprise as it is for consumers. Look at all the lamenting when a liked name gets bought by venture capital or considers an IPO.
The best course of action is to take advantage of the subsidy for a while, but not integrate it so deeply that one can’t retreat. You’ll still have full productivity, just be cognizant of the reality of the situation.
Hopefully the market eventually collapses to where companies are hosting their own inference, and you simply lease a model package to run on your own (or rented) specialty hardware.
Not necessarily. Many factors go into what models are available at enterprise level. If you look around, not many companies (everywhere around the world) use DeepSeek models even though they are significantly cheaper.
Think what you want but even when hosted in the US, at the enterprise level going all in on that would be a legal and/or political death sentence.
We need better open-source/cheap but high-intelligence Western models that are proven to work well in agentic tooling and have strong legal agreements for enterprise to even consider it.
* People keep finding ways of cramming more intelligence into smaller models, meaning that a given hardware spec delivers more model capability over time. I remember, not that long ago, when cutting-edge 70B-parameter models could kinda-sorta-sometimes write code that worked. Versus today, when Qwen 27BA3B (1/23 of the active parameters!) is actually *fun* to vibe code with in a good harness. It’s not Opus-smart, but the point is you don’t need a trillion parameters to do useful things.
* Hardware will continue to improve and supply will catch up to demand, meaning that a dollar will deliver more hardware spec over time. Right now the industry is massively supply constrained, but I don’t see any reason that has to continue forever. Every vendor knows that memory capacity and memory bandwidth are the new metrics of note, and I expect to start seeing products that reflect that in a few years.
I hope that one day we’ll look back on the current model of “accessing AI through provider APIs” the same way we now look back on “everyone connecting to the company mainframe.”
As the AI labs become more reliant on enterprise adoption, it makes sense to push capabilities at a cost that makes sense for businesses. Even if it prices out consumers or hobbyists.
Competitive pressure prevents a rug pull.
In a competitive race, each breakthrough gets copied or illicitly distilled or whatever. That means frontier models are depreciating assets, and the markup on tokens should get smaller and smaller.
Now, bigger models are more expensive to run inference on, but today's models, or models of equivalent ability and size, shouldn't go up in price.
5.5 is 4x the price, but 5.4 still exists, so it's not a rug pull, just a more-expensive-to-run and hopefully more valuable model.
Between more efficient models tuned for the task at hand, the ability to run those models in-house or even at the edge, and Google and Microsoft being well positioned to stay ambivalent (they’ve got lots of products to sell, and whether or not LLMs are part of the portfolio mix is completely dependent on enterprise customer demand), Anthropic/OpenAI have a number of aggressive downward pressures on their pricing.
What? Anthropic's costs aren't the API rate. The article never attempts to estimate that cost, which renders its thesis a tautology.
1. GenAI companies are making a loss in order to gain adoption and later lock-in
2. ???
3. They're going to cash-in soon and start milking you now that business critical systems rely on GenAI
The "???" denotes a complete failure to offer compelling arguments that link 1 and 3.
https://github.blog/news-insights/company-news/github-copilo...
Colour me skeptical on that one. Unless the AI improves a lot so it makes sense to spend more.
We all know every frontier AI lab is heavily subsidizing usage, and so do all of the VCs & CEOs funding them.
But also... is this shit AI written? I'm so tired of this.
Did OpenAI switch from fixed prices per seat to usage based? This will surprise many companies I reckon.
Personally I use Claude Code, the 200 euro plan, and am a heavy user. A few weeks ago I realized that CC shows the token usage in the CLI, in the bottom right. Something I never cared about, because I thought paying 200 euro a month would give me "unlimited" access.
But I guess the party is slowly coming to an end? Prices are going to increase slowly? And the flatrates will be removed eventually?
Too bad, it was nice while it lasted.
Who said it was?
> Pull out the napkin. This matters.
The article wouldn't exist if you didn't think it mattered, just tell us why.
> the question is not whether they got a good deal. The question is
Who said that was the question?
> This Is Not One Company's Problem
Who said it was?
Stop telling us what things aren't; just speak like a normal human and convey your own thoughts. It's an insult to your audience to throw constant AI slop at them.
> thousands of companies have woven AI subscriptions deep into their operations. Marketing teams draft copy through ChatGPT Plus.
Yea I bet you do..
I will continue to use it as an assistant that does the menial stuff quicker than I ever could, but it's just too early to let it do stuff that would hurt if it disappeared. Enjoy it while it lasts.