The price of intelligence is dropping fast. You can run GPT-4-level models locally today for almost nothing in compute cost. That trend is real and continuing. Just take a look at https://artificialanalysis.ai/models/gpt-4 and https://artificialanalysis.ai/models/gemma-4-26b-a4b
And it looks undeniable that LLMs are genuinely useful. The "are they useful at all" question feels settled.
Where I agree is on the financial side: what the top labs are doing does look like a bubble. They're racing to be first to AGI as if getting there first grants world domination. That seems delusional, and they'll need to stop or collapse. But a collapse won't kill AI or LLMs. That's like saying the dot-com crash should have killed the internet because the internet wasn't useful. The internet was useful, bubble or not. LLMs are useful, bubble or not.
What I expect is that things will get less crazy over the next few years, with or without a crash. GPT-4+ and Opus 4+ models have reached genuinely useful levels for knowledge work. And if the trend continues (GPT-4 was expensive in the cloud; Gemma 4 runs locally and is smarter), we'll have GPT-5/Opus 4-level models running locally by 2028, and they'll keep getting better from there. At that price point, running locally, AI will be where it should be.
The top labs are burning money like crazy to get there first, as if being first gives them a lasting moat. It won't.
What they're trying to do is become the new Google, Apple, or Microsoft. Those companies did achieve sustained moats through speed of execution. I don't see that happening in AI right now.
Though the Anthropic moment this year did make me worry a little...