Current revenue is being generated by past capex investments; the most recent capex hasn’t begun to pay off yet. That’s expected.
Now, I don’t know if those investments ever will pay off and there are reasons to be skeptical. But if you assume the capex doesn’t lead to any revenue, then you’re just assuming the conclusion that the investments are bad…
This is a big reason why I'm none too happy about the rise of individual-authors-as-brands via Substack replacing traditional journalism: once people are following and paying you specifically for a specific opinion, you're locked in. A very small number of individual bloggers have a brand of "I'll say the truth no matter what" and actually mean it, but the overwhelming majority are like Ed.
There was an argument to be made that bigger companies with skin in the game were, and are, making a bigger deal than they should about how useful it is. But enterprise adoption is growing way, way beyond those companies, and that can’t easily be explained away.
Like this:
> And really, why do these capacity constraints not seem to have any effect on its revenue growth?
is either deliberate obtuseness or outright dishonesty. Obviously the answer is that people find it worth the spend even with the capacity issues. What other answer could there be? Ed will say they’re simply lying about their revenue and doing accounting tricks, which is a crazy claim to make with no evidence.
Otherwise it’s just "fun toys," right?
This was the comment:
I am talking about revenue/profit growth of AI consumers (not producers). If they are making anything useful or improving productivity, surely it would show up in the numbers, right? Otherwise, what’s the point.
Usefulness isn’t a feeling.
BTW, the top two links on HN right now are:
> Appearing Productive in The Workplace
> The bottleneck was never the code
> In fact, fuck it, I’m ending this with a rant.
What is this? People pay for this?
- current CapEx will make the production side increase capacity
- advances in TPUs, NPUs, open weight and quantization will keep going at a rapid pace
- when the spending slows/stops, hardware prices will drop, hard
- most AI workloads will move to the edge (except frontier models) because the hardware is cheaper than a subscription
(and at some point there could be a crash like 2008)

For example, most of my AI use lately has been running Qwen3.6-35B-A3B-UD-Q8_K_XL on a 64GB MacBook Pro with an M3 Max. It runs at ~57 tokens/s and it's mostly fine.
I do use the frontier models a bit, but only when the task is too complex for the local model.
For basic crap, like analyzing an existing codebase, bouncing around ideas, or making small changes, the local model is enough.
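A quick sanity check on why this works: at roughly 8-bit quantization, a ~35B-parameter model fits comfortably in 64GB of unified memory. A minimal sketch of the arithmetic, assuming ~8.5 effective bits per weight for a Q8-class GGUF quant and a few GB of runtime overhead (both figures are my assumptions, not from the comment above):

```python
# Back-of-envelope: does a ~35B-parameter model at ~8-bit quantization
# fit in 64 GB of unified memory?

PARAMS = 35e9            # total parameter count (approximate)
BITS_PER_WEIGHT = 8.5    # assumed effective size of a Q8-class quant
OVERHEAD_GB = 6          # assumed KV cache + activations + runtime

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # bits -> bytes -> GB
total_gb = weights_gb + OVERHEAD_GB

print(f"~{weights_gb:.0f} GB weights, ~{total_gb:.0f} GB total")
# ~37 GB weights, ~43 GB total -- well under 64 GB
```

The "A3B" in the model name refers to its mixture-of-experts design, where only a small fraction of parameters is active per token, which is why generation speed (~57 tokens/s) is much better than a dense model of that size would manage on the same hardware.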
He seems to have done nothing but write hundreds of thousands of words about how much AI sucks and is doomed to fail for the past 2 years.