I don't know if I'd call myself a booster or a skeptic. Loosely speaking, I'm all in at the office, but what would I actually spend?
On the one hand, my - I dunno - thought-leader-y hat would say: why wouldn't I spend $10k/head on them? These tools are amazing; they could double someone's output.
But they're also these, like, infinite toys that can do anything, and you can spend a lot of money trying all the things. Does that actually provide value - real, rubber-hits-the-road monetary value on a per-head basis? If you're really focused and good, probably. But if you're not disciplined, you can spend a bunch of money on tokens, then spend a bunch of money on the new features/services in production, and then spend a lot of your colleagues' time cleaning up that mess.
And this is all the human stuff. It all assumes the LLM works relatively perfectly at interpreting what you're trying to do and then doing it.
So, like: does it actually provide a benefit of X dollars per engineer per year? Because it might not; it could in fact go in the opposite direction.
Also, the average corporate take: "let us buy this wonderful $10k tool."
Much like stacks and stacks of badly written web frameworks turned things like collapsing a comment on new Reddit into 200 ms of JavaScript execution (https://bvisness.me/high-level/burnitwithfire.png), I can easily imagine people layering stuff together until the token burn is beyond insane.
I mean, just look at the Gastown repository. It's literally hundreds of thousands of lines of Go and .md files.
If it really augmented my output, sure. Currently I just watch my tokens drop to zero within 3-4 days of using it and then have to wait a month for them to reset, because I won't pay for more parrot tokens. It speeds up some small things, but the effect on my overall speed isn't very noticeable.
I'm fairly confident it'll improve over time though.
There are some wins from AI now, but it's kind of telling that companies like Microsoft have to trick customers into paying for it with changes to their plans, because there just aren't enough people seeing real value to the point where they're jumping to pay more for it.
How do you measure developer productivity? Code quality? Developer happiness? As far as I know, no one in the industry can put concrete numbers to these things. This makes it basically impossible to answer the question you pose.
There are of course many factors at play here, and a substantial percentage of CEOs report a positive ROI, but the fact that a majority don't shouldn't be dismissed on the basis of this being difficult to measure.
Or the fact that we are woefully unprepared for a peer conflict. We wasted how many trillions in the Middle East? We cancelled how many modernization programs to fund counterinsurgency programs instead?
Stock up on dry beans and rice. See if your parents have a spare room. Don’t buy anything expensive. This bubble is gonna hurt.
What is going to happen to the stock of all these big tech companies (Google, Amazon, etc.)? What is going to happen to the tech world that lies in the periphery but depends on services provided by these companies?
What is going to happen to all the devs currently working in the AI industry once it recedes to, like, 25% of its current size?
Can the non-AI tech job market absorb all the soon to be laid off employees?
These are the questions in my mind when it comes to transformer tech.
Not just the tech world. Think of all the businesses below "huge enterprise" that rely on SaaS for most of their tech stack, many don't even have internal dev teams. Maybe a lot of them bought contracts with any number of "AI SaaS" tools to run their businesses.
When all those go under, these companies will face major disruptions and have to scramble to find replacements, get their data out if they even can, etc. It's going to be very messy.
Not to mention the trillions of dollars of unregulated, non-bank lending that's funding all of this build-out. It's a black box: there's no visibility into true valuations, and because the credit isn't public, the rot can be hidden for years until it's too big to fix and finally snaps, taking the rest of the economy out with it.
Good advice regardless of what happens. Freeze-dried food is also great if one can afford it. Bubbles bursting aside, emergencies happen. Water treated for long-term storage is also important, to prepare the beans and rice. Also a portable Bunsen burner with plenty of gas cartridges.
Though I'm more concerned about the effects of the current political climate than the "AI" bubble popping. In the scenario of that going south, nothing will be normal for a long time.
This time around the ramifications might be larger, but they will still mostly be felt by those inside the bubble.
Personally, I would rather experience a slight discomfort from the crash followed by a normalization of hardware prices and the job market, than continue bearing the current insanity.
That's because the companies were largely public. In the dotcom era, the exit for startups was to IPO, as early as possible. When someone like pets.com was burning through cash with zero path to profitability, it was public knowledge, everyone could see it.
The AI build-out is largely funded with private credit. It's a black box; we have no idea of the true valuations. The mean time for a company to go public went from 4 years during the dotcom era to 14 years now. The rot can be hidden for a long time until the big funds go bust, with all the collateral damage that brings.
See Microsoft's recent "We don't understand how you all are not impressed by AI."
In the case of MS, you're right, Satya isn't going to fall on his own sword. They will just continue to bundle and raise prices, make it impossible to buy anything else (because you still need the other tools), and then pitch that to shareholders as success: "Look how many people buy Copilot" (even though it's forcefully bundled into every offering they sell).
There's minimal risk to the decision makers. Meanwhile, every one of us peons is at significantly more risk of losing our jobs, whether we could effectively be replaced with these AI tools or not, because our own C-level execs decided to drink the snake oil that is the current bubble.
Personally I think AI is super useful, but at my job progress has basically ground to a halt compared to before AI.
The reason is that the people they chose to lead the new, most innovative "AI initiatives" were the least innovative, most corporate-drone-y people I've ever met. The kind of people who, in 2025, would unironically say things like "we need to work on our corporate synergies".
They never wanted innovation; they just wanted people to toe the line for as long as possible until they could jump off the sinking ship.
Simple queries like: "Find a good compression library that meets the following requirements: ..." and then "write a working example that takes this data, compresses it and writes it to output buffer" are worth multiple hours I would otherwise need to spend on it.
If I wanted to ship commercial software again I would pay much more.
For a few months I used Gemini Pro; there was a period when it was better than OpenAI's model, but they did something and now it's worse (even though it answers faster), so I cancelled my Google One subscription.
I tried Claude Code over a few weekends. It definitely can do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful at all. Doing anything remotely complex also involves so many twists that I find the net benefit negative. And since the normal side effect of doing something yourself is learning, here I feel like my skills devolve.
I also occasionally use Cerebras for quick queries, it's ultrafast.
I also do a lot of ML, so I use Vast.ai, Simplepod, Runpod, and others - sometimes I rent GPUs for a weekend, sometimes for a couple of months. I'm very happy with the results.
So, what did you learn from that project?
I learned how Ansible, Terraform, and Docker approach DevOps and infra.
I would not be able to hand-roll anything with these tools, but understanding the syntax was a non-goal (and not what interests me).
I must admit the idea has a lot of appeal. There are people seeing good ROI, so it does not seem to be the tool so much as the tool user.
And the problem isn't specific to software engineering. Every engineering discipline deals with it: when you manufacture physical things, for example, tolerances, safety factors, etc. are all tools for dealing with reality being messy.
The company needs to have the right culture and ability to integrate leading technology, whatever it is.
I start with: AI can be a really useful tool.
From there, it follows naturally for me that some orgs' weakness at tool use and coordination is now on full display.
That's, I'd argue, the majority of companies though, which still spells a problem for the AI bubble.
I've been a part of enough failed ERP implementation projects to know that there are actually very few enterprises out there that collectively have their shit together and are good at implementing technology.
If AI also can't solve that problem for them, it'll just join the long list of existing boring enterprise tech that some successful companies use to great effect and others ignore or fail to adopt - which isn't exactly the multi-trillion-dollar industry the current hype needs it to be.
Back when Agile was still just a way of working (not a mantra), adopting it exposed exactly the glaring troubles in the overall (human) pipeline. But it needed quite some time - like months of work - to really show.
This seems like the same thing, only much faster.
Actually, drawing some parallels might be interesting: Agile was/is also touted as "the silver bullet".
My first job out of university was with a startup that had been recently acquired by IBM. I have never seen true Agile since. A similar 1 in 100 actually do it really well and properly. I should see if I still have the slide deck I made to explain it to the rest of IBM; it would make for a great blog post, and tying in this thread: chef's kiss.
The hype train must go on, and I'm sure all employees are under strict NDAs, so we may never know.
From the PwC survey:
> More than half (56%) say their company has seen neither higher revenues nor lower costs from AI, while only one in eight (12%) report both of these positive impacts.
So The Register article title is correct.
> It's a snowball effect that eventually builds bigger and bigger.
That's just wishful thinking based on zero evidence.
I never said the title is incorrect so I'm not sure what you're trying to prove.
It's not wishful thinking. I actually looked at the trends instead of picking a single data point to reinforce an already-decided conclusion. I also read the article and followed the percentages instead of assuming the values were all absolute. This is what I mean by reading the data backwards.