- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...
- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...
But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox https://en.wikipedia.org/wiki/Productivity_paradox
> The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

There will be a period like the one we are in now where dramatic capability gains (like the recent coding gains) take a while for people to adapt to; however, I think the change will be much faster. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see other shifts like this in different sectors, where things change over the span of just a few months.
the same firms "predict sizable impacts" over the next three years
late 2025 was an inflection point for a lot of companies
It's not good enough to just say Oreo CEOs say we need more Oreos.
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in between, given the kinds of bets that have been made.
They don't have time to wait for all the companies to adopt AI tooling at their own pace.
So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that demand materializes now and not in 10 or 20 years.
I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.
Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.
You can feel it coming.
You might as well be telling people to “HODL”
Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?
Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.
>Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.
Whether or not these companies can turn a profit - time will tell. But I am betting that our massively profitable companies (which are the biggest spenders, of course) perhaps know what they are doing, and just maybe they should get the benefit of the doubt until they are proven wrong. If I had to make a wager, and on one side I have Google, Microsoft, Amazon, Meta... and on the other side I have a bunch of AI-bubble people with plenty of time to predict a "crash", I'd put my money on the former...
It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.
Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.
Most idiots like Columbus died in obscurity.
The growth in tech in the years following its release can be directly traced to the iPhone's killer UX + App Store.
Today you have to be blind to not see the change that is coming.
The world has its own (massive) inertia, with the bureaucracy present in businesses accounting for a big part of it.
AI itself is moving fast, but not at infinite speed. We're starting to have good-enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling - in general, everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot, and that's assuming their corp is OK with it, isn't blocking it, and "using AI" doesn't just mean "you can copy-paste code to/from Copilot 365".
As people say, something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.
Things are actually slowing down. And society will still see AI adding little to next year's report. The costs still outweigh the benefits.
Why? It's descriptive of the past, while you're trying to predict the near/far future and projecting your assumptions. Two different things.
the change that is coming.
Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data. I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI; it's just that I hate the almost religious tech belief that real AI will happen from exponential cost increases for LLM training and inference in exchange for essentially linear gains.
I get that some lazy-ass people have turned vibe coding and development into what I consider an activity sort of like mindlessly scrolling social media.
Where are they?
Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?
Incidentally, this comment was written by AI.
With all this recent Claw stuff, it's weird that, as people who should be championing the opposite given our field of study or industry, some of us are now pushing a method of automation that is akin to robot vacuums randomly tracking dogshit across the carpet.
In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.
I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or for agent-generated SDKs or building blocks, when there is no guarantee or even a probability of correctness attached to the result. The effort of validating and editing a generated email can be equal to or greater than that of manually writing a regular email, let alone one of any real complexity or significance.
And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish into business workflows at any moment.
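To make that circularity concrete, here is a minimal sketch of the pattern (the call_llm, generate_email, and validate_email names are hypothetical stand-ins, not any real API): the "validator" is just another call into the same class of fuzzy model, so a confident pass carries no guarantee.

```python
# Minimal sketch of layered LLM validation. `call_llm` is a placeholder for
# whatever model API you use (an assumption, not a real client); nothing
# in this pipeline carries a correctness guarantee.

def call_llm(prompt: str) -> str:
    """Placeholder for a model call; wire in a real client to experiment."""
    raise NotImplementedError

def generate_email(request: str) -> str:
    return call_llm(f"Write a business email that does the following:\n{request}")

def validate_email(draft: str, request: str) -> bool:
    # The "validator" is the same kind of fuzzy operator as the generator:
    # correct-sounding output passes whether or not it is actually correct.
    verdict = call_llm(
        f"Does this email correctly do the following: {request}?\n\n"
        f"{draft}\n\nAnswer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def draft_for_sending(request: str, max_attempts: int = 3) -> str | None:
    for _ in range(max_attempts):
        draft = generate_email(request)
        if validate_email(draft, request):
            return draft  # "validated" only in a probabilistic sense
    return None
```

Note that no layer ever bottoms out in a deterministic check; each retry just adds cost.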
It's almost like trying to build a house of cards faster than the speed with which it is collapsing. There seems to be a morbid fascination among even the best of us with how far things can be taken until this way forward leads to some indisputable catastrophe.
Is it possible that this sort of problem will be fixed? Hypothetically, what would happen in a scenario where one of these apps can do in 1 hr the work that would take a developer a month, reliably? Or is your premise that this will NEVER happen?
I have no doubt that people will use this to grind axes about how they think AI is dumb in general, but I feel like that misses the point that this is mostly about data center construction contributing to GDP.
"On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."
No doubt people are using it at work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ); the question is how much productivity results and to whom it accrues.
Partially this is AI capability (both today and in the past), partially this is people taking time to change their tools.
Opus 4.6 is SPECIAL. Nothing like other models. This is a new breed of intelligence.
I give it 18-24 months until we see a full-scale societal transformation.
And most jobs that can be automated have already been automated using traditional software.
I'm not sure if LLMs will change that or not
Having a higher-paid, qualified employee supervising multiple AIs, where the human only needs to spot mistakes - maybe.
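If that pattern works at all, it probably looks something like this sketch (the Proposal shape and function names are assumptions, nothing vendor-specific): the human becomes an approve/reject gate over a queue of machine-proposed changes.

```python
# Hypothetical sketch of one qualified human gating the output of several AIs.
# `Proposal` is an assumed shape for agent output, not a real library type.

from dataclasses import dataclass

@dataclass
class Proposal:
    agent_id: str
    summary: str
    diff: str  # the change the agent wants to make

def human_review(p: Proposal) -> bool:
    """The supervisor's whole job: spot mistakes before anything ships."""
    print(f"[{p.agent_id}] {p.summary}\n{p.diff}")
    return input("apply? [y/N] ").strip().lower() == "y"

def supervise(queue: list[Proposal]) -> list[Proposal]:
    # The economics only hold if rejects are rare and each review is far
    # cheaper than writing the change yourself.
    return [p for p in queue if human_review(p)]
```

Throughput scales with the number of agents only as long as the review step stays cheap, which is exactly what the mistake rate decides.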
But now we have something else happening: it's hard to find an application for something that makes a lot of mistakes. That's not the same issue. Back then, the issue was that no one had written the software yet - everyone knew what software needed writing, and the future was obvious. Here, not so much. We can't see how to make it stop making mistakes.
We have to hope someone will come up with a solution to that. Otherwise their big bets on something non-productive won't pan out the way the computer did, and we're all going to suffer for it.
We need to get past the hype first and let the cash grabbers crash.
After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.