A popular belief these days is that the investors of 2000 ultimately got it right. Truth - they simply got it wrong. They dumped tons of money into things that had no hope of justifying an ROI. They assumed adoption of the technology would happen at a pace that was not just unprecedented but impossible. They assumed things would happen in 3 years that actually took 20. Yes - shocker!
The article quietly ignores two better explanations: the day-to-day work of executives can be automated more easily (Manna vibes), and/or the execs have a vested interest in AI succeeding so they can cut headcount, which makes them evangelists for it.
Medical doctors as well: officially 0%, in reality?
Also, many programmers hide the truth, because it is quite difficult to justify a salary that was priced in pre-AI times, when programming was much more difficult.
What I wonder, beyond "using" AI, is what value companies are actually seeing. Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear whether individual companies are really growing their usage, or whether it's everyone starting to try it out.
Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Any output that looks correct but really isn't causes a number of headaches.
For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, while compute and training costs increase at the same rate. The AI companies need end users to need their product so badly that they can significantly raise prices. I think that's what they want to see in "adoption": demand so high that they can foresee a future of rising prices.
> at least 10% of the employees use GenAI daily.
Remember that this includes people who are forced to use it (otherwise they wouldn't meet KPIs and could expect conversations with HR).

Adoption was widespread at first but seems to have hit a ceiling and stayed there for a while now. Meanwhile, there's been little evidence of major changes to net productivity or profitability where AI has been piloted. Nobody is pulling away with radical growth or efficiency for having adopted AI; in fact, the entire market of actual goods and services is mostly just stagnating outside of the speculative investment being poured into AI itself.
Investment isn't just about making a bet on whether a company or industry will go up or down, but about making the right bet on how much it will do so, over what period of time. The scale of AI investment over the last few years was a bet that AI adoption would keep growing very, very fast and would revolutionize the productivity and profitability of the firms that integrated it. That's not happening yet, which suggests the bet may have been too big or too fast, leaving a lot of investors in an increasingly uncomfortable position.
It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates for consumers have been much higher with Waymo (I was surrounded by four of them yesterday), and they are expanding rather rapidly. I have yet to see a self-driving Tesla.
Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!
But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.
The only place I've seen real utility is coding. All other tasks, such as document writing with Gemini, produce something that's about 80% OK and 20% errors and garbage. The work of going back through with a fine-toothed comb to root out the garbage is actually more work and less productive than simply writing the darn thing from scratch.
I fear that the future of AI driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.
- Accidentally tapping the AI mode in Google search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. It still counts as AI use even if you didn't mean to use it.
- OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)
- Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers
- Using AI after a meeting to get a summary is nice, but not enough to make a visible impact on a company's output. Most AI usage falls into this bucket.
This tech was sold as civilisation-defining. Not GPT-X, but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense he implied. They are not "workers" and won't change the output of your average company in any significant way.
Sweet-talking investors is easy, but walking the talk is another thing altogether. Your average business has neither the interest nor the time to supervise a worker that behaves unpredictably at random times and doesn't learn to stop making mistakes when told off.
40% of companies and 10% of employees can be using AI daily, but only for a small number of tasks, and that usage can be leveling off.
At the same time, AI can be so inefficient that servicing this small amount of usage is running providers out of capacity.
This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices, and even if there were, the current costs are already vastly higher than that value.
People are captivated by good stories, and AI makes for one hell of a sci-fi narrative.
It's hard to separate the maybe-one-day-plausible fictional future from the on-the-ground reality.
Let's check agentic AI. Which agents do people mostly talk about? Aha - coding agents!
But yes, I do use a lot more AI than I did 6 months ago, some of it internally built, much of it sourced externally. I bet I will be using even more AI going forward.
I think it is inevitable!
Let's compare this to the adoption of the internet. Mosaic was released in 1993. Businesses adopted the internet progressively during the 90s, starting slowly but accelerating toward the decade's end, with broad adoption of the internet as a business necessity by 2000.
Three years is a ridiculously small amount of time for businesses to make dramatic changes.
The dot-com bubble didn't form and burst because the technology or opportunity of the web wouldn't be revolutionary. It formed and burst because investment grossly outpaced how fast the technology could mature into commercial value. That's pretty much what we're seeing here.
LLMs, diffusion models, etc. are radical new avenues for technology and will probably have made a huge impact on society and business when we look back in 20 years, but investors desperate for high yield in an otherwise stagnating economy flooded the engine, betting as though these dramatic changes would happen immediately rather than gradually.
Unsurprisingly, to people who didn't put their chips on the table at all, this all-in bet on immediacy is proving more and more to be a losing one.
(soar - overestimated)
for example, years ago in the era of dragon naturallyspeaking, ALL computers would soon be using speech recognition.
and it didn't happen
but quietly speech recognition started working in the background - call trees on the phone, and other places where a strict vocabulary could help. it quietly grew and nowadays it is everywhere.
The internet famously doubled in connectivity every 100 days during its expansion era. Its usefulness was blindingly obvious - there was no need for management to send out emails warning that they were monitoring internet usage, and you'd better make sure that you were using it enough. Can you imagine!
We are at a remarkable point in tech. The least-informed people in an organization (execs) are pushing a technology onto their organizations. A jaw-droppingly enormous amount of capital is being deployed in essentially a "pushing on a rope" scenario.
Google Glass comes to mind: it died 11 years ago, and XR is only just now starting to resurface.
Tablets also come to mind: pre-iPad, they more or less failed to achieve any meaningful adoption and sort of disappeared for a while, until Apple released the iPad.
Then you have the Segway as an example of an innovation failure that, unlike the others, never really returned in the same form; instead we now have e-scooters and e-bikes, which fit better into existing infrastructure and cultural attitudes.
It's quite possible LLMs are just like those other examples, and the current form is not going to be the successful form the technology takes.
I use coding agents often, but I don't burn all the tokens out of my Claude Max plan and ChatGPT Business plan with two seats.
Unfortunately it's the coders who are most excited to put themselves out of business with incredible code-generation facilities. The techies that remain employed will be the feature vibers with 6-figure salaries supplied by the efforts of the now-unemployed programmers. The cycle will thus continue.
https://www.genaiadoptiontracker.com/
TFA presents the most pessimistic stat it could find: daily GenAI usage at work growing from 12.1% to 12.6% in a year. (Interestingly there was a dip to 9% in Nov 2024; maybe end-of-year holidays?)
It does not mention that the same tracker also shows overall usage (at and outside work, at least once in the past week) steadily climbing from 44% to 54%. That is 10 percentage points of growth in a year. (This may also be why OpenAI reports WAU rather than DAU; most people use it weekly rather than daily.)
Here is something even more interesting from the same authors at the St Louis Fed using the same data:
https://www.stlouisfed.org/on-the-economy/2025/nov/state-gen...
Really, read that article; it is short and a bit astounding. Money quote:
> When we feed these estimates into a standard aggregate production model, this suggests that generative AI may have increased labor productivity by up to 1.3% since the introduction of ChatGPT. This is consistent with recent estimates of aggregate labor productivity in the U.S. nonfarm business sector. For example, productivity increased at an average rate of 1.43% per year from 2015-2019, before the COVID-19 pandemic. By contrast, from the fourth quarter of 2022 through the second quarter of 2025, aggregate labor productivity increased by 2.16% on an annualized basis. Relative to its prepandemic trend, this corresponds to excess cumulative productivity growth of 1.89 percentage points since ChatGPT was publicly released. ... ...
> We stress that this correlation cannot be interpreted as causal, and that labor productivity is determined by many factors. However, the current results are suggestive that generative AI may already be noticeably affecting industry-level productivity.
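As a back-of-the-envelope check (assuming the window is the ~2.5 years from Q4 2022 through Q2 2025): (2.16% - 1.43%) x 2.5 ≈ 1.8 percentage points of excess growth, which lands close to their 1.89 figure once compounding is accounted for.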
This is the point.
This is what matters.
A revolutionary technology birthed in a bonfire of cash
There was a storm of hype these last couple of weeks for Gemini 3, and everyone, correctly, rolled their eyes. Investors are demanding a return and it's not happening. They're just going to have to face reality at some point.
Someone who just happened to be in the right place, at the right time, while idling, gets paid $10M+ thanks to stock-option vesting.
Sounds like crypto^2: money is spread completely irrationally and unfairly (lucky folks who launched a Ponzi get rewarded instead of jailed) and is completely disconnected from actual effort.
In the long-term this can only lead to a very unhealthy society.
Good thing we won't need money anymore, thanks to AGI, right?
The next hype wants to be quantum computing, but it's just not there yet, never mind the lack of real-world applications.
I thought Nvidia would start promoting GPUs (whole data centers, even) to run classical simulations of QC, so the applications can be developed while the real hardware gets figured out.
Probably more likely, though, to be something novel that few took seriously before it demonstrated utility. And this is the issue for QC: we already know what it's useful for, namely a handful of niche search algorithms. It's a bit like fusion in that even if you work out the (very significant) engineering issues, you're left with something that, while useful, is far from transformative.
VR -> Cloud -> Crypto -> VR -> AI -> ?
> In recent earnings calls, nearly two-thirds of executives at S&P 500 companies mentioned AI. At the same time, the people actually responsible for implementing AI may not be as forward-thinking, perhaps because they are worried about the tech putting them out of a job.
Ah, those brave, forward-looking executives with their finger on the pulse of the future while their employees are just needlessly stalling adoption. Completely absent from the article is the possibility that the technology is not as revolutionary as claimed.
x="\"Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr\""
y=https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening
busybox wget -U "$x" -O 1.htm $y
firefox ./1.htm

There's no way that could work as written. With those escaped quotes baked into $x, the User-Agent header gets sent with literal quote characters around it (and if $x were unquoted, it would expand to multiple arguments).
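For what it's worth, a minimal variant that sidesteps the quoting problem (same UA string and URL as above; the single quotes are the only real change):

# single-quote the UA so no escapes are needed inside the string
ua='Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr'
y='https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening'
# double-quote the expansions so the spaces in the UA don't word-split
busybox wget -U "$ua" -O 1.htm "$y"
firefox ./1.htm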
Been doing this for many years now. It's a short list, small enough to be contained in the local fwd proxy config
# economist.com
http-request set-header user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr" if { hdr(host) -m end economist.com }
I don't use curl, wget, browser extensions/add-ons, etc. except in HN examples. I don't have to worry about command line arguments like "-A" or "-U"; the proxy controls the HTTP headers.

It's sped up the time I need to produce projects from a usual span of 4-20 days to 1-2 days, with another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.
While my country will be slow to adopt (we haven't even fully adopted smartphones yet; hooray, Germany), it will have to adopt eventually, in 10 years or so.
This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage and impact!
Multiple researchers have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.