Anything coming from Ayush and Ramesh should be highly scrutinized. Ramesh should stick to studying Camera Culture in the Media Lab.
I will believe a study from MIT when it comes out of CSAIL.
IMO, it's clear there is massive demand for any research that shows large positive or negative impacts of AI on the economy. The recent WSJ article about Aiden Toner-Rodgers is another great example of demand for AI impact outstripping the supply of AI impact. Obviously this thread's example is just shoddy research vs. the outright data fraud of Toner-Rodgers, but it's hard to not see the pattern.
I hope that MIT and other research institutions can figure this out...
Life is too short to read every single article; once someone cries wolf a few times, other researchers in the area will just ignore them.
From the abstract: "The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines." (emphasis mine)
The 11.7% figure is the modeled reduction in "wage value", which appears to mean the marketplace value of (human) work.
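As a rough sanity check on scale (taking the paper's two figures, 2.2% ≈ $211B and 11.7% ≈ $1.2T, at face value), you can back out the total wage base each one implies and see whether they're internally consistent; a minimal sketch:

```python
# Back out the implied total annual wage base from the paper's two
# exposure figures. If the index is internally consistent, both pairs
# should imply roughly the same total.
visible_share, visible_dollars = 0.022, 211e9        # visible adoption
capability_share, capability_dollars = 0.117, 1.2e12  # technical capability

implied_total_1 = visible_dollars / visible_share
implied_total_2 = capability_dollars / capability_share

print(f"implied wage base: ${implied_total_1 / 1e12:.1f}T "
      f"vs ${implied_total_2 / 1e12:.1f}T")
```

Both work out to roughly $10T, which is at least in the right ballpark for the US wage base, so the headline percentages and dollar figures hang together even if the precision is overstated.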
They know that 11.7% is WAY too precise to report. The truth is it's probably somewhere between 5-15% over the next 20 years and nobody has any idea which side of that range is correct.
Similar precision appears in other exposure studies too. E.g., this one from OpenAI and Wharton was trending a short while back: arxiv.org/pdf/2303.10130
- Even without AI most corpos could shed probably 10% of their workforce - or maybe more - and still be about as productive as they are now. Bunch of reasons why that's true, but here are two I can easily think of: (1) after the layoffs work shifts to the people who remain, who then work harder; (2) underperformers are often not let go for a long time or ever because their managers don't want to do the legwork (and the layoffs are a good opportunity to force that to happen).
- It's hard for leadership to initiate layoffs, because doing so seems like it'll make the company look weak to investors, customers, etc. So if you really want to cut costs by shedding 10%+ of your workforce and making the remaining 90% work harder, then you have to have a good story to tell for why you are doing it.
- AI makes for a good story. It's a way to achieve what you would have wanted to achieve anyway, while making it seem like you're cutting edge.
There are a ton of basically BS office jobs that could probably be replaced by AI, or in some cases just revealed as superfluous.
We need to just stop pretending we still need a 1:1 connection between employment and income and do UBI. Useless jobs help us preserve the illusions of a pre-post-industrial civilization. Instead of just paying people, we pay people to do work we don't need.
Something went wrong once. Maybe not even in your organization, but it went wrong somewhere. Someone added a process to make sure that the problem didn't happen again, because that's what well-run organizations are supposed to do.
But too often, people don't think about the cost of the procedure. People are going to have to follow this procedure every time the situation happens for the next N years. How much does that cost in peoples' time? In money? How much did the mistake cost? How often did it happen? So was the procedure a net gain or a net loss? People don't ask that, but instead the procedure gets written and becomes "industry best practice".
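That net-gain-or-net-loss question is just expected-value arithmetic that almost nobody does; a minimal sketch with entirely made-up numbers:

```python
# Hypothetical numbers: weigh the recurring cost of a procedure against
# the expected annual cost of the mistake it prevents.
runs_per_year = 500       # times the procedure is followed annually
minutes_per_run = 15      # extra time the procedure adds each time
hourly_cost = 60.0        # fully loaded cost of an employee-hour

procedure_cost = runs_per_year * (minutes_per_run / 60) * hourly_cost

mistake_cost = 20_000.0          # what the original incident cost
incidents_per_year = 0.2         # it happened roughly once in five years

expected_savings = mistake_cost * incidents_per_year

print(f"procedure costs ${procedure_cost:,.0f}/yr to prevent "
      f"${expected_savings:,.0f}/yr of expected loss")
```

With these numbers the procedure is a clear net loss (about $7,500/yr spent to avoid about $4,000/yr of expected damage), yet it would still read as responsible process hygiene from the inside.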
(And for some industries, it is! Aviation, medical, defense... some of those have really tight regulation, and they require strict procedures. But not every organization is in those worlds...)
So now you have poor corporate drones that have to run through that maze of procedures, over and over. Well, if GPT can run the maze for you, that's really tempting. It can cut your boredom and tedium, cut out a ton of meaningless work, and make you far faster.
But on the other hand, if you are the person who wrote the procedure, you think that it matters that it be done correctly. The form has to be filled out accurately, not with random gibberish, not even with correct-sounding-but-not-actually-accurate data. So you cannot allow GPT to do the procedures.
The procedure-writers and procedure-doers live in different worlds and have different goals, and GPT doesn't fix that at all.
Which is really just making a ton of people waste their time doing bullshit work. I fail to see how this is progressive.
But then I remembered how dehumanizing meaningless jobs are, and... I'm not sure how much of a win either direction is.
It predates LLMs, so they were predicting that poets and artists would be the last jobs to be automated. Which is kinda funny.
Economists' predictions about investors' wet dreams have always been a little bit whimsical.
arxiv.org/abs/2510.25137
The key takeaway buried in the technical jargon is that these figures aren’t measuring workforce replacement, but task replacement. They aren’t saying AI can replace 12% of the workforce, rather that AI can replace 12% of the work performed, along with its associated wage values, expected concentrations, and diverse impacts (across the lower 48). There does not seem to be a more user-friendly visual available to tinker with, at least that I could readily find on mobile.
They try to couch this conclusion at the end, stating that workforce displacement isn’t going to happen by AI so much as by decision-makers in government and enterprise. It’s entirely possible to use AI tools to amplify productivity and output and lead to smaller work weeks with better labor outcomes, but we have ample evidence that, barring appropriate carrots and sticks, enterprises will fire folks to keep the profit for themselves while governments will victim-blame the unemployed for “not being current on skills”. This creates a strong disincentive for labor to cooperate with AI, because it’s a lose-lose Prisoner’s Dilemma for them: cooperation will either result in a boost in productivity that hurts those around them through displacement and an increased workload on themselves, or cooperation results in their own replacement in the midst of a difficult job market and broader economy. Cooperation is presently the worst choice for labor, and the authors do a milquetoast job highlighting this reality - but do better than most of their predecessors, at least.
Really, it comes back to what I spoke about in 2023 when it comes to AI: the problem isn’t AI so much as a system that will hand its benefits to those of already immense wealth and means, and that is the problem that needs solving immediately.
There’s also the prospect of models getting more capable (capturing a larger share of GDP) and GDP growing more quickly as more GDP activities are automated. But even without that it’s at least a $2T/year opportunity (assuming the model is even a little accurate).
To me this validates the bull case being made in private equity. The major question is not whether the market or valuations exist, but whether the value will be captured by a few major players or whether open models and local inference eat away at centralization.
Organizations are glued together with interpersonal relationships and unwritten expertise, so it's really hard to just drop in an AI solution - especially if it isn't reliable enough to entirely replace a person, because then you need both, which is more expensive.
> The most urgent story in modern tech begins not in Silicon Valley but two hundred years ago in rural England, when workers known as the Luddites rose up rather than starve at the hands of factory owners who were using automated machines to erase their livelihoods.
> The Luddites organized guerrilla raids to smash those machines—on punishment of death—and won the support of Lord Byron, enraged the Prince Regent, and inspired the birth of science fiction. This all-but-forgotten class struggle brought nineteenth-century England to its knees.
> Today, technology imperils millions of jobs, robots are crowding factory floors, and artificial intelligence will soon pervade every aspect of our economy. How will this change the way we live? And what can we do about it?
* https://www.hachettebookgroup.com/titles/brian-merchant/bloo...
* https://www.bloodinthemachine.com/p/introducing-blood-in-the...
* https://www.goodreads.com/book/show/59801798-blood-in-the-ma...
* https://read.dukeupress.edu/critical-ai/article/doi/10.1215/...
Products like v0.dev (and gemini-3 with nano banana in general) continue to get better at building website designs that don't look obviously vibe coded.
Workflows that were untouchable will now be overhauled, and the productivity gains just raise the throughput ceiling.
What AI brings is the ability to bridge those communication gaps. Instead of bugging the engineer people can ask the AI for a summary of completed and ongoing work. Instead of needing so many meetings the AI can coordinate when people check in with it.
What we will probably see is AI used to build tools and automations that optimize away or remove these jobs.
The real advantage AI gives is cover to change current processes. There's a million tiny tasks that could be automated and in aggregate would reduce labor needs by making labor more productive.
AI isn't a feature. Spellcheck is a feature. Templates are a feature. Search is a feature. A database of every paywalled article is a feature. AI can't do anything but it gives cover for features that do.
>The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines. Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). [https://arxiv.org/abs/2510.25137]
Does the author not know what displacement outcomes are?
It's possible we got 2.2% better quality software by augmenting engineers.
I expect we'll see at least 11.7% improvements (by whatever metric you like) in admin, financial, and professional services.
There is likely also a depressive effect on the labor market - there is nuance here, and it would be equally disingenuous to believe there will be zero displacement (although there's a case for more labor participation if administrative bottlenecks / costs are solved, tbd).
Either way, this is like a textbook example of a zero-sum-minded journalist grossly misrepresenting the world.
The paper basically said:
1) visible AI adoption affects about 2.2% of wage value, concentrated in tech,
2) but that’s only the surface. The ripple impact may be as much as 11.7% of wage value.
That’s it. That’s all the index they came up with measures, nothing else. They didn’t say there would be no displacement outcome, only that the index doesn’t quantify it. In other words, it’s a worst-case exposure figure, not a prediction.
Give it a read and come back with better critiques.
Last I checked, most people work a job where there is more work to do than time in the day to do it - which is exactly the condition under which a wage-value index would NOT be closely correlated with displacement.
Not only does the article title claim the very thing the paper says it's not claiming, there is little reason to believe that outcome would follow even if the paper weren't explicit about not claiming it.
Those routine functions could have been automated before LLMs.
Usually when they're not, it's due to some sort of corporate dysfunction, which is not something LLMs can solve.
Tragic.
The problem is that we will eventually create tools that can and will replace labor. The Capital class is salivating over that prospect quite openly without any shame whatsoever for its consequences.
Fighting against AI is the wrong move. Instead, we should be fighting against a system that fails to provide for human necessities and victim-blames those displaced by Capital, before Capital feels AI can sufficiently displace the workforce.
only thing better than pulling numbers out of the air is being very very precise
(not)