56 points by tiahura 15 hours ago | 23 comments
  • a-posteriori14 hours ago
    This is the same group (Ayush Chopra & Ramesh Raskar) that previously published the highly circulated (clickbait) article saying that 95% of AI pilots were failing based on extremely weak study design and questions that didn't even support the takeaways.

    Anything coming from Ayush and Ramesh should be highly scrutinized. Ramesh should stick to studying Camera Culture in the Media Lab.

    I will believe a study from MIT when it comes out of CSAIL.

    • zkmon14 hours ago
      Yep. Take it with a grain of salt. Unfortunately, the quality of research is being undercut by sales pitches and hype mongering.
      • a-posteriori14 hours ago
        It's been really disheartening to see the impact of media / hype mongering on groups within research institutions.

        IMO, it's clear there is massive demand for any research that shows large positive or negative impacts of AI on the economy. The recent WSJ article about Aiden Toner-Rodgers is another great example of demand for AI impact outstripping the supply of AI impact. Obviously this thread's example is just shoddy research vs. the outright data fraud of Toner-Rodgers, but it's hard not to see the pattern.

        I hope that MIT and other research institutions can figure this out...

        • balaclava912 hours ago
          fascinating story. amazing how people want to believe in the AI savior.
    • sciencegeek1233 hours ago
      You should read the paper (or at least the abstract) before making personal attacks. It makes no claims about job disruption (quite the opposite actually).
    • mistrial914 hours ago
      Science says to rebut the sources and the thesis, not to attack the authors personally.
      • gus_massa4 hours ago
        Science says people have reputations, journals have impact factors, ...

        Life is too short to read every single article; once someone cries wolf a few times, other researchers in the area will just ignore them.

  • ghkbrew14 hours ago
    This title is clickbait.

    From the abstract: "The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines." (emphasis mine)

    The 11.7% figure is the modeled reduction in "wage value", which appears to be the market value of (human) work.

  • iambateman14 hours ago
    The fact that these very-smart people did not include ranges is absurd.

    They know that 11.7% is WAY too precise to report. The truth is it's probably somewhere between 5% and 15% over the next 20 years, and nobody has any idea which side of that range is correct.

    • sciencegeek123an hour ago
      Yes, agree. There should be a range.

      Similar precision appears in other exposure studies as well, e.g. this one from OpenAI and Wharton that was trending a short while back: arxiv.org/pdf/2303.10130

  • pizlonator14 hours ago
    Here's a realistic path for how AI "replaces"/"displaces" a large chunk of the workforce:

    - Even without AI most corpos could shed probably 10% of their workforce - or maybe more - and still be about as productive as they are now. Bunch of reasons why that's true, but here are two I can easily think of: (1) after the layoffs work shifts to the people who remain, who then work harder; (2) underperformers are often not let go for a long time or ever because their managers don't want to do the legwork (and the layoffs are a good opportunity to force that to happen).

    - It's hard for leadership to initiate layoffs, because doing so seems like it'll make the company look weak to investors, customers, etc. So if you really want to cut costs by shedding 10%+ of your workforce and making the remaining 90% work harder, then you have to have a good story to tell for why you are doing it.

    - AI makes for a good story. It's a way to achieve what you would have wanted to achieve anyway, while making it seem like you're cutting edge.

    • SAI_Peregrinus13 hours ago
      Reason 3: those people are mostly a buffer to absorb variable workloads. Firing them increases efficiency at the expense of being unable to keep up with spikes in demand. Productivity will stay about the same until the next crisis hits, then drop.
    • api14 hours ago
      I wonder if AI also reveals unnecessary parts of the workforce by demonstrating that what they do is actually pretty trivial.

      There are a ton of basically BS office jobs that could probably be replaced by AI, or in some cases just revealed as superfluous.

      We need to just stop pretending we still need a 1:1 connection between employment and income and do UBI. Useless jobs help us preserve the illusions of a pre-post-industrial civilization. Instead of just paying people, we pay people to do work we don't need.

      • starlust214 hours ago
        The joke about someone using ChatGPT to write a lengthy email that the recipient will summarize with ChatGPT is the perfect example of how much of our work is pretend.
        • AnimalMuppet5 hours ago
          Processes are the problem.

          Something went wrong once. Maybe not even in your organization, but it went wrong somewhere. Someone added a process to make sure that the problem didn't happen again, because that's what well-run organizations are supposed to do.

          But too often, people don't think about the cost of the procedure. People are going to have to follow this procedure every time the situation happens for the next N years. How much does that cost in peoples' time? In money? How much did the mistake cost? How often did it happen? So was the procedure a net gain or a net loss? People don't ask that, but instead the procedure gets written and becomes "industry best practice".

          (And for some industries, it is! Aviation, medical, defense... some of those have really tight regulation, and they require strict procedures. But not every organization is in those worlds...)

          So now you have poor corporate drones that have to run through that maze of procedures, over and over. Well, if GPT can run the maze for you, that's really tempting. It can cut your boredom and tedium, cut out a ton of meaningless work, and make you far faster.

          But on the other hand, if you are the person who wrote the procedure, you think that it matters that it be done correctly. The form has to be filled out accurately, not with random gibberish, not even with correct-sounding-but-not-actually-accurate data. So you cannot allow GPT to do the procedures.

          The procedure-writers and procedure-doers live in different worlds and have different goals, and GPT doesn't fix that at all.

      • sharpshadow14 hours ago
        There is this joke about socialism where hundreds of workers are digging with shovels and somebody asks “Why not use that excavator? One machine could do it in no time” and the other answers “And put 20 men out of work? We’re creating jobs!”.
        • api12 hours ago
          This is why a lot of modern leftists are anti-tech. Tech destroys jobs. If we are going to maintain the fiction that full employment is necessary for a modern civilization, everyone has to have a job, and for that to be true we have to restrict our technological progress.

          Which is really just making a ton of people waste their time doing bullshit work. I fail to see how this is progressive.

          • AnimalMuppet6 hours ago
            Well, I was going to say that many people perceive unemployment as "society does not value you", and that message can be really destructive to people.

            But then I remembered how dehumanizing meaningless jobs are, and... I'm not sure how much of a win either direction is.

  • dinkblam14 hours ago
    My study finds AI can replace 96.83% of U.S. study makers
    • xhkkffbf14 hours ago
      I love that it's not 11% but 11.7% even though it's all just guesses. Somehow they have that much precision.
      • cinntaile14 hours ago
        They should give us a span that they believe in and then we check in a few years how accurate their guess was.
        • lo_zamoyski14 hours ago
          By then, they will have received their promotions and salary bumps and it won't matter.
      • pydry14 hours ago
        There was a previous study that said 47% by 2033: https://fortune.com/2015/04/22/robots-white-collar-ai/

        It predates LLMs, so they were predicting that poets and artists would be the last jobs to be automated. Which is kinda funny.

        Economists' predictions about investors' wet dreams have always been a little bit whimsical.

    • syngrog6614 hours ago
      I bet its 98.251% (+/- 0.00032%)

      clowns, all of them

    • Der_Einzige14 hours ago
      This but unironically:

      https://arxiv.org/abs/2403.20252

    • zkmon14 hours ago
      This is so true.
    • fHr14 hours ago
      haha real
  • stego-tech14 hours ago
    Read the project and its key paper before commenting:

    arxiv.org/abs/2510.25137

    The key takeaway buried in the technical jargon is that these figures aren’t measuring workforce replacement, but task replacement. They aren’t saying AI can replace 12% of the workforce, rather that AI can perform 12% of the work done, along with its associated wage value, expected concentrations, and diverse impacts (across the lower 48). There does not seem to be a more user-friendly visual available to tinker with, at least none that I could readily find on mobile.
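
    For intuition, the wage-value framing works roughly like this: score each occupation for the share of its tasks AI could technically perform, weight that by the occupation’s wage bill, and divide by total wages. A toy Python sketch, with made-up occupation names and numbers (not the paper’s data or methodology):

      # Purely illustrative figures; the real index uses detailed task-level data.
      # occupation: (share of tasks AI could technically perform, annual wage bill in $)
      occupations = {
          "office_admin": (0.60, 3.1e11),
          "software_dev": (0.45, 4.2e11),
          "nursing":      (0.10, 5.0e11),
      }

      total_wages = sum(wages for _, wages in occupations.values())
      exposed_wages = sum(share * wages for share, wages in occupations.values())

      # "Exposed wage value" weights task exposure by what the work is paid;
      # it says nothing about how many jobs actually get cut.
      print(f"Exposed share of wage value: {exposed_wages / total_wages:.1%}")

    That share is a measure of technical exposure, which is exactly why the 11.7% headline says nothing on its own about layoffs.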

    They try to couch this conclusion at the end, stating that workforce displacement isn’t going to happen by AI so much as by decision-makers in government and enterprise. It’s entirely possible to use AI tools to amplify productivity and output and lead to smaller work weeks with better labor outcomes, but we have ample evidence that, barring appropriate carrots and sticks, enterprises will fire folks to keep the profit for themselves while governments will victim-blame the unemployed for “not being current on skills”. This creates a strong disincentive for labor to cooperate with AI, because it’s a lose-lose Prisoner’s Dilemma for them: cooperation will either result in a boost in productivity that hurts those around them through displacement and an increased workload on themselves, or cooperation results in their own replacement in the midst of a difficult job market and broader economy. Cooperation is presently the worst choice for labor, and the authors do a milquetoast job highlighting this reality - but do better than most of their predecessors, at least.

    Really, it comes back to what I spoke about in 2023 when it comes to AI: the problem isn’t AI so much as a system that will hand its benefits to those of already immense wealth and means, and that is the problem that needs solving immediately.

  • vlovich12314 hours ago
    Interesting - that’s a $1T market in the US alone. Probably another $1T in the EU. It’s unclear how much there is in the rest of the world (China is basically inaccessible to US firms, and beyond that it’ll depend on low-wage local labor vs AI models).

    There are also models getting more capable (capturing a larger share of GDP) and GDP growing more quickly due to automation. But even without that it’s at least a $2T/year opportunity (assuming the model is even a little accurate).

    To me this validates the bull case being raised in private equity. The major risks are not whether the market or the valuations exist, but whether it’ll be captured by a few major players or whether open models and local inference eat away at centralization.

    • psunavy0314 hours ago
      And then when $1T worth of workers are laid off, who is going to buy the stuff that the companies who laid them off make?
      • vlovich12313 hours ago
        I am in no way making a value statement of whether this is good or bad. Just analyzing the opportunity.
        • brazukadev11 hours ago
          That was not a question of good or bad. There's no point in optimizing production if there's no demand for products. Then most businesses would go bankrupt and we would fall into a huge recession, something worse than 1929, until things reach a balance again.
          • vlovich1237 hours ago
            Maybe, maybe not. If AI is really taking over, that means goods are also getting cheaper. It’s too difficult to prognosticate on the impact this will have on human labor, society, and the economy writ large.
  • siliconc0w14 hours ago
    The difficulty is in the implementation. Many jobs could already be mostly replaced with just a basic system of record (i.e. a database) but it hasn't happened. The world still runs on paper, email, or maybe a shared spreadsheet if they're sophisticated.

    Organizations are glued together with interpersonal relationships and unwritten expertise, so it's really hard to just drop in an AI solution - especially if it isn't reliable enough to entirely replace a person, because then you need both, which is more expensive.

  • zkmon14 hours ago
    But they should also look at the other side of the story: how many new problems will be created that require new jobs and investment? Most likely it's a migration of jobs from one kind of work to another.
    • giva13 hours ago
      Much like "the Cloud" solved a lot of problems in IT, and replaced them with more, different, harder problems.
  • JohnMakin14 hours ago
    Then why aren't they? Why have we not seen that reflected anywhere at all?
    • koakuma-chan13 hours ago
      Because nobody knows how to use AI. Nobody cares to figure it out. PMs just want features, features, features, and if something doesn't seem like it would have "business value," it is dismissed immediately.
  • nacozarina6 hours ago
    there isn’t a govt on earth that can survive that large & sudden an increase in long-term unemployment; overthrown or bankrupted, they’re gone either way. the pitchfork mob will proceed to start burning data centers. the idea they’ll all quietly choose serfdom over revolution is wildly unrealistic. ai needs much stronger regulation to have a chance at survival.
  • throw0101c14 hours ago
    If anyone is curious about automation and people's/worker's reaction to it, I recommend Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant:

    > The most urgent story in modern tech begins not in Silicon Valley but two hundred years ago in rural England, when workers known as the Luddites rose up rather than starve at the hands of factory owners who were using automated machines to erase their livelihoods.

    > The Luddites organized guerrilla raids to smash those machines—on punishment of death—and won the support of Lord Byron, enraged the Prince Regent, and inspired the birth of science fiction. This all-but-forgotten class struggle brought nineteenth-century England to its knees.

    > Today, technology imperils millions of jobs, robots are crowding factory floors, and artificial intelligence will soon pervade every aspect of our economy. How will this change the way we live? And what can we do about it?

    * https://www.hachettebookgroup.com/titles/brian-merchant/bloo...

    * https://www.bloodinthemachine.com/p/introducing-blood-in-the...

    * https://www.goodreads.com/book/show/59801798-blood-in-the-ma...

    * https://read.dukeupress.edu/critical-ai/article/doi/10.1215/...

  • atonse14 hours ago
    Interesting that their website (https://iceberg.mit.edu) looks quite obviously vibe coded.

    Products like v0.dev (and gemini-3 with nano banana in general) continue to get better at building website designs that don't look obviously vibe coded.

    • rs18614 hours ago
      I don't remember ever seeing a website that has a loading screen with the words "Initializing React" on it. It's almost comical. As if that information is of any value to site visitors.
  • coffeecoders14 hours ago
    I think the real story isn’t that AI will replace 11.7% of workers. It is that we are about to discover that far more than 11.7% of the work we do was never actually work in the first place.

    Workflows that were untouchable will now be overhauled, and the productivity gains will just raise the throughput ceiling.

    • sublinear13 hours ago
      You're right that there are inefficiencies, but they're almost entirely communication overhead (pointless meetings, synchronous work, etc.).

      What AI brings is the ability to bridge those communication gaps. Instead of bugging the engineer, people can ask the AI for a summary of completed and ongoing work. Instead of needing so many meetings, the AI can coordinate when people check in with it.

  • vb-844814 hours ago
    actually, there are plenty of office jobs nowadays that can be optimized/removed, reliably, with non-AI tools...

    what we will probably see is AI used to build tools and automations that will optimize/remove these jobs

  • lesuorac14 hours ago
    I'll give a hot take.

    The real advantage AI gives is cover to change current processes. There's a million tiny tasks that could be automated and in aggregate would reduce labor needs by making labor more productive.

    AI isn't a feature. Spellcheck is a feature. Templates are a feature. Search is a feature. A database of every paywalled article is a feature. AI can't do anything but it gives cover for features that do.

    • falcor8414 hours ago
      Following with my own hot take, AI SWE agents, while very flawed, allow people to quickly iterate on possible approaches to change those processes. I think that once people have had more time to explore this capability, we'll see massive productivity increases.
  • hahahacorn14 hours ago
    This is like unbelievably awful journalism. From the abstract:

    >The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines. Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). [https://arxiv.org/abs/2510.25137]

    Does the author not know what displacement outcomes are?

    It's possible we got 2.2% better quality software by augmenting engineers.

    I expect we'll see at least 11.7% <metric X> improvements in admin, financial, and professional services.

    There is likely also a depressive effect on the labor market - there is nuance here, and it would be equally disingenuous to believe there will be zero displacement (although there is a case for more labor participation if administrative bottlenecks / costs are solved, tbd).

    Either way, this is like a textbook example of a zero-sum-minded journalist grossly misrepresenting the world.

    • signatoremo13 hours ago
      I think it’s a textbook example of HN skimming through the paper and the summary.

      The paper basically said:

      1) Visible AI adoption, concentrated in computing and technology, accounts for about 2.2% of wage value,

      2) but that’s only the surface. The ripple impact may be as much as 11.7% of wage value.

      That’s it. That’s all the index they came up with measures, nothing else. They didn’t say there would be no displacement outcome, only that the index doesn’t quantify it. In other words, it’s the worst-case scenario.

      Give it a read and come back with better criticism.

      • hahahacorn13 hours ago
        That's not true. They didn't measure wages, but used them as a proxy. What they're actually measuring is work done, or tasks.

        Last I checked, most people work a job where there is more work to do than time in the day to do it - hardly the conditions under which a wage-value index would be closely correlated with displacement.

        Not only does the article title claim the very thing the paper says it isn't claiming, there is little reason to believe that outcome would follow even if the paper hadn't been explicit about not claiming it.

    • emp1734414 hours ago
      Too many people fall into the trap of believing the economy is zero-sum. You see it all the time on HN.
  • pydry14 hours ago
    >Beneath the surface lies the total exposure, the $1.2 trillion in wages, and that includes routine functions in human resources, logistics, finance, and office administration. Those are areas sometimes overlooked in automation forecasts.

    Those routine functions could have been automated before LLMs.

    Usually when they're not, it's due to some sort of corporate dysfunction, which is not something LLMs can solve.

  • add-sub-mul-div14 hours ago
    There's always a lot of bending over backwards in these comments to create explanations for why the invention whose purpose is to replace labor won't replace labor.
    • hahahacorn13 hours ago
      Great point, tractors replaced labor and society has never recovered. We used to have a noble population of farmhands walking behind animals for miles, guiding plows with their bare hands. But thanks to tractors, all that fulfilling communal suffering vanished overnight.

      Tragic.

    • stego-tech14 hours ago
      I suspect part of that is denial: “AI won’t replace my job!” Which, sure, maybe this era of AI won’t. Maybe this LLM era won’t replace your job, this time.

      The problem is that we will eventually create tools that can and will replace labor. The Capital class is salivating over that prospect quite openly without any shame whatsoever for its consequences.

      Fighting against AI is the wrong move. Instead, we should be fighting against a system that fails to provide for human necessities and victim-blames those displaced by Capital, before Capital feels AI can sufficiently displace the workforce.

  • syngrog6614 hours ago
    bonus points for the ".7%"

    only thing better than pulling numbers out of the air is being very very precise

    (not)

  • nextworddev14 hours ago
    There you go, that’s all the AI revenue needed to justify capex
  • paxys14 hours ago
    I wonder if these researchers include their own jobs in the analysis. Because AI can very easily spit out random numbers and a lengthy explanation to make them seem believable.