134 points by cdrnsf | 2 hours ago | 20 comments
  • rising-sky an hour ago
    I guess this is a trend now because it's a contrarian / attention-grabbing headline. See:

    - "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...

    - “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...

    But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox

      > The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth
    • XenophileJKO 5 minutes ago
      I keep seeing the "Productivity Paradox" highlighted over and over again. I think one thing people are missing with this specific technology is that, unlike many of the comparisons (computers, internet, broadband, etc.), AI in particular doesn't have a high requirement on the consumer side. Everyone already has everything they need to use it.

      There will be a period, like the one we're in now, where dramatic capability gains (like the recent coding gains) take a while for people to adapt to; however, I think the change will be much faster. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see other shifts like this in different sectors, where things change over the span of just a few months.

    • ej88 31 minutes ago
      Even in the source paper behind the first link (https://www.nber.org/papers/w34836), the same firms "predict sizable impacts" over the next three years. Late 2025 was an inflection point for a lot of companies.

  • d_watt an hour ago
    It took 20 years for computers to "add" to the economy.

    https://en.wikipedia.org/wiki/Productivity_paradox

    • preommr an hour ago
      I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.

      It's not good enough to just say Oreo CEOs say we need more Oreos.

      There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.

      Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between, given the kinds of bets that have been made.

      • ozim 39 minutes ago
        AI companies don't have 20 years; they have at most 5 years to turn a profit.

        They don't have time to wait for all the companies to pick up AI tooling at their own pace.

        So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that the demand materializes now and not in 10 or 20 years.

        • rfv6723 12 minutes ago
          This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.
      • co_king_5 an hour ago
        > the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or boris saying coding is solved and that 100% of his code is written by AI.

        I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.

        Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.

        • bigstrat2003 30 minutes ago
          > I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude.

          You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.

        • arctic-true 25 minutes ago
          Researchers looked at GPT-4 in 2023 and saw "sparks of AGI". The saying "feel the AGI" became widespread not long after, if I'm remembering right. We've been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you'll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be "superintelligence" - and it's just not.
          • yowayb 20 minutes ago
            IIRC, when ELIZA came out, many people briefly believed it was sentient.
        • chrysoprace 35 minutes ago
          What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". They have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed[0].

          [0] https://news.ycombinator.com/item?id=47031580

          • conception 30 minutes ago
            A link to a page where the top comment talks about how a major model doesn’t get stuck on the question doesn’t seem like much of a flex.
          • co_king_5 34 minutes ago
            Have you even used Claude?

            You can feel it coming.

            • albatross79 10 minutes ago
              You seem to be doing a lot of feeling, have you tried thinking? It's pretty cool when you need a break from feeling.
        • EA-3167 39 minutes ago
          It’s amazing that economic analysis can be dismissed by “feeling the AGI”.

          You might as well be telling people to “HODL”

        • lanstin 27 minutes ago
          Have you ever tried to trick an LLM? Did you have trouble?
        • AnimalMuppet 8 minutes ago
          > Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.

          Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?

          Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.

    • RigelKentaurus 9 minutes ago
      For the U.S. economy, productivity is defined as (output measured in $)/(input measured in $). Typically, new technologies (computers, internet, AI) reduce input costs, and due to competition in the market, companies are required to reduce their prices, thereby having an overall deflationary effect on the economy. It's entirely possible that AI will have a small or no effect on productivity as measured above, but society will benefit by getting access to inexpensive products and services powered by inexpensive AI. Individual companies won't use AI to improve their productivity but will need to use AI just to stay competitive.
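      A toy sketch of that measurement effect (all numbers are hypothetical, not from the article): if AI raises real output but competition pushes prices down in step, the dollar-denominated productivity ratio can stay flat while consumers still come out ahead.

```python
# Toy illustration of dollar-measured productivity, (output $) / (input $).
# All numbers are hypothetical.

def productivity(units: float, price: float, input_cost: float) -> float:
    """Dollar output divided by dollar input."""
    return (units * price) / input_cost

# Before AI: 1000 units sold at $10, produced with $8000 of inputs.
before = productivity(units=1000, price=10.0, input_cost=8000.0)

# With AI: 25% more units for the same input spend, but competition
# forces a 20% price cut, so dollar output is unchanged.
after = productivity(units=1250, price=8.0, input_cost=8000.0)

print(before, after)  # measured productivity is flat at 1.25
print(1250 / 1000)    # yet real units delivered are up 25%
```

The point of the sketch is only that the measured ratio can be constant while the real quantity of goods rises; the gain shows up as lower prices rather than as a productivity statistic.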
    • yowayb 22 minutes ago
      I think this paragraph from the Wikipedia article captures it nicely:

      >Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.

    • yifanl an hour ago
      The difference being that AI's marketing has been significantly more prevalent than any early computing efforts.
      • jsheard an hour ago
        Not to mention the investment is on another level. We've got companies with valuations in the hundred-billions talking about raising trillions to buy all of the computers in the world, before establishing whether they can even turn a profit, nevermind upend the economy.
        • bdangubic 36 minutes ago
          the investments are being made by massively profitable companies (our biggest and brightest ones, the ones that have been carrying the economy for quite some time now, even before "AI"). even just in recent history we have seen companies making large investments and being very unprofitable until they weren't anymore (e.g. Uber). and it is always the same story, everyone is up in arms "this is not sustainable etc..."

          whether or not these companies can turn a profit - time will tell. but I am betting that our massively profitable companies (which are biggest spenders of course) perhaps know what they are doing and just maybe they should get the benefit of the doubt until they are proven wrong. but if I had to make a wager and on one side I have google, microsoft, amazon, meta... and on the other side I have bunch of AI bubble people with a bunch of time to predict a "crash" I'd put my money on the former...

          • arctic-true 11 minutes ago
            The fact that the companies that have already shoveled billions of dollars at this are continuing to do so is equally consistent with AI improvement and adoption stalling as it is with infinite improvement and widespread adoption. Yes, it's irrational to chase sunk costs - but unlike the VC funds that backed Uber and its competition, many of the players in this game are exposed to public markets, which are not known for being rigorously logical. If you pull back on your AI investments, the markets will punish you - probably vigorously - and if your only concern is the value of your stock options, it is entirely rational for you to act in a way that keeps the market from punishing their value. We're 3 years in without showing any ROI, and who's to say we can't get 3 or 5 or 10 more? Plenty of time to cash out before the eventual reckoning.
          • jsheard 26 minutes ago
            I'd maybe think twice about assuming Meta knows what they're doing after they just pissed $75 billion up the wall on a Metaverse dream that went nowhere.
            • bdangubic 22 minutes ago
              if it was just Meta perhaps I’d think twice but it is not just Meta, it is all of them
              • albatross79 6 minutes ago
                "The lemmings can't be wrong, they're all doing it". I think you're overlooking the incentive structures here.
      • petcat an hour ago
        This seems false to me. Commodore and Apple were blitzing every advertising medium and especially TV ads in the early 1980s.
    • kakapo5672 an hour ago
      Yep, and the same with the internet. During the 1990s and 2000s, people kept wondering why the internet wasn't showing up in productivity numbers. Many asked if the internet was therefore just a fad or bubble. Same as some now do with AI.

      It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.

      • rainsford an hour ago
        Sure, but you have to consider Carl Sagan's point, "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Some truly useful technologies start out slow and the question is asked if they are fads or bubbles even though they end up having huge impact. But plenty of things that at first appeared to be fads or bubbles truly were fads or bubbles.

        Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.

        • jeltz 24 minutes ago
          Columbus was not a genius. He was an idiot who believed the earth was smaller than the scientists of his day, and the scientists were right. Columbus became successful through pure luck, genocide and cruelty.

          Most idiots like Columbus died in obscurity.

          • rainsford 8 minutes ago
            Yeah the inclusion of Columbus is admittedly not great, but it's part of the original quote and the overall point is still a good one.
          • arisAlexis 38 minutes ago
            Even the fact that you mentioned NFTs as a comparison hurts my mind.
          • kibwen 19 minutes ago
            I mean, it's an apt comparison, given that the Venn diagram between the pro-NFT hucksters and the pro-AI crowd is a circle. When you listen to people who were so publicly and embarrassingly wrong about the future try to sell you on their next hustle, skepticism is the correct posture.
      • recursive an hour ago
        Also, there's no particular reason to group it in with those two. There are plenty of things that never showed up at all. It's just not a signal. It's kind of like "My kid is failing math, but he's just bored. Einstein failed a lot too, you know." Regardless of whether Einstein actually failed anything, there are a lot more non-Einsteins that have failed.
      • sillyfluke an hour ago
        It didn't take mobile apps 20 years to add to the economy after the launch of the iPhone, though, did it?
        • m4rtink an hour ago
          The iPhone was not the first mobile device or even the first smartphone. Not to mention it did not support mobile applications as we know them today.
          • sillyfluke 36 minutes ago
            That seems a tad reductionist. Why not just say the iPhone was completely inconsequential because, after all, it's simply another "computer"? Why not go back even further and start the timer at the first physical implementation of a Turing machine?

            The iPhone's killer UX plus the App Store release can be directly traced to the growth in tech in the years after its release.

    • h0dlnHorses an hour ago
      [dead]
  • mirekrusin an hour ago
    This article seems to have "basically zero" content.

    Today you have to be blind to not see the change that is coming.

    World has its own (massive) inertia, burocracy present in businesses accounting for a big part in it.

    AI itself is moving fast, but not at infinite speed. We're starting to have good enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling - in general, everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot - and that's assuming their corp is OK with it, isn't blocking it, and "using AI" doesn't just mean "you can copy-paste code to/from Copilot 365".

    As people say - something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.

    • ipaddr 20 minutes ago
      Nothing changed in Dec/Jan. Everything changed in 2023 with someone's first OpenAI chat, and things are slowly getting adopted into everything, with high, marginal, and negative benefits.

      Things are actually slowing down. And society will still see AI adding little to next year's report. The costs still outweigh the benefits.

    • dvt 28 minutes ago
      We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets. Clearly there's potential there, and a lot of people are working on products in the AI space (myself included), but anyone that's seriously tried to wrangle these models will agree with the reality that it's very hard to reliably get them to do what you want them to do.
    • burgerone 32 minutes ago
      It's not that the technology is not there yet; it's all the ethical concerns, and the mental barrier that nobody wants to spend their day begging an AI for solutions.
    • geraneum 29 minutes ago
      > This article seems to have "basically zero" content.

      Why? It's descriptive of the "past", while you're trying to predict the near/far "future" and project your assumptions. Two different things.

    • __loam 28 minutes ago
      Can't even spell bureaucracy while you're making big predictions like this.
    • staplers 36 minutes ago

      > the change that is coming.
      Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data.
    • gaigalas 23 minutes ago
      Change is always coming. It's cute when someone thinks this time it's going to be special.
  • snowhale 9 minutes ago
    the measurement problem here is real. GDP captures output, not latent capacity or quality. an ops team that responds to 200 requests/week with AI at 2x speed doesn't show up in GDP if headcount stays flat. the value is captured in retention, fewer escalations, faster revenue ops cycles -- none of which hit a GDP line directly. the reason AI added zero isn't that it didn't work. it's that we're measuring the wrong thing.
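    A minimal sketch of that accounting gap, reusing the hypothetical numbers above: national accounts see dollar flows, not throughput, so a 2x speedup with flat headcount and flat billing registers as zero.

```python
# Toy sketch (hypothetical numbers): GDP-style accounting sees dollar
# flows, not throughput, so a 2x AI speedup with flat headcount and
# flat billing contributes nothing to the measured output line.

headcount = 10                 # unchanged after adopting AI
weekly_revenue = 50_000.0      # same contracts, same prices

requests_before = 200          # requests handled per week, pre-AI
requests_after = 400           # same team at 2x speed

# Dollar flows are identical before and after, so the GDP delta is zero...
gdp_delta = weekly_revenue - weekly_revenue

# ...while latent capacity per head has doubled, invisibly to GDP.
capacity_gain = (requests_after / headcount) / (requests_before / headcount)

print(gdp_delta)       # 0.0
print(capacity_gain)   # 2.0
```

Whether the doubled capacity ever becomes measurable depends on it eventually showing up in prices, revenue, or headcount - exactly the lag the productivity-paradox comments above describe.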
  • mark_l_watson 29 minutes ago
    I'll do the Minority Report here: I loved the article. The point being that rich people hyping AI for their own enrichment have somewhat shut down rational arguments about benefits vs. costs, the costs being: energy use; the environmental impact of using environmentally unfriendly energy sources out of desperation; water pollution from byproducts of electronics production and recycling, and from water use in data centers; diverting money from infrastructure and social programs; putting more debt stress on society; etc.

    I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI - it is just that I hate the almost religious tech belief that real AI will happen from exponential cost increases for LLM training and inference for essentially linear gains.

    I get that some lazy ass people have turned vibe coding and development into what I consider an activity sort-of like mindlessly scrolling social media.

    • boxedemp 9 minutes ago
      I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

      Where are they?

      Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

      Incidentally, this comment was written by AI.

  • user____name 4 minutes ago
    There really need to be better metrics about the state of an economy than GDP.
  • pluto_modadic an hour ago
    Why do I have a feeling that this will be ignored as biased by the people who need to read it the most.
    • ohyoutravel an hour ago
      It’s a grift being perpetuated by the folks at the top, who then sweep along in their slipstream folks under them, and so on. The folks who “need to hear this” are helpless to go against and so can’t back down, and the folks who don’t need to hear this because they’re driving it have their paychecks aligned to it, so they’re not backing down.
    • co_king_5 an hour ago
      [flagged]
      • platevoltage 20 minutes ago
        It's just missing a question mark, man. Is this really something you should be doing on a 9-day-old account?
    • brokencode 38 minutes ago
      Why do I get the feeling that AI skeptics will treat it as definitive and irrefutable proof that they were right all along, even though it's one data point in an industry that hasn't even been around for 5 years?
  • mgh2 38 minutes ago
    Trickle-down effect reversal:

    > "A lot of the AI investment that we're seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP"
  • sillyfluke an hour ago
    Bottom line, no one's buying your vibeslop when they can create and maintain their own for their custom needs. And if we're not buying each other's vibeslop, there's no productivity to be measured in the economy.

    With all this recent Claw stuff, it's weird that, as people who should be championing the opposite due to our field of study or industry, some of us are now pushing a method of automation that is akin to robo vacuums randomly tracking dogshit across the carpet.

    In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.

    I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or for agent-generated SDKs or building blocks, when there is no guarantee or even a probability of correctness attached to the result. The effort of validating and editing a generated email can be equal to or greater than that of manually writing a regular email, let alone one of real complexity or significance.

    And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish or business workflows at any moment.

    It's almost like trying to build a house of cards faster than the speed with which it is collapsing. There seems to be a morbid fascination among even the best of us with how far things can be taken until this way forward leads to some indisputable catastrophe.

    • ekjhgkejhgk an hour ago
      > a method of automation that is akin to robo vacuums randomly tracking dogshit across the carpet.

      Is it possible that this sort of problem will be fixed? Hypothetically, what would happen in a scenario where one of these apps can reliably do in 1 hr the work that would take a developer a month? Or is your premise that that will NEVER happen?

      • sillyfluke an hour ago
        The same underlying magic that enables LLMs to be faster than a brute-force SQL query on all the world's data while producing "good enough" results appears to be the very thing that creates hallucinations and finite context windows, i.e. there is no free lunch. The theory of many in the field (Ilya included?) seems to be that the obstacle might not be overcome without an LLM-level breakthrough in AI research, or, maybe more likely, a breakthrough in hardware. Big Tech, until at least recently, seems to have thought it could brute-force it with energy (nuclear). But who's paying?
      • keybored 38 minutes ago
        No need to stress out over us rank and file answering that question. An entire economy is boiling based on it.
  • qgin an hour ago
    The most interesting thing about this is that the underlying economy is actually stronger than people realize. The narrative has been that AI data center construction was propping up an otherwise weak economy. If this analysis is true, then it wasn't being propped up by data center construction. The strength was ordinary, underlying strength.

    I have no doubt that people will use this to grind axes about how they think AI is dumb in general, but I feel like that misses the point that this is mostly about data center construction contributing to GDP.

    • Gigachad an hour ago
      The US economy is remarkably resilient considering it's withstood a year of sabotage from the top down.
      • vachina 22 minutes ago
        The top don't run the show. Tells you how much value they provide.
  • HardCodedBias an hour ago
    I think this is key:

    "On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."

    No doubt people are using it at work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ) - the question is how much productivity results, and to whom it accrues.

    Partially this is AI capability (both today and in the past); partially this is people taking time to change their tools.

  • keybored an hour ago
    Note last year. The vibes coming from the Claude dungeons tell a different story. Just in the last six weeks. We are on the precipice.
    • thomasfromcdnjs 38 minutes ago
      I've been using Claude Code to build my own GPT model from absolute scratch in TypeScript, with C code it generates for the GPU. Any time it wants to use CUDA or some lib to do things faster, I can keep telling it to write it in TypeScript or C, etc. Lots of fun, and it actually works lol
    • co_king_5 43 minutes ago
      ^This. Claude is very rapidly approaching AGI.

      Opus 4.6 is SPECIAL. nothing like other models. This is a new breed of intelligence.

      I give it 18-24 months until we see a full-scale societal transformation.

  • Madmallard an hour ago
    Yet the job situation for software developers in the United States is borderline terminal. Interesting.
    • co_king_5 an hour ago
      COVID and "AI" lowered the threshold of acceptable service to the extent that software vendors are making offshoring attempts again.
  • deterministic an hour ago
    I completely agree. If AI can't do 100% of a job, then you can't remove the job.

    And most jobs that can be automated have already been automated using traditional software.

    • _aavaa_ 2 minutes ago
      If AI does 90% of the work, you can either do more work with your current staff, or fire a portion and have them do the same amount of work.
    • singpolyma 39 minutes ago
      A lot of jobs that could be automated haven't been, because it's not worth it, because the people with domain knowledge can't imagine automating it, or other related problems.

      I'm not sure if LLMs will change that or not

    • codexon an hour ago
      You can replace it with a much lower paid employee though.
      • singpolyma 38 minutes ago
        That's not growth. Growth is having the existing employee do more.
      • loloquwowndueo an hour ago
        A lower paid and less qualified employee won’t be able to spot when the AI screws up.

        Having a higher-paid, qualified employee supervise multiple AIs, where the human only needs to spot the mistakes - maybe.

        • codexon an hour ago
          I'm not sure that's entirely true. For most things, checking whether a solution is correct is much easier than implementing it (the page looks wrong, you can't log in, etc...)
      • qudat 42 minutes ago
        You definitely cannot. Code org, architecture, and system design are senior level roles and responsibilities.
        • codexon 36 minutes ago
          AI is already aware of the best practices. It does not just blindly do what you ask of it in the simplest way.
          • saulpw 29 minutes ago
            Best practices are always situation dependent.
            • codexon 20 minutes ago
              Claude code will prompt you and explain to you what practice fits a situation. It might not do it perfectly, but the foundations are there.
  • pigpag 2 hours ago
    [dead]
  • ath3nd 2 hours ago
    [dead]
  • phendrenad2 an hour ago
    I'm sure we can find stories from the 1980s and 1990s about how the "world wide web" hasn't increased the GDP at all.
    • sib an hour ago
      Given that the first communication between a web server and client was in December 1990 (and that was private to Tim B-L's environment), and it was released to the public in 1991, I bet we actually couldn't find such stories in the 1980s :)
    • trimethylpurine 33 minutes ago
      I assume you mean computing technology generally, not the www (which didn't exist yet). And until around the second half of the '90s, those papers were right. Most papers you'll find arguing that it wasn't contributing much to productivity were saying just that - that it wasn't, not that it wouldn't. At the time, they were right. Productivity had stagnated despite heavy spending on technology.

      But now we have something else happening. It's hard to find an application for something that makes a lot of mistakes. That's not the same issue. The issue then was that no one had written the software yet. Everyone knew what software needed writing. The future was obvious. Here, not so much. We can't see how to make it not make mistakes.

      We have to hope someone will come up with a solution to that. Otherwise their big bets on something non-productive won't pan out the same way that the computer did, and we're all going to suffer for it.

  • mtct88 an hour ago
    I think it’s still a bit too early to draw the conclusion.

    We need to get past the hype first and let the cash grabbers crash.

    After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.

    • gdulli 37 minutes ago
      What about social media, did that evolve into something sane and useful or has it remained owned by the cash grabbers? Have we not yet internalized that they've permanently captured control of technological advances?
  • co_king_5 an hour ago
    AI may have added basically zero to US economic growth, but tech sector productivity is 250%-300% of what it was before Claude Code was released, because of all the 10x-20x engineers it created.
    • platevoltage 16 minutes ago
      Why do I feel like I'm being marketed to right now? Is this HackerNews or Instagram?
    • deterministic an hour ago
      I assume this is sarcasm?
    • loloquwowndueo an hour ago
      Definitely missing a /s
    • tvaughan an hour ago
      GitHub has never been down since Copilot was released.