75 points by jodah 4 hours ago | 13 comments
  • gbnwl 3 hours ago
    Liked the article in general, but

    > These apps will win awards at the next all-hands. In two years they’ll be unmaintainable tech debt some poor soul inherits and rewrites from scratch.

    Huge assumption/prediction that I think is actually just wrong. There's this weird assumption from a certain crowd, never justified or explained, that tech debt accrued by AI is now, and will forever be, impossible for AI to address, and will for some reason require humans to fix. Working at pace with agents, I accrue tech debt every day, then go through the code nightly, again with agents, to clean and tidy everything up.

    The more I see this view espoused the more bizarre it seems. People's assumption seems to be "if AI couldn't one-shot this perfectly the first time, then it's useless to have it go back over the codebase and identify and address issues". This doesn't match my personal experience at all; second or third passes over code with CC or Codex are almost always helpful and weed out critical issues, but I'm open to hearing from the rest of the HN crowd on their experiences with this.

    • apsurd 3 hours ago
      Tech debt used here is likely a catch-all term, and you're disagreeing, reasonably so, with one definition.

      I think human understanding of the surface area of a company is already very unwieldy. AI balloons the surface area. At some point using more AI to solve AI is reasonable! But to whatever extent a human needs to interface with and manage this world, that's the accrued debt.

    • hperrin 3 hours ago
      AIs don’t produce well organized code. They duplicate effort, which is tech debt. Maybe one day they will be able to clear their own tech debt. And who knows, maybe they’ll still be heavily subsidized by VC money then.
      • latentsea 39 minutes ago
        You can organise the code well once, template that and put guardrails in place for it to follow the structure you and the team have agreed is good. The engineering task becomes building the system that is capable of building the system to a high standard.
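        For context, one way to build the kind of guardrail described above is a layout check that runs in CI. This is a minimal illustrative sketch; the directory names and file patterns are made up, not from any real project:

```python
# Hypothetical guardrail: fail the build if generated modules stray
# from the layout the team agreed on. All names here are illustrative.
from pathlib import Path

REQUIRED_LAYOUT = {
    "handlers": "*.py",    # one handler module per endpoint
    "models": "*.py",      # persistence layer only
    "tests": "test_*.py",  # every module needs a test twin
}

def check_layout(root):
    """Return a list of layout violations (empty list means the tree conforms)."""
    problems = []
    root = Path(root)
    for dirname, pattern in REQUIRED_LAYOUT.items():
        d = root / dirname
        if not d.is_dir():
            problems.append(f"missing directory: {dirname}/")
        elif not list(d.glob(pattern)):
            problems.append(f"{dirname}/ has no files matching {pattern}")
    return problems
```

        A script like this can run before merge so an agent's output is rejected mechanically rather than by a human reviewer.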
      • vidarh an hour ago
        Having them clear it is trivial. I have my harness refactor automatically on a steady cadence, something I could never afford to take the time to do manually.
      • kvirani 3 hours ago
        This is just false.
    • gck1 3 hours ago
      Agreed, but it's a bit nuanced. I'm working on a fairly complex project now in a domain where I have no technical experience. The first iteration of the project was complete garbage, but it was garbage mainly because I asked for things to be done and never asked HOW it should be done. Result? Complete, utter garbage. It kinda, sorta worked, but again, I would never use it in anything important.

      Then we went through ~10 complete rewrites based on the learnings from previous attempts. As we went through these iterations, I became much more knowledgeable of the domain - because I saw failure points, I read the resulting code and because I asked the right questions.

      Without AI, I would likely have given up after iteration 2, and certainly would not do 10 iterations.

      So the nuance here is that iterating and throwing away the entire thing is going to become much cheaper, but not without an engineer being in the loop, asking the right questions.

      Note: each iteration went through dual reviews of codex and opus at each phase with every finding fixed and review saying everything is perfect, the best thing on earth.

      • busterarm 34 minutes ago
        I'm seeing a similar process, but on large teams, and still finding the output to be unmaintainable.

        The problem is that vanishingly few people actually understand the code and are asking the agents to do all of the interpretation and reasoning for them.

        This code that you've built is only maintainable for as long as you are still around at the company to work on it -- it's essentially a codebase that you're the only domain expert in. That's not a good outcome for companies either.

        My prediction is that the companies that learn this lesson are the ones that are going to stick around. LLMs won't be in wide use for features but for throwaway busy-work type problems that eat lots of human resources and can't be ignored.

        • gck1 14 minutes ago
          I left my last company job just before "AI-first engineering" became mainstream, and you confirmed what I was feeling all this time - I have absolutely zero idea how teams actually manage to collaborate on LLM-managed projects. All the projects that I'm working on now are my own, and the only reason I could do this is because I had unlimited time and unlimited freedom. There's no chance I would be able to do this in a team setting.

          I'm positive that the last company's CEO probably mandates by now that nobody must write a single line of code by hand and there's likely some rigid process everyone has to follow.

          Fun times ahead.

    • deklesen 3 hours ago
      This also seems to implicitly assume that AI models won't get better - a bet I am not willing to make currently.
      • e3df 3 hours ago
        Models get better with money (reinvestment).

        But if there aren't enough returns soon, the money will eventually dry up for OAI and Anthropic, and Google will not be trusted with its cash balance.

        It's amazing how people here think that money is a plaything and this dance can go on forever. It can't and won't, and the fear-induced marketing doesn't work forever either.

      • gbnwl 3 hours ago
        Agreed. The confidence people have to predict what these tools will be capable of two years down the line, when it's barely been over a year since Claude Code was first released, is astounding.
  • mholm 4 hours ago
    Decent sentiment and analogy, but writing this with AI with hackneyed examples undercuts the point
    • turtletontine 4 hours ago
      I’m noticing one hallmark of blog posts made by people who talk to LLMs all day: they have 1-3 interesting points hidden in paragraphs upon paragraphs beating the horse dead. Your favorite LLM might tell you every thought is brilliant and all your words are beautiful, but please… edit it down. At the very least, out of respect for other people’s time.
      • justherefornews 2 hours ago
        It's called body text, or even "bread text" in some languages. It was historically padded to earn the writer's bread (writers got paid per word). Americans still pay per word to this day, and writing and blogs reflect it as well.
      • sph 3 hours ago
        Haven’t you heard? Putting in effort is not cool any more. The best they can do is ask an LLM to edit it down.
  • IanCal 4 hours ago
    > Today’s backyard AI looks like AI. It is not AI.

    Getting real tired of people new to AI thinking only recent LLMs are AI somehow. BoW (bag of words) was a pretty solid technique, and that only requires you to learn how to count to one.
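    To make the "counting to one" quip concrete: a binary bag-of-words model just records, per document, which vocabulary words are present (1) or absent (0). A minimal sketch, with made-up example sentences:

```python
# Binary bag-of-words: each document becomes a 0/1 vector indicating
# which vocabulary words it contains (presence only, no counts).
def binary_bow(docs):
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] = 1
        vectors.append(v)
    return vocab, vectors

vocab, vecs = binary_bow(["the cat sat", "the dog sat down"])
# vocab: ['cat', 'dog', 'down', 'sat', 'the']
# vecs:  [[1, 0, 0, 1, 1], [0, 1, 1, 1, 1]]
```

    Despite the simplicity, this kind of representation was a workhorse for text classification long before LLMs.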

    • mrbungie 4 hours ago
      We can thank our AI overlords like sama and damodei for that.
  • skybrian 4 hours ago
    If you want to show that there's a risk of disaster you need to do better than making a silly analogy. Companies will often start expensive projects that fail, and then they pick themselves up and move on. Big, profitable companies can afford bigger failures. Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment.

    So which companies are betting so big that it might actually threaten them? Oracle maybe?

    • e3df 3 hours ago
      "Google has had a slew of failed projects, and Meta's metaverse stuff tanked, and they're still fine. They can afford to experiment."

      Only with the blessing of shareholders. Frankly, Google's search box and ad-tech have been carrying all of its failed bets, but at some point people will start questioning whether Google is returning enough cash given the results of its new investments. Google's management does not own the cash - it holds the cash on behalf of the owners.

    • outside1234 3 hours ago
      Seems clear to me that OpenAI at this point is a Ponzi scheme waiting to collapse. This is why they are trying to IPO and dump their shares on the public market before they go bankrupt.
  • operatingthetan an hour ago
    >A prompt template behind a REST endpoint is not a model.

    Not pulling any punches over there. It does feel like 95% of the "AI industry" consists of wrappers and associated tools.

  • deltamidway 3 hours ago
    Great rant! Claw-based propaganda posters make me smile.
  • 348asGaq7 4 hours ago
    This is a great comparison. The US-dominated software industry is centrally planned and in many ways run like a communist country, taking into account the whims of the current chairman in Washington.

    If the chairman dictates DEI, DEI it is. Most software developers put up the proper flags in their Twitter "bios" and purged opponents. The same developers now queue to work for Zuckerberg's "male energy" company.

    If the chairman and the industry dictate AI, AI it is. The same people who said girls and coal miners have to code now talk about efficiency, products and rationalize layoffs.

    This is the product of an industry that has been dominated by bullshitters for at least two decades.

  • vingilot 4 hours ago
    Note that the author of this blog post is also the author of a soon-to-be-published Manning book on safely implementing AI systems.
  • arisAlexis 4 hours ago
    The outputs were wrong 2 years ago maybe.
  • tim333 2 hours ago
    Comparing AI to steel production in the Great Leap Forward seems unfair. It's not some communist plan - it's a capitalist free-for-all, similar to the industrial revolutions in the UK/US. It won't lead to a famine; it'll lead to the chaotic creative destruction capitalism usually produces.
  • cynicalsecurity 3 hours ago
    Oh god, don't get me started on this. The article goes full opera-level tragedy, like we're all marching into some corporate gulag where AI eats our souls and the lights go out forever. "The famine comes later" my ass. It's peak doomer porn, written to make you feel like the sky is falling instead of just another round of executive circle jerking.

    The corporate world has always been 80% lies, fake KPIs and theatre. "Synergies", "disruptive innovation", "digital transformation" - same shit since the 90s. Managers don't give a flying fuck about your clever moat. They wake up one day, get a spreadsheet from McKinsey saying "cut 15%" and boom - your undocumented wizardry gets deleted along with your badge. Nothing personal, just Excel doing what Excel does.

    Yes, the corporate bullshittery has been turbocharged with AI now. But it's nothing new and nothing all that tragic. At the very least the same AI can help me finally release personal projects that have been collecting dust for years. Who knows what the future will bring. I'd be much more worried about an oil supply chokehold than about the AI turbo circus in the corporate world. No oil means not having enough food tomorrow, or medical supplies. My child might die because of this. But AI temporarily causing perturbations at work is just another round of corporate theatre. Been there many times.

    Employment danger is real, but not apocalyptic. Some jobs will evaporate, sure. But even as the same article states, one thing ("AI know-how") has now replaced another ("domain knowledge siloing"). The corporate machine still needs warm bodies for the messy human parts: sales, talking to customers (customers hate talking to a robot, what a fucking surprise), covering ass. I would say covering ass is the most important one, along with delegating the project management to someone else lower on the corporate hierarchy, so upper management wouldn't have to work and would only keep asking for status updates. They will always need someone to type the actual AI requests. It's not like top management or a VP would ever do that, nor would they ever run it automatically, since AI can delete production (it's happened many times), and they don't want to be the scapegoats.

    So yeah, the article is overdramatic trash for clicks. AI is just another round of that circus. The "famine" won't be real, it'll be a bunch of overpromises, just as usual. Same as it ever has been.

    • operatingthetan an hour ago
      >"Synergies", "disruptive innovation", "digital transformation" - same shit since the 90s. Managers don't give a flying fuck about your clever moat. They wake up one day, get a spreadsheet from McKinsey saying "cut 15%" and boom - your undocumented wizardry gets deleted along with your badge. Nothing personal, just Excel doing what Excel does.

      The buzzwords you cite are the vulnerabilities of corporations that predatory consultancies rely on to make sales. I don't know that the corporate world is 'about' those things so much as it suffers from them.

  • supliminal 4 hours ago
    What’s the story with Klarna? Any details around it?
    • madrox 4 hours ago
      It’s the punchline at the very end of the article. They ended up with a different SaaS vendor.
      • supliminal 3 hours ago
        Yeah I read through it but all of that is surface level. Any real insider info?

        Not sure why I was downvoted. I read the post and the linked articles.