54 points by v-mdev 9 hours ago | 17 comments
  • fxtentacle an hour ago
    Code is not cheap. It's just heavily subsidised with VC money. But that won't last forever.

    Uber Eats also used to be dirt cheap. Surprise! It's not anymore.

    And even if you just pay API prices for Opus - as opposed to using a subsidized subscription - you can easily reach the point where the tokens for AI-generated code become comparable in price to paying a junior dev's salary for a manual implementation. AI is great for greenfield projects, where there is little to no existing context. But on real codebases, people memorize large parts of them. That allows them to navigate files with 100k+ tokens in them. (Whereas the Opus API will charge you $2.50 each time the model runs through 100k thinking tokens reviewing your file.)
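    A back-of-envelope sketch of that cost comparison. Every figure below is an assumption for illustration, not a quoted rate; the per-token price is chosen so that 100k tokens lands near the $2.50 figure mentioned above:

```javascript
// All prices and usage figures are assumptions, not quoted rates.
const PRICE_PER_MTOK = 25;       // assumed $/1M tokens => 100k tokens = $2.50
const TOKENS_PER_PASS = 100_000; // one pass over a large file
const PASSES_PER_DAY = 40;       // assumed agent runs per working day
const WORKING_DAYS = 22;         // working days per month

const costPerPass = (TOKENS_PER_PASS / 1e6) * PRICE_PER_MTOK;
const monthlyApiCost = costPerPass * PASSES_PER_DAY * WORKING_DAYS;

console.log(`$${costPerPass.toFixed(2)} per pass`);     // $2.50 per pass
console.log(`$${monthlyApiCost.toFixed(0)} per month`); // $2200 per month
```

    At those assumed rates, heavy agent use ends up in the same order of magnitude as a junior salary in many markets, which is the comparison being made.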

    But what AI can imitate pretty well is the result of having a clueless middle-manager review your code. So my prediction would be that the AI "revolution" will slim out management layers before it'll reach actual developers.

    • packetlost an hour ago
      I doubt that all of the providers hosting open models on open platforms are losing money on serving inference. They have the benefit of not having to pay for training, but the models are open and aren't going away anytime soon.
      • fxtentacle an hour ago
        Sadly, I have not been able to find any open model that comes close to Opus 4.6. So while they are much cheaper to deploy, they also aren't good enough for unsupervised agent execution. But you need a model that can run unsupervised for the claim "Code is Cheap" to become possible.
    • ieie3366 40 minutes ago
      Hardware has always gotten cheaper every year, and always will. You will be able to run Opus 4.6 tier models locally with junkyard hardware in 2035.
      • Pwntastic 16 minutes ago
        Citation needed. Hardware prices have gone up substantially in the past year.
        • ieie3366 12 minutes ago
          "global warming is not real because it's really cold today"
  • kranner 2 hours ago
    Towards the end this article contradicts itself so severely I don't think a human wrote this.

    But this isn’t really about AI enthusiasm or AI scepticism. It’s about industrialisation. It has happened over and over in every sector, and the pattern is always the same: the people who industrialise outcompete those who don’t. You can buy handmade pottery from Etsy, or you can buy it mass-produced from a store. Each proposition values different things. But if you’re running a business that depends on pottery, you’d better understand the economics.

    So which is it?

    Will an industrialised process always outcompete a pre-industrial process? Or do they not compete at all, because they value different things?

    • kjksf an hour ago
      Do you disagree with his analogy?

      Hand-made pottery cannot compete on price with industrially made pottery, and therefore the majority of pottery is made industrially.

      100% human-written code cannot compete on price with AI-assisted code, and therefore the majority of code will be written with the assistance of AI.

      The point of the Etsy aside is that handmade potters can't compete with industrially made pottery on price, so they were pushed out of mass-market pottery and had to find a tiny niche. Before industrialization, handmade pottery was mass-market pottery. It was outcompeted in the mass market and had to move into a niche.

      And that part doesn't even translate to code. People are not buying lines of code, so you're not going to be buying handmade code.

      Handmade pottery can offer variety (designs) not available in mass-produced pottery. When you look at software, you can't tell whether it was 100% handwritten or written with the assistance of AI.

      • kranner an hour ago
        If the argument is about cost per unit of output, bringing in Etsy makes no sense at all, especially when they explicitly mention it's about valuing different things.

        Handmade pottery can certainly be better quality than mass-produced pottery, just like handwritten code can be better quality than AI-assisted code. There is a spate of new macOS apps that are clearly AI-written, with memory leaks, high CPU usage, and UI that doesn't conform to macOS conventions (in one instance I'm aware of, the interface changed completely between updates). Of course users can tell the difference.

        If you're going to spend a lot of time making sure the AI-generated code is perfect, does the industrialisation analogy still hold? There's a spectrum here, of course, from vibe-coded to agentic to Copilot-level assistance to no AI assistance (which may be a little silly).

      • csomar an hour ago
        This is interesting because the cost of cloning code is zero. Human-written code could be cheaper than AI-written code because of the cost of distribution. The same does not apply to pottery, because creating/distributing an extra bowl takes >0 resources.

        My point (and the issue I have with the article) is that the quality of code (whatever that means) is not measured by the number of lines. Whether the code is generated by AI or humans, the market is not going to care, just as it didn't care whether it was written by someone in Silicon Valley or in the middle of East Asia.

    • jubilanti 2 hours ago
      And why are they talking about Etsy as if it doesn't bring in $2+ billion in revenue?
      • lmeyerov 2 hours ago
        I'm not too familiar with Etsy, but presumably most Etsy sellers are closer to being lemonade stands than they are to being Ikea.

        And yes, sometimes it's nice to support a local lemonade stand. But for my family's income, I know which segment I'd feel more confident working for.

        • kranner 2 hours ago
          Quality indie software in a niche that Ikea is not addressing can make a decent income, unlike a lemonade stand.

          And unlike at (this hypothetical) Ikea, you wouldn't have to maintain the impression of 20x AI-augmented output to avoid being fired. Well, you could still use AI as much as you want, but you wouldn't have to keep proving you're not underusing it.

      • kjksf an hour ago
        The valid comparison for his example would be revenues from mass-produced pottery vs. revenues of handmade pottery sold on Etsy.

        Methinks that mass-produced pottery makes more than $2 billion, and Etsy pottery is a tiny fraction of overall Etsy sales.

    • raviisoccupied 2 hours ago
      'It's not X, it's Y' sentence formulations are usually indicative of LLM assisted writing.
  • voidUpdate 2 hours ago
    Code is cheap, as long as you ignore the knock-on effects on RAM prices, storage prices, environmental costs, the fact that people are still burning thousands of dollars on tokens...
    • kjksf 2 hours ago
      This is not the first time in history that RAM prices have spiked.

      And it'll be resolved the same way all others were.

      demand > supply => higher prices => incentive to produce more => produce more => supply > demand => lower prices

      The drastic drop in the price of code is permanent.

      • voidUpdate 2 hours ago
        How long does it take to spin up enough completely new chip fabs to supply the demand?
        • cr125rider an hour ago
          Exactly. The problem with capitalism is how slow it is to respond to changes.
    • guzfip 2 hours ago
      Easy when you’ve got direct injections of free ponzi money continuously.
  • bushido 2 hours ago
    Interestingly, they landed on a conclusion which I have often argued against these days [0]. Code is absolutely cheap, and previously, it was the most important resource that we guarded.

    Entire job descriptions and functions were built to guard the engineer's time. Product owners, product managers, customer success, etc., all shielded the engineers who produced code because that was the scarcest resource.

    With that scarcity gone, we really need to be thinking about the entire structure differently. I'm definitely in the we still need people camp. The roles are wildly different, though. We can't continue doing the same job that we did with a slight twist.

    [0] https://dheer.co/gatekeeping-on-a-different-stage/

    • datadrivenangel an hour ago
      I partially disagree for two reasons:

      1. Code is absolutely cheap, but good, correct, non-vulnerable code, while much cheaper than it was a few years ago, is still not free, especially in a large application.

      2. Requirements management matters less when the cost of software is lower, because iteration is cheaper. But bad customer communication can absolutely result in negatively useful software, and understanding what people want and need is a skill that takes a lot of time to develop. So in many cases a product manager can still do useful work... most won't, though.

      • bushido an hour ago
        That's partly the harness for me. I do believe product managers are still important, but they're important in deciding what gets shipped, not what gets built.

        Engineers are still important. They're important in building the harness to ensure that anything which is being built/shipped is of sufficient quality.

        In my opinion, testing/QA/etc is now the core product.

        But the best code you'll get comes from connecting the pain point the customer described directly to the agentic workflow that is building your product.

        Bad customer communication, in my experience, is the result of every person who handled the conversation before it reached the engineers massaging the message, trying to make sure the next person is motivated to pass it along to the next gatekeeper.

        This is all very biased based on my own workflow though.

  • eqvinox an hour ago
    As a FOSS maintainer… code was already cheap before. Good code wasn't. And it still isn't… even if only because the cost includes review, but still often enough for the code itself too.
  • juancn 2 hours ago
    Production grade code is still expensive for any non-trivial product. I think it may be getting even more expensive.

    It gets even harder when there's an expectation that your products implement some sort of AI.

    Not necessarily an LLM, but to succeed these products need to feel easy and magical. The bar is higher, and that makes them expensive: more edge cases, harder to deploy, more expensive to run, and so on.

    Someone has to babysit the security and the runtimes, PMs still run around figuring out the competitive landscape and so on.

    AI just moved the pain points: for every part that's gotten easier, some other part got way harder, mainly because we don't yet have the experience to effectively tackle the change in scale of the challenges.

  • didgetmaster 2 hours ago
    An important element that Willison left out of his definition of 'good code' is efficiency!

    Software has an amazing multiplier effect. It can be copied to millions of machines and run billions of times each day. Code that wastes resources (time, memory, disk space, electricity, etc.) can become incredibly expensive to run, even if it was vibe coded in a day for a few dollars.
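    That multiplier is easy to put numbers on. A sketch with purely assumed figures (none taken from the article):

```javascript
// Hypothetical figures: a function that wastes 50ms of CPU per call,
// shipped to a widely deployed fleet of machines.
const wastedSecondsPerCall = 0.05;    // 50ms of avoidable work per call
const callsPerMachinePerDay = 1_000;
const machines = 1_000_000;

const wastedCpuSecondsPerDay =
  wastedSecondsPerCall * callsPerMachinePerDay * machines;
const wastedCpuDaysPerDay = wastedCpuSecondsPerDay / 86_400;

console.log(wastedCpuDaysPerDay.toFixed(0) + " CPU-days wasted per day"); // 579
```

    A few dollars saved vibe coding can translate into hundreds of CPU-days of waste every single day once the code is deployed at scale.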

    Has anyone taken a serious look at all the code being spit out by AI with regards to how efficient it is?

    • kjksf an hour ago
      AI is assisting you. It'll write efficient code if you guide it to write efficient code. You're not a hapless victim of AI-written code.

      To give you a concrete example: the pretext library recently made waves. I looked at the code and noticed that isCJK could possibly be faster.

      So I spent 30 minutes TELLING Claude to write a benchmark and implement several different, hopefully faster, versions. Some Claude came up with by itself and some were based on my guidance.

      You can see the result here: https://github.com/chenglou/pretext/issues/2

      The original isCJK, also written by AI (I assume), was fast. It wasn't obviously slow like a lot of the human-written JavaScript code I see.

      Claude did implement a faster version.

      Could I do the same thing (write multiple implementations and benchmark them) without Claude? Yes.

      Would I do it? Probably not. It would take significantly longer than 30 minutes, and I don't have that much time to spend on isCJK.

      Would I achieve as good a result? Probably not. The big win came from replacing for..of with a regular for loop, something that didn't occur to me, but Claude did it because I instructed it to "come up with ideas to speed it up". I'm an expert in writing fast code, but I don't know everything and don't have all the good ideas. AI knows everything; you just need to poke it the right way.
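      The kind of comparison described above is easy to sketch. The following is a hypothetical illustration (the code point ranges and harness are mine, not pretext's actual implementation) of scanning a string for CJK characters with `for...of` versus an indexed `for` loop:

```javascript
// Hypothetical sketch, not pretext's actual code. Both functions report
// whether a string contains a CJK character, using illustrative BMP ranges.

function hasCJKForOf(s) {
  for (const ch of s) { // iterator protocol: decodes surrogate pairs, allocates
    const cp = ch.codePointAt(0);
    if ((cp >= 0x4e00 && cp <= 0x9fff) || // CJK Unified Ideographs
        (cp >= 0x3040 && cp <= 0x30ff) || // Hiragana + Katakana
        (cp >= 0xac00 && cp <= 0xd7af)) { // Hangul syllables
      return true;
    }
  }
  return false;
}

function hasCJKIndexed(s) {
  for (let i = 0; i < s.length; i++) { // plain index loop, no iterator
    const cp = s.charCodeAt(i); // sufficient: all checked ranges are in the BMP
    if ((cp >= 0x4e00 && cp <= 0x9fff) ||
        (cp >= 0x3040 && cp <= 0x30ff) ||
        (cp >= 0xac00 && cp <= 0xd7af)) {
      return true;
    }
  }
  return false;
}

// Crude micro-benchmark: time many calls over a mostly-ASCII string.
function bench(fn, s, iters) {
  const t0 = performance.now();
  for (let i = 0; i < iters; i++) fn(s);
  return performance.now() - t0; // milliseconds
}

const sample = "hello world ".repeat(1000) + "漢";
console.log("for...of:", bench(hasCJKForOf, sample, 500).toFixed(1), "ms");
console.log("indexed :", bench(hasCJKIndexed, sample, 500).toFixed(1), "ms");
```

      On typical V8 builds the indexed loop tends to win, because `for...of` goes through the iterator protocol and decodes surrogate pairs on every character; since the ranges checked here are all in the BMP, `charCodeAt` gives the same answer without that overhead.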

      • didgetmaster an hour ago
        What worries me is that good, efficient code will no longer be widely shared like before. Everyone will just write their own inefficient version of a general-purpose function or library, because Claude or some other AI coder made it cheap and easy.
  • d0100 2 hours ago
    AI-assisted development is the generalist's dream

    Although it still hasn't solved procrastinating on the next plan prompt

    • bwestergard 2 hours ago
      Have you seen the meme with three spidermen, labeled "Designer", "Product Manager", and "Engineer" wherein each is pointing to the other two and saying "I don't need you anymore!"?

      Most of the time, the person saying that is wrong.

  • nicpottier 2 hours ago
    > This isn’t a minor detail, it’s the core constraint that shaped virtually every habit and institution in our industry.

    I am so, so tired of this turn of phrase in LLM-created content. I guess I don't know for sure whether this article was LLM-written, but I suspect so. Or, scarier still, we are changing our own writing to match this slop.

    • xienze 2 hours ago
      I find it amusing that software developers have no issue with having an LLM churn out slop code but have such a visceral reaction to slop articles.
      • philipov 2 hours ago
        You are falling into the trap of thinking there's a single monolithic being called Software Developers that has inconsistent opinions. In fact, you're observing different people with conflicting values.
        • xienze 2 hours ago
          Yeah yeah. But LLMs certainly have been embraced by a large number of developers. Many of whom I've observed react with disgust when they see "not X, but Y" or emdashes in an article. But when it comes to code, "wow this is so awesome!"
      • kjksf an hour ago
        I have no issue with code generated by e.g. Claude, because it's not "slop".

        On average, it's probably better than the code I would write.

        I say "on average" because AI doesn't make stupid mistakes like inverting logical conditions. I know I do. I eventually fix them, but it's better to not make them in the first place, hence "on average".

        And in cases where AI doesn't generate code up to my quality standards, I re-prompt it until it does. Or fix it myself.

        I'm not a hapless victim of AI. I'm a supervisor. I operate a machine that generates good code most of the time but not all of the time. I'm there to spot and correct the "not all of the time" cases.

        • xienze 27 minutes ago
          But that's my point. LLMs generate good prose "most of the time", certainly better than most people are capable of writing. Yet we frequently react with disgust when we see tell-tale signs of LLM-generated text in articles. Why? Because it indicates the person was probably too lazy to write it themselves and is simply chucking a half-formed thought over the wall? Why don't we hold generated code to the same standard?
  • dude250711 3 hours ago
    "That’s not a forecast. That’s the current state."

    Something about this sentence sequence looks vaguely familiar...

    • kranner 2 hours ago
      There are many other such patterns in the text; re-arranged to make them more obvious:

      Why do we estimate stories? Because developer time is expensive and someone has to budget for it.

      Why do we prioritise features in backlogs? Because we can’t build everything and we need to choose what’s worth the cost.

      Why do we agonise over whether to refactor this module or write that debug interface? Because the time spent on one thing is time not spent on another.

      We have compilers: either it compiles or it doesn’t.

      We have test suites: either the tests pass or they don’t.

      Planning. Estimating. Feature prioritisation. Code review. Architecture review. Sprint planning. All of it is downstream of the assumption that writing code is the expensive part.

      ... type systems, linters, static analysis. Software gives us verification tools that most other domains lack.

  • twosdai 4 hours ago
    Found this really well written. I really enjoyed reading it, and found myself agreeing with a lot of it.

    I wish the author had written more about the day-2 problems of AI-built applications. The programming language, the architecture, and the design all matter somewhat for debugging and for verifying our reasoning when we want to alter the system specification.

    Basically, as a dev, or "owner" of the application, we are responsible for the continuous changes and updates to the system, which I've found hard to reason about in practice when speaking to other people if I don't know the code explicitly.

    • qazxcvbnmlp 3 hours ago
      They allude to it, but I think one of the new skills that will be valuable is reasoning about systems where you don't know the code. This is what “owners” and managers who don’t touch code do today.
      • Madmallard 2 hours ago
        This isn't really a thing. You can't create an accurate 4K image from a low res JPG. Owners and managers who don't touch code don't know shit and have to go to the developers to learn about important decisions made in the application and how they work in detail.
        • croes 2 hours ago
          Owners and managers will believe the AI companies that AI can create an accurate 4K image from a low res JPG.
        • jditu 2 hours ago
          [dead]
  • ChrisArchitect 2 hours ago
    Related from Simon in February:

    Writing code is cheap now

    https://news.ycombinator.com/item?id=47125374

  • croes 2 hours ago
    >Code Is Cheap Now

    And electricity comes from the outlet and milk from the supermarket.

    At the moment, billions of dollars of investor money heavily subsidize the AI services. Let's wait for the price when those companies need to generate a profit.

  • otabdeveloper4 2 hours ago
    Shit code was always cheap; that's why "technical debt" exists as a concept.
  • tovej 2 hours ago
    I would question the framing that code is cheap now. That's not really meaningful. What is the cost most associated with software? Maintenance.

    Considering that, I would say a much more accurate statement is that sub-prime technical debt is now easy to take on.

    I'm surprised at the low quality of the grifting comments in this thread. I have a feeling that the vibe coding enjoyers used to at least make defensible statements. Now it's just pure hype. Seems like we're in the SBF being lauded for FTX part of the bubble.

    • otabdeveloper4 2 hours ago
      > sub-prime technical debt is now easy to take on

      Vibe-coded projects can't keep up with the scale of technical debt accretion. See the proliferation of OpenClaw clones - instead of fixing it we're iterating on rewriting it from scratch without fixing the core issues. (Give it a year and the "minimal" Claw-clones will also collapse under technical debt, because they're also vibe-coded, with all that implies.)

  • kratos007 3 hours ago
    [dead]
  • dist-epoch 2 hours ago
    > Developers who learn to specify, verify, and iterate will thrive.

    This will last for about one year.

    From next year agents will be prompting themselves. Human developers will have approximately zero economic value.

    • croes 2 hours ago
      Prompting themselves to do what?
      • xienze 2 hours ago
        Oh, you'll still need a human to give the initial prompt, like say "Write me a Notion clone." But after that, what value is the human developer really providing? Expertise? Why can't LLMs advance sufficiently to cover that in addition to the programming?

        This is a tale as old as time. Techies are so enamored with new gadgets that they eagerly develop the tools business managers will bury them with.

      • dist-epoch 2 hours ago
        specify, verify, iterate