15 points by birdculture 5 hours ago | 4 comments
  • conartist6 an hour ago
    It's funny, but I think the accidental complexity is through the roof. It's skyrocketing.

    Nothing about cajoling a model to write what you want it to is essential complexity in software dev.

    In addition, when you do a lot of building with no theory, you tend to make lots and lots of new non-essential complexity.

    Devtools are no exception. There was already lots of non-essential complexity in them, and in the model era is that gone? ...no, don't worry, it's all still there. We built all the shiny new layers right on top of the old decaying ones, like putting lipstick on a pig.

  • chrisjj 3 hours ago
    > LLMs ... completing tasks at the scale of full engineering teams.

    Ah, a work of fiction.

  • slopusila 3 hours ago
    > My concerns about obsolescence have shifted toward curiosity about what remains to be built. The accidental complexity of coding is plummeting, but the essential complexity remains. The abstraction is rising again, to tame problems we haven't yet named.

    what if AI is better at tackling essential complexity too?

  • rvz an hour ago
    > With the price of computation so high, that inefficiency was like lighting money on fire. The small group of contributors capable of producing efficient and correct code considered themselves exceedingly clever, and scoffed at the idea that they could be replaced.

    There will always be someone ready to drive the price of computation low enough that it is democratized for all. Some may disagree, but that will eventually be local inference, as computer hardware improves alongside clever software algorithms.

    In this AI story, you can take a guess at who the "Priesthood" of the 2020s are.

    > You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.

    One could say the number of AI agents will explode and surpass humans on the internet in the next few years, and that reading AI-generated code and understanding what it does will become even more important than writing it.

    That way you avoid horrific issues like this [1], now that comments in the code are consumed by the LLM: given their inherently probabilistic and unpredictable nature, different LLMs produce different code, and nothing short of a team of expert humans can guarantee that it is correct.

    We'll see if you're ready to read (and fix) an abundance of AI slop and the messy architectures built by vibe-coders, as maintenance costs and security risks skyrocket.

    [0] https://news.ycombinator.com/item?id=46912781

    [1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...