Nothing about cajoling a model into writing what you want is essential complexity in software dev.
In addition, when you do a lot of building with no theory, you tend to create lots and lots of new non-essential complexity.
Devtools are no exception. There was already lots of nonessential complexity in them, and is it gone in the model era? ...no, don't worry, it's all still there. We built all the shiny new layers right on top of the old decaying layers, like putting lipstick on a pig.
Ah, a work of fiction.
What if AI is better at tackling essential complexity too?
There will always be someone ready to drive the price of computation low enough that it gets democratized for all. Some may disagree, but eventually that means local inference, as computer hardware improves with clever software algorithms.
In this AI story, you can take a guess at who "The Priesthood" of the 2020s are.
> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.
One could say the number of AI agents will explode and surpass humans on the internet in the next few years, and that reading AI-generated code and understanding what it does will be even more important than writing it.
So you do not get horrific issues like this [1], since the comments in the code are now consumed by the LLM; due to LLMs' inherently probabilistic and unpredictable nature, different models produce different code, and nothing short of a team of expert humans can guarantee that it is correct.
We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.
[0] https://news.ycombinator.com/item?id=46912781
[1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...