There is a real advantage to having good code, especially when using agents. "Good code" makes iteration faster: the agent is less likely to make mistakes and will continue to produce extensible code that can easily be debugged (by both you and the agent).
A couple of months ago I refactored a module that had gotten unwieldy, and I tried to test whether Claude could add new features to the old code. Opus 4.5 just could not add the feature in the legacy module (a monster function that had been feature-crept), but was able to completely one-shot it after the refactor.
So there is clear value in having "clean code", but I'm not sure how valuable it is. If even AGI cannot handle tech debt, then there's value in at least building the scaffolding (or at least prompting for the scaffolding first). On the other hand, there may be a future where the human doesn't concern himself with "clean code" at all: if "clean code" only saves a sufficiently advanced agent five minutes, the scaffolding work is useless.
My reference is assembly - I'm in my early 30s and I have never once cared about "clean" assembly. I have cared about the ASM of specific hot functions I have had to optimize, but I've never learned what is proper architecture for assembly programs.
IMO we shouldn't strive to make an entire codebase pristine, but building anything on shaky foundations is a recipe for disaster.
Perhaps the frontier models of 2026H2 will be good enough to start compacting and cleaning up entire codebases, but given the workflows frontier labs suggest for coding agents, combined with increasing context window capabilities, I don't see this becoming a priority or a design goal.
I don't think this will happen - or rather I don't think you can ask someone, human or machine, to come in and "compact and clean" your codebase. What is "clean" code depends on your assumptions, constraints, and a guess about what the future will require.
Modularity where none is required becomes boilerplate. Over-rigidity becomes spaghetti code and "hacks". Deciding what should be modular and what should stay fixed requires some imagination about what the future might bring, and that requires planning.
A vast number of things. There are many things I will accept having done at even mediocre quality, because in the old, pre-AI world I would never have gotten to them at all.
Every friend with a startup idea. Every repetitive form I have to fill out every month for compliance. Just tooling for my day to day life.
The fact that everyone thinks they're a startup founder now is a major part of the problem. Y'all are falling for the billionaire marketing. Anything that can be built with vibe coding for a startup can be instantly copied by someone else. This makes no sense.
Since most work on software projects is going to be done via AI agents (coding, debugging, QA, etc.), you should prioritize finding ways to increase the velocity of those agents to maximize the velocity of the project.
>Are you that bad at it?
That is irrelevant.
>Is there anything you really have to get done regardless of quality right this second?
You are implying that AI agents produce low-quality work, but that is not the case. Being able to save time for an equivalent result is a good thing.
>Just write the code yourself, and stop training your replacement.
AI labs are the ones training the better AI.
Why?
I really can't wait until the inference providers 5x their prices and you guys realize you're completely incompetent and that you've tied your competency 1:1 to the quality and quantity of tokens you can afford. You're going to be a character from the movie Idiocracy.
Of course you'll still be coping by claiming you're so productive using some budget kimi 2.5 endpoint that consumes all your personal data and trains on your inputs.
Opus 4.5 changed that and like every programming tool I've used in the past, I decided to sit seriously with it and try and learn how to use it. If coding agents turn out to be a bust, then oh well, it goes into the graveyard of shit I've learned that has gone nowhere (Angular, Coffeescript, various NoSQL databases, various "big data" frameworks). Even now one of my favorite languages is Rust, but I really took the plunge into the language before async/await and people also called it overhyped.
If coding agents are real, I don't want to be struggling to learn how to use them while everyone else is being 10x more productive. There's no downside to learning how to use them for me. I've invested my time in many hyped software frameworks, some good and some bad.
I never said they would.
>Of course you'll still be coping by claiming you're so productive using some budget kimi 2.5 endpoint
I would. I already run my persistent agent on Kimi 2.5 and use Kimi CLI.
There is still a market for good code in the world, however. The uses of software are nearly infinite, and while certain big-name software gets a free pass on being shitty due to monopoly and network effects, other types of software will still find people who will pay for them if they are responsive, secure, not wildly buggy, and can add new features without a 6 month turnaround time because the codebase isn't a crime against humanity.
On another note, there have been at least four articles on the front page today about the death of coding. As there are every other day. I know I'm tired of reading them, but don't people get bored of writing them?
Nope, the value the code creates was always what was valued.
Now we can refactor more easily than ever. And quite a lot of code was throwaway to begin with, so there's no need to deliver good code. Not in the first iteration. But if it is going to be improved upon, part of the improvement will be to prepare it for that improvement.
I understand the sentiment here but it shouldn't be surprising that people are upset that their profession and livelihoods are being drastically changed due to advances in AI.
Look, it's either this or a dozen articles a day about Claude Code.
Also, I would assume there are not many frequently used, significant pages at billion/trillion-dollar companies that take 5 seconds to load text.
> I know I'm tired of reading them, but don't people get bored of writing them?
People never get tired of reading or commenting on commentary on their hobbies.
I use Electron applications. They are usable, for some value of the word. I am certainly not happy about it, though. I loathe the fact that I have 32GB RAM and routinely run into memory issues on a near-daily basis that should literally never happen with the workloads I'm doing. With communication-based apps like Slack and Discord where your choice of software to use comes down entirely to where the people you're communicating are, you will use dogshit because there is no point to communicating to the void on a technically superior platform.
On the topic of Electron, I’m really torn. I can’t help but feel some gratitude for the fact a few of the work tools I need work on Linux (stuff like Slack, Teams, Zoom).
Once AI/Agents actually master all tools we currently use (profilers, disassembly, debuggers) this may change but this won't be for a few years.
I could certainly see the point they were trying to make, but I pointed out that compilers produced code from abstract syntax trees, and they created abstract syntax trees by processing tokens that were defined by a grammar. Further, the same tokens in the same sequence would always produce the same abstract syntax tree. That is not the case with coding 'agents': what they produce is, by definition, an approximation of a solution to the prompt as presented. I pointed out that you could design a lot of things successfully just assuming that the value of pi was 3. But when things had to fit together, they wouldn't.
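The determinism claim is easy to demonstrate. Here is a small sketch using Go's standard go/parser and go/format packages: parse the same source twice, render both ASTs back to text, and the results are identical every time (the sample source is made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"go/format"
	"go/parser"
	"go/token"
)

// parseAndPrint parses Go source into an AST and renders the tree back
// to text. Same tokens in, same tree (and same rendering) out, always.
func parseAndPrint(src string) string {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "sample.go", src, 0)
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	if err := format.Node(&buf, fset, file); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	src := "package main\nfunc add(a, b int) int { return a + b }\n"
	first := parseAndPrint(src)
	second := parseAndPrint(src)
	fmt.Println(first == second) // deterministic: prints true
}
```

An agent given the same prompt twice offers no such guarantee, which is the whole contrast being drawn.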
We are entering a period where a phenomenal amount of machine code will be created that approximates the function desired. I happen to think it will be a time of many malfunctioning systems in interesting and sometimes dangerous ways.
Apt analogy. I’m gonna steal it!
That audience is changing. Increasingly, the primary reader is an agent, not a human. Good code now means code that lets agents make changes quickly and safely to create value.
Humans and agents have very different constraints. Humans have limited working memory and rely on abstraction to compress complexity. Agents are comfortable with hundreds of thousands of tokens and can brute-force pattern recognition and generation where humans cannot.
We are still at the start of this shift. Our languages and tools were designed for humans. The next phase is optimizing them for agents, and it likely will not be humans doing that optimization. LLMs themselves will design tools, representations, and workflows that suit agent cognition rather than human intuition.
Just as high-level languages bent machine code toward human needs, LLMs let us specify intent at a much higher level. From there, agents can shape the underlying systems to better serve their own strengths.
For now, engineers are still needed to provide rigor and clearly specify intent. As feedback loops shorten, we will see more imperfect systems refined through use rather than upfront design. The iteration looks less like careful planning and more like saying “I expected you to do ABC, not XYZ,” then correcting from there.
The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction. We comfort ourselves by saying it is pragmatic.
LLMs target this because the vast, vast majority of code is not written like this, for better or for worse. (It's not a value judgment; it just is.) This is a continuation (couldn't resist) of the trend away from things like SICP. Even the SICP authors admitted programming had become more about experimentation and gluing together ready-made parts than building beautifully layered abstractions out of which programs just fall easily.
I don't agree with the author, BTW. Good code is needed in certain things. It's just that a lot of the industry really tries to beat it out of you. That's been the case for a while. What's different now is that devs themselves are seemingly joining in (or at least are being perceived to be).
> The problem with this argument is many do not believe this sort of leverage is possible outside of a select few domains, so we're sort of condemned to stay at a low level of abstraction.
I think there's a similar, tangential problem to consider here: people don't think that they are the person to create the serious abstraction that saves every future developer X amount of time, because it's so easy to write the glue code every time. A world where every library or API was as well thought out as the virtual memory subsystem would be overspecified, but would at the same time enable creations far beyond the ones seen today (imo).
> Even the SICP authors admitted programming had become more about experimentation and gluing together ready-made parts than building beautifully layered abstractions out of which programs just fall easily
People talk about writing the code itself and being intimate with it and knowing how every nook and cranny works. This is gone. It's more akin to being on call, where you're trudging through code and understanding it as you go.
Good code is easy to understand in this scenario; you get a clear view of intent, and the right details are hidden to keep you from being overwhelmed.
We’re going to spend a lot more time reading code than before, better make it a very good experience.
I think that’s untrue, I think it’s /more/ important than before. I think you’re going to have significantly more leverage with these tools if you’re capable of thinking.
If you’re not, you’re just going to produce garbage extremely fast.
The use of these tools does not preclude you from being the potter at the clay wheel.
I'm not worried about this at Modal, but I am worried about this in the greater OSS community. How can I reasonably trust that the tools I'm using are built in a sound manner, when the barrier to producing good-looking bad code is so low?
Honest answer: You never could.
And the comment by 'ElatedOwl is pretty directly responding to that second idea.
Nothing has fundamentally changed! A good solution is a good solution.
I do worry that the mental health of developers will take a downturn if they’re forced into a brain rotting slop shovelling routine, however.
So yes readability and good concise code is still important.
So having used Claude Code since it came out I’ve decided the resulting code is overall just as good as what I’d see in regular programming scenarios.
The real risk isn't that agents can't read messy code - it's that without humans deeply understanding the codebase, you lose the ability to catch when an agent has missed edge cases, taken a shortcut, or produced something that technically passes but doesn't actually solve the right problem. We've all seen agents "cheat" their way through tasks in ways that look correct on the surface.
So the question isn't whether good code matters less now - it arguably matters more. Clean architecture, clear documentation, and well-understood code are what let you verify that an agent did the right thing. And testing remains as useful as it's always been, not because the agent needs it, but because humans need proof that the system actually works. Tests are a spec, a review mechanism, and a safety net all in one.
We're a long way from truly hands-off AI development. Until then, writing good code is how you stay in control.
- PMs hate it because you're busy putting up scaffolding instead of painting
- Managers hate it because they have to cover for it
- Other engineers hate it because they could be doing it better
- VPs and directors hate it because they can't think beyond the release cycle, so the engineer is an architecture astronaut who should focus
There is basically no reward for actually putting thought into a programming solution anymore. The incentives are aligned against it unless you can get your manager to run interference for you.
I agree it is sad though. I changed careers from one I was unhappy with into software development. Part of what drew me to software was that (at least sometimes) it feels like there is a beauty in writing what the author describes as great code. It makes you really feel like a 'master craftsman', even if that sounds a bit dramatic. That part of the profession seems to be fading away the more agentic coding catches on. I still try to minimize use of any LLMs when doing personal projects so I can maintain that feeling.
Afaic, people designing circuits still do care about that.
> Good Assembly
The thing with the current state of coding is that we are not replacing "coding Java" with something else. We are replacing it with "coding Java via discussion". And that can be fine at times, but it is still a game of diminishing returns. LLMs still make surprising mistakes; they too often forget specifics, make naive assumptions, and happily settle into local minima. All of the above lead to inflated codebases in the long run, which leads to bogged-down projects and detached devs.
This is the point that everybody needs to calm down and understand. LLMs are fantastic for POCs _which then get rewritten_. Meaning: the point is to rewrite it, by hand. Even if this is not as fast as shipping the POC and pretending everything is ok (don't do this!) it still drastically speeds up the software engineering pipeline and has the potential to increase Good Code overall.
A perfectly reasonable rule in software organizations is: for greenfield code, LLMs are strictly required for first-pass prototyping. And then: hand-written (within reason) for production code. Your company will not lose its competitive edge following this guideline, and your hard-earned skills stay intact.
This statement makes almost zero sense:

> A perfectly reasonable rule in software organizations is: For greenfield code, LLMs are strictly required for 1st-pass prototyping (also required!). And then: Hand writes (within reason) for production code. Your company will not lose their competitive edge following this guideline, and this includes your hard-earned skills.
"Give me a proxy, written in go, that can handle jwt authentication" isn't your traditional crud stuff, but Claude answers that quite well.
I think "good code" was a "nice" pursuit but became too much of an end in itself, while code was always, for me, just a means to create something that "just werks".
But I'm not sure the "good code" fans need to worry, because they might be able to obsess over "proper prompting" and the "correct way to use agents" or "appropriate AI tooling" or something like that on this next wave of "code creation".
The author's colleague needed a couple of tries to write a kernel extension, and somehow this means something about programming. If it were not for LLMs I would not have gone back to low-level programming; this stuff is actually getting fun again. Let's check the assembly the compiler produced for the code the LLM produced.
On the other hand, the other responsibilities of being an engineer have become quite a bit less appealing.
I also make sure to describe and break down problems when I ask an agent to implement them in such a way that they produce code that I think is elegant.
It seems to me like people think there are only two settings: either slaving away carefully on the craft of your code at a syntactic level, manually writing it, or shitting out first-pass vibe-coded slop without taking care to specify the problem or iterate on the code afterwards. But you can apply just as much care to what the agent produces, and in my experience still see significant speedups, since refactoring, documentation, and pulling out common abstractions are things agents can do extremely reliably and quickly, but which otherwise require a lot of manual text editing and compile/test passes to do yourself.
As long as you don't get hung up on making the agent produce exactly character for character, the code you would have produced, but instead just have good standards for functionality and cleanliness and elegance of design.
I think the thing you are missing is that people are
> shitting out first pass vibe-coded stuff without really taking care to specify the problem or iterate on the code afterwards
It's naive to assume that people will take a path other than the path of least resistance now when they never did before, such as copy-pasting directly from Stack Overflow without understanding the implications of the code.
Now, there is the very valid point that those who don't care about code quality can now churn it out at vastly accelerated rates, but that didn't really feel like what the original article was talking about. It felt like it was specifically claiming that agentic tools don't really afford the ability to refine or improve your code, or that they strongly discourage it, such that you kind of can't care about good code anymore. And that's what I wanted to push back on.
I believe the right use of AI makes it possible to write more beautiful code than ever before.
I find that the flaws of agentic workflows tend to be in the vein of "repeating past mistakes", looking at previous debt-riddled files and making an equivalently debt-riddled refactor, despite it looking better on the surface. A tunnel-vision problem of sorts
The tragedy, for me, is that the bar has been lowered. What I consider to be "good enough" has gone down simply because I'm not the one writing the code itself, and feel less attachment to it, as it were.
If the answer is yes then it’s a tragedy - but one that presumably will pass once we collectively discover it. If not, then it’s just nostalgic.
If not, we could see that LLMs of tomorrow struggle to keep up with the bloat of today. The "interest on tech debt" is a huge unknown metric w.r.t. agents.
Just right now no one cares enough yet. Give it a year or two.
I could conceive something evolving on a different abstraction layer - say, clean requirements and tests, written to standard, enhanced with “common sense”
https://users.cs.utah.edu/~elb/folklore/mel.html
But now, reading, understanding, and maintaining the software is the job of coding agents. You are free to do the interesting work: setting goals and directions for the agents, evaluating the finished product, communicating with stakeholders, etc. This has always been the hard and interesting part of systems design: solving real-world problems for people and businesses.
For example, I think LLMs benefit from having very verbose code that repeats the same important code/comments/strings/exceptions all over the place. We are going to produce more code than ever before.
From what I have seen so far, productivity will go up, but there will be more and more rot underneath the systems that we build. The fact that LLMs are tireless is both a blessing and a curse. One will never stop writing code until you tell it to.
We will arrive in a future where we NEED LLMs to understand our own code. At that point, do you hire another highly skilled and expensive individual to manage the complexity of your codebase, or spend $100 more to ask an LLM to fix it (and promise we will do the refactor in the future!)?
In a sense, we have had this before. We no longer write assembly. We don't program against the hardware anymore, even if you are writing C, you are writing against the abstract C machine. We are no longer writing against any machine, it is more of a concept now. This is just another step of abstraction. "Can you please change this list to be sorted alphanumerically" will no longer involve a calculated change in UI code. It is more like telling the computer what to do.
The way I am using LLM now is to get the rough general direction. "Can you please write me an NFS driver for this new filesystem with stub functions", then it spits out ~500 lines of code that I can now study. After I am done learning, I will usually rewrite everything myself. But this is probably not productive for future LLM uses. It might be better to keep the code as is if I want to keep using LLMs to iterate.
Frankly, I don't know how I feel about this. It is probably just a part of getting older and seeing the world move past you? For god's sake I am not even 30 yet.
If the function is a black box, but you’re sure the inputs produces a certain output without side effects and is fast, do you NEED “good code” inside?
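Arguably not, as long as the contract is checked from the outside. A sketch of that stance: treat the function as opaque and assert only its observable properties (here `mysterySort` is a hypothetical stand-in for any generated implementation, and the properties checked are "output is sorted" and "output is a permutation of the input"):

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// mysterySort is the "black box": imagine it was generated and never read.
func mysterySort(xs []int) []int {
	out := append([]int(nil), xs...)
	sort.Ints(out)
	return out
}

// behaves checks only observable properties of the box: the output is
// sorted and is a permutation of the input. Internals never matter here.
func behaves(in []int) bool {
	out := mysterySort(in)
	if !sort.IntsAreSorted(out) {
		return false
	}
	count := map[int]int{}
	for _, x := range in {
		count[x]++
	}
	for _, x := range out {
		count[x]--
	}
	for _, c := range count {
		if c != 0 {
			return false
		}
	}
	return true
}

func main() {
	rng := rand.New(rand.NewSource(1)) // fixed seed for repeatability
	ok := true
	for i := 0; i < 100; i++ {
		in := make([]int, rng.Intn(20))
		for j := range in {
			in[j] = rng.Intn(50)
		}
		if !behaves(in) {
			ok = false
		}
	}
	fmt.Println(ok) // prints true: the box honors its contract on every trial
}
```

The catch, as the surrounding thread points out, is side effects and performance cliffs that this kind of input/output check never exercises.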
After about 10 years of coding, the next 10 are pretty brainless. Better to try to solve people/tech interaction problems than plumbing up yet another social/mobile/gaming/crypto thing.
AI is at best a good intern or a new junior developer. We're locking in mediocrity and continuing enshittification. "Good enough" is codified as good enough, and nothing will be good or excellent. Non-determinism and some amount of inaccuracy on the margins, continually, no matter the industry or task at hand (including finance), just so we can avoid paying a person to do the job.
Non determinism and inaccuracy are also very real features of human programmers.
There are thousands of examples where tech became obsolete, and frankly it's a given. No coder's opinion will change it, but everybody is free to pursue whatever hobby they want. The author does seem to accept it, but the commenter above does not.
As civilizations declined, pride in one's work would have been more or less as described in this comment.
But that's Soviet bureaucracy and Potemkin villages with extra steps.