This happens, but it's only one way to use a coding agent. I'm working on a small, personal project, but I ask it to do "code health" tasks all the time, like complicated refactorings, improving test coverage, improving the tooling surrounding the code, and fixing readability issues by renaming things. Project quality keeps getting better. I like getting the code squeaky clean and now I have a power washer.
You do have to ask for these things, though.
Some people like using hand tools and others use power tools, but our goals aren't necessarily all that different.
I've found it very unsatisfactory (in both experience and results) to use them to replace code production. But in terms of augmenting the process - used to critique, explore alternatives, surface information - they're getting really quite handy.
In reality there are tons of tasks at work that are boring and time constrained. There are days I don't enjoy it, and days I do. It's not binary - I still love programming by hand but at times I let Agents work whilst reviewing the results.
If you try to do anything outside of typical n-tiered apps (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller) it all falls apart very very quickly.
If the protocol is even slightly complex then the docs/reqs won't fit in the context with the code. Bootstrapping / initial bring-up of a protocol should be really easy but Claude struggles immensely.
I have had an AI assistant reverse engineer a complex TCP protocol (3 simultaneous connections, each with a different purpose, all binary stuff) from a bunch of PCAPs and then build a working Python server to speak that protocol to a 20-year-old Windows XP client. Granted, it took two tries: Claude Opus 4.1 (this was late September) was almost up to the task, but kept making small mistakes in its implementation that were getting annoying. So I started fresh with Codex CLI, and GPT-5.1-Codex had a working version in a couple of hours. Model and tool quality can have a huge impact on this stuff.
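For anyone wondering what the starting point for that kind of job looks like, here is a minimal sketch (assuming scapy is installed; "capture.pcap" is a hypothetical filename, not anything from the project described above) that pulls the raw TCP payloads out of a PCAP and groups them by connection so the binary framing can be eyeballed:

    # Minimal sketch: extract raw TCP payloads from a capture, grouped by connection,
    # so the binary protocol can be inspected by hand (or fed to a model).
    # Assumes scapy is installed; "capture.pcap" is a hypothetical filename.
    from collections import defaultdict

    from scapy.all import IP, TCP, rdpcap

    streams = defaultdict(list)  # (src, sport, dst, dport) -> list of payload chunks

    for pkt in rdpcap("capture.pcap"):
        if pkt.haslayer(IP) and pkt.haslayer(TCP):
            payload = bytes(pkt[TCP].payload)
            if payload:
                key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
                streams[key].append(payload)

    for key, chunks in streams.items():
        total = sum(len(c) for c in chunks)
        print(key, f"{len(chunks)} segments, {total} bytes")
        print("  first bytes:", chunks[0][:32].hex(" "))

From there it's mostly pattern-spotting - lengths, magic numbers, sequence counters - which is exactly the part where model and tool quality makes the difference described above.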
Claude Opus 4.5 is truly impressive.
The sloppier a web app is, the more CSS frameworks end up fighting for control of every pixel, and simply deleting the 500,000 files in node_modules to clear it out brings Windows to its knees.
On the other hand, anything you can fit in a small AVR-8 isn't very big.
Whatever you do, your mileage may vary.
Dependencies are minimal. There’s no CSS framework yet and it’s a little messy, but I plan to do an audit of HTML tag usage, CSS class usage, and JSX component usage. We (the coding agent and I) will consider whether Tailwind or some other framework would help or not. I’ll ask it to write a design doc.
I’m also using Deno which helps.
Greenfield personal projects can be fun. It’s tough to talk about programming in the abstract when projects vary so much.
Speak for yourself, OP. I have my gripes with LLMs but they absolutely can and will help me create and understand the code I write.
> At least, they value it far less than the end result.
This does not appear to apply to OP at all, but plenty of programmers who like code for the sake of code create more problems than they solve.
In summary, LLMs amplify. The bad gets worse and the good gets better.
As for me, sometimes I code because I want something to do a specific thing, and I honestly couldn't be bothered to care how it happens.
Sometimes I code because I want something to work a very specific way or to learn how to make something work better, and I want to have my brain so deep in a chunk of code I can see it in my sleep.
Sometimes the creative expression is in the 'what' - one of my little startup tasks these days is experimenting with UI designs for helping a human get a task done as efficiently as possible. Sometimes it's in the 'how' - implementing the backend to that thing to be ridiculously fast and efficient. Sometimes it's both and sometimes it's neither!
A beautiful thing about code is that it can be a tool and it can be an expressive medium that scratches our urge to create and dive into things, or it can be both at the same time. Code is the most flexible substance on earth, for good and for creating incredible messes.
The thing I don't value is typing out all of that code myself.
Now, if I had just said, "Dear Claude, make it so I can read files from any client and figure out how to represent the results in the same way, no matter what the input is", I can agree I _might_ be stepping into "you're not gonna understand the software"-land. That's where responsibility comes into play. Reading the code that's produced is vital. I, however, am still not at the point where I'm giving feature work to LLMs. I make a plan for what I want to do, and give the mundane stuff to the AI.
Isn't this a bit like saying you love storytelling, but you don't value actually speaking the words?
Because this feels very close to skating across a line where you don't actually understand or value the real medium.
Basically - the architectural equivalent of this leads to things like: https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse
Where the architects are divorced from the actual construction, and the end result is genuinely terrible.
I frequently find that the code I write using agents is better code, because small improvements no longer cost me velocity or time. If I think "huh, I should really have used a different pattern for that but it's already in 100+ places around the codebase" fixing it used to be a big decision... now it's a prompt.
None of my APIs lack interactive debugging tools any more. Everything that needs documentation is documented. I'm much less likely to take on technical debt - you take that on when fixing it would cost more time than you have available, but those constraints have changed for me.
You're blanket replacing chunks of code without actually considering the context of each one.
Personally - I still have mixed feelings about it. The Hyatt Regency walkway was literally one of the examples brought up in my engineering classes about the risks of doing "simple pattern changes". I'm not referencing it out of thin air...
---
Havens Steel Company had manufactured the rods, and the company objected that the whole rod below the fourth floor would have to be threaded in order to screw on the nuts to hold the fourth-floor walkway in place. These threads would be subject to damage as the fourth-floor structure was hoisted into place. Havens Steel proposed that two separate and offset sets of rods be used: the first set suspending the fourth-floor walkway from the ceiling, and the second set suspending the second-floor walkway from the fourth-floor walkway.[22]
This design change would be fatal. In the original design, the beams of the fourth-floor walkway had to support only the weight of the fourth-floor walkway, with the weight of the second-floor walkway supported completely by the rods. In the revised design, however, the fourth-floor beams supported both the fourth- and second-floor walkways, but were only strong enough for 30% of that load.
---
Just use a different pattern? In this case, the steel company also believed it was a quick pattern improvement... they avoided a complex installation issue with threaded rods. Too bad it killed 114 people.
I'm going to use a human comparison here, even though I try to avoid them. It's like having a team of interns who you explain the refactoring to, send them off to help get it done and review their work at the end.
If the interns are screwing it up you notice and update your instructions to them so they can try again.
I've worked in a couple positions where the software I've written does actually deal directly with the physical safety of people (medical, aviation, defense) - which I know is rare for a lot of folks here.
Applying that line of thinking to those positions... I find it makes me a tad itchy.
I think there's a lot of software where I don't really mind much (ex - almost every SaaS service under the sun, most consumer grade software, etc).
And I'm absolutely using these tools in those positions - so I'm not really judging that. I'm just wondering if there's a line we should be considering somewhere here.
I genuinely feel less nervous about working on those categories of software if I can bring coding agents along for the ride, because I'm confident I can use those tools to help me write software that's safer and less likely to have critical bugs.
Armed with coding agents I can get to 100% test coverage, perform things like fuzz testing, get second and third opinions on designs, and have conversations about failure patterns that I may not personally have considered.
For me, coding agents represent the ability to use techniques that were previously constrained by my time. I get more time now.
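To make the fuzz-testing point above concrete, here is a minimal sketch of the kind of property-based test an agent can generate in bulk, using the hypothesis library; the encode_record/decode_record pair is a hypothetical stand-in, not code from any project mentioned here:

    # Minimal property/fuzz-style test with hypothesis; encode_record/decode_record
    # are hypothetical stand-ins for whatever round-trip logic you want to harden.
    from hypothesis import given, strategies as st


    def encode_record(name: str, value: int) -> bytes:
        # toy length-prefixed encoding: 2-byte name length, name, 4-byte signed value
        data = name.encode("utf-8")
        return len(data).to_bytes(2, "big") + data + value.to_bytes(4, "big", signed=True)


    def decode_record(blob: bytes) -> tuple:
        n = int.from_bytes(blob[:2], "big")
        name = blob[2:2 + n].decode("utf-8")
        value = int.from_bytes(blob[2 + n:2 + n + 4], "big", signed=True)
        return name, value


    @given(st.text(max_size=500), st.integers(min_value=-2**31, max_value=2**31 - 1))
    def test_round_trip(name, value):
        # property: decoding what we just encoded returns the original values
        assert decode_record(encode_record(name, value)) == (name, value)

Run it with pytest and hypothesis will hammer the round-trip with generated inputs, including the weird Unicode and boundary values a human rarely bothers to type out.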
I'm still creating software but with English that's compiled down to some other language.
I'm personally comfortable reading code in many languages. That means I'm able (hopefully!) to spot something that doesn't look quite right. I don't have to be the one pressing keys on the keyboard, but I'm still accountable for the code I compile and submit.
For an old grey beard, this is actually fine, but if you're still in love with coding, it must be a loss.
They can also help reduce the number of devs a company needs to operate, and arguably reduce the skill level needed to generate software changes that provide business value. While it is true that these tools could let companies keep the same staff count and simply increase productivity, the reality is that the productivity increase isn't big enough to offset the expense of a big dev team, while it is arguably big enough that you can do more with less.
And yes, companies laying off folks because of AI might suffer for it in the next decade or two. But that won't save _you_ from being unemployed _now_ and struggling to find a role until the tide shifts. The market can afford to remain irrational far longer than your average dev can afford to remain unemployed.
Now I can try two or three implementations of something that I can throw away in the process of really understanding a problem and then quickly do it right.
Instead of spending a day tracking down a problem in my mental blind spot I have a 70% chance of getting the answer in two minutes.
Instead of overthinking the documentation I can have a prototype running in 10 minutes.
I value writing and playing my own music.
Most are happy enough to listen to other people’s music.
Somehow with music this isn’t a religious war.
My point is, though, it occurred to me why he's excited about it. He has no ability whatsoever to write music in notes, or song lyrics. But with his tool, he's able to make music that he finds decent enough to feel excited about helping to shape it.
No criticism to those who can't do a thing on their own, but are excited to be able to do it with a tool. And yes, you can certainly elaborate on and debate craftsmanship, and the benchmarks and measures of quality of an end result when made through expert skill and care, or by amateurs with a powerful (and perhaps imperfect or imprecise) tool.
So personal anecdote, using generative code has not interested me personally, because I love writing code, and I'm very good at it, and I'm very fast. Of course machines can do things faster than me (once I learn the different skill of prompting), but speed hasn't really been a massively limiting factor for me when trying to build things. (There are lots of other things that can get in my way!)
I'm reminded of the oft used quote, "He who can, does; he who cannot, teaches" - George Bernard Shaw. (Just, now the teaching is that of a machine, who then does.)
So you (or in this case I) get all excited about how fantastic it is, but others who hear it are just kind of 'meh'. The only way I know this is from listening to songs shared with that same exuberance by others, and to me they are 'meh'.
I shared this sentiment with some folks and one person said 'yeah, you should try writing your own music sometime...same thing happens' xD
Interestingly, "Suno" in one of the world languages (Hindi) means "listen".
That's a pretty sweeping generalization. Just because I don't value the act of typing into a keyboard doesn't mean that I don't value the craft of creating and understanding software. I am not outsourcing my understanding to the LLM, I am outsourcing the typing of the code.
What you are describing is not engineering, it's (pardon the phrase) vibe coding. Claude Code is just a tool, and everyone is going to use that tool differently. There is nothing inherent in the tool itself that requires you to surrender your agency and understanding. If you do, that's on you, not Claude.
Is it wild the thing I took a while to learn is now done basically by a GPU? Yes. But hand writing code is not my identity. Never has been.
I still do enjoy having an LLM help me through some mental roadblocks, explore alternatives, or give me insight on patterns or languages I'm not immediately familiar with. It speeds up the process for me.
Why assign YOUR feeling about an interaction to everyone else? LLM coding agents are a tool for me to investigate and learn about what I'm doing. For others they can be a therapist, or an organizational tool, or whatever.
The reality is that the vast majority of software is nowhere near those levels of priority, aside from the big social media apps. Like many jobs, many apps out there are helpful because they increase the flow of money in the economy, not because they are critical.
I'm as critical of AI code generation as the next guy, but unfortunately we live in a world with lots of accidental complexity forced on us, and it's not surprising that a lot of people are simply relieved that they can leave some of the boring and frankly exhausting footwork of collecting all the necessary boilerplate to some assistant. That allows them to focus on high-level understanding and the actual creative part, rather than the opposite, as OP suggests here.
This comes with the assumption that everyone is vibe coding. Which just isn’t the case in the professional circles I’m part of. People are steering tools and putting up guard rails to save time typing. The “creating” part of coding has very little to do with the code, in fact my perspective is that it’s the most insignificant part of creating software.
At the end of the day, how code gets into a PR doesn’t matter. A human should be responsible for reviewing, correcting, and validating the code.
FWIW I find these statements trivializing of the craft and the passion. Some of us do like the craft of creating a massive structure that we understand from the pylons to the nuts and bolts. Reviewing AI-generated code doesn't bring us close to the understanding of the problem that comes from having solved the problem ourselves.
If this level of detail-orientation doesn't interest you, that's fine, and it perhaps shouldn't bother you to have someone say that? We can agree these are subjective values.
I’d much rather get into the intricacies of the business use cases, game mechanics, architectural paradigms, than to focus on typing something I’ve done dozens of times before. I think that’s where I’m at with it.
OK, in that case you make a fair point. I'm not averse to the typing autocompletion either. But most of the work I've been involved with has been research-oriented where the AI's offer to help solve the problem is neither welcome nor useful. So it's a different orientation altogether.
I, personally, don't understand what it is people enjoy about using these tools. I find them tedious, boring, and they often make me angry: subtle bugs and outright lies in the output and no prompt can resolve the problem so I end up having to fix it by hand anyway. It's not pleasant for me.
But other people do, and while I don't get it I try not to yuck their yum too much.
There is no empathy from companies though. They don't care about code and never have.
They should though. Every line of code is a liability. And now we have tools that can generate liability on demand faster than a team of dedicated humans who are trying to be conservative about managing that liability. You still have to be careful of course but now you're taking responsibility for what the machine generates. You're not the one driving anymore and using a tool. You're a tool being used. At least, that's how I feel about it.
Of course to capital holders and investors this doesn't matter, so we may end up being forced to use these tools even if they're not sufficient or useful. Even if they generate liability. We're rather good, as an industry, at deflecting the consequences of liability.
They can not only generate code but also explain code, concepts, and architecture, and show you stuff.
Great learning tool.
Most people have a car just to get around, it's true, and some people love tinkering with cars and engines. The car you get around in doesn't have to be the same car as the one you tinker with.
Also, regarding the whole "I am more productive" vibe: well, management will happily reduce team size. It has always been do more with less, and now the robots are here.
Each day we get one step closer to software development reaching the factory level.
Yes, some will be left around to maintain the robots, or to do the little things they still aren't able to perform (until they are), plus a couple of managers.
All the rest, I guess there are other domains where robots haven't yet taken over.
I for one am happier to be closer to retirement than to be hunting for junior jobs straight out of a degree; it is going to get tough out there.
This is very similar to the statement - People who love using Python (or some other language that isn't C or assembly) to create software are loving it because they don’t value the act of creating & understanding the software.
It's extra hilarious to hear someone you _thought_ treated their code work as a craft refer to "producing 3 weeks worth of work in the last week", because (a) I don't believe it, not one bit, unless you are the slowest typist on earth, and (b) it clearly positions them as a code _consumer_, not a code _creator_, and they're happy about it. I would not be.
Code is my tool for solving problems. I'd rather write code than _debug_ code - which is what code-gen-bound people are destined to do, all day long. I'd rather not waste the time on a spec sheet to convince the llm to lean a little towards what I want.
Where I've found LLMs useful is in documentation queries, BUT (and it's quite a big BUT) they're only any good at this when the documentation is unchanging. Try asking it questions about the nuances of the new extension syntax in C# between dotnet 8 and dotnet 10 - I just had to correct it twice in the same session, on the same topic, because it confidently told me stuff that would not compile. Or take the Elasticsearch client documentation: the REST side has remained fairly constant, but if you want help with the latest C# library, you have to remind it of that fact all the time - not because it doesn't have any information on the latest stuff, but because it consistently conflates old docs with new libraries. An attempt to upgrade a project from webpack 4 to webpack 5 had the same problems - the LLM confidently telling me to do "X", which would not work in webpack 5. And the real kicker is that if you can prove the LLM wrong (e.g. respond with "you're wrong, that does not compile"), it will try again and get closer - but, as in the case with C# extension methods, I had to push on this twice to get to the truth.
Now, if they can't reliably get the correct context when querying documentation, why would I think they could get it right when writing code? At the very best, I'll get a copy-pasta of someone else's trash, and learn nothing. At the worst, I'll spin for days, unless I skill up past the level of the LLM and correct it. Not to mention that the bug rate in suggested code that I've seen is well over 80%: I've had a few positive results, but a lot of the time, if it builds, it has subtle (or flagrant!) bugs - and, as I say, I'd rather _write_ code than _debug_ someone else's shitty code. By far.
> we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc)
Not necessarily. I sometimes have a very clear vision of what I want to build: all the architecture, design choices, etc. It's simply easier to formalize a detailed design/spec document + a code review to check that everything follows what I had in mind, than to type everything myself. It's like the "bucket" tool in Paint. You don't always need to click pixel by pixel if you already know what you want to fill.
Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.
You don’t have Paint perform the flood fill five times and then pick the result you like the most (or dislike the least).
> Whatever your design document/spec, there are generally a lot of ways and variations of how to implement it, and programmers like the OP do care about those.
You could make the same argument about compilers: whatever code you wrote, your compiler may produce assembly instructions in a nondeterministic way. Of course, there are many ways to write the same thing, but the end performance is usually the same (assuming you know what you are doing).
If your spec is strong enough to hold between different variations, you shouldn't need to worry about the small details.
The difference is that the compiler is bound by formal (or quasi-formal) language semantics. In terms of language semantics, you always get precisely the same result, regardless of how the compiler implements it. When you change the source code, you can reason and predict with precision about how this will change the behavior of your compiled program. You can’t do that reasoning with AI prompts, they don’t have that level of predictability.
Bit of a stretch, I think, because the compiler guarantees it will follow the language spec. The LLM will be influenced by your spec but there are no guarantees.
I haven't run into this type yet, thankfully. As an AI lover, I find the architecture of the code more important than before.
* It’s harder to understand code you didn’t write line by line; readability is more important than it was before.
* Code is being produced faster and with lower bars; code collapsing under its own shitty weight becomes more of a problem than it was before.
* Tests/compiler feedback helps AI self correct its code without you having to intervene; this is, again, more important than it was before.
All the problems I liked thinking about before AI are how I spend my time. Do I remember specific ActiveRecord syntax anymore? No. But that was always a Google search away. Do I care about what those ORM calls actually generate SQL-wise and do with the planner? Yes, and in fact it’s easier to get at that information now.
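As an analogous sketch of that last point in Python/SQLAlchemy (not the ActiveRecord stack mentioned above), using a throwaway in-memory SQLite database with made-up table and column names: print the SQL an ORM query generates, then ask the planner what it would do with it.

    # Analogous sketch (SQLAlchemy, not ActiveRecord): show the SQL behind an ORM
    # query and the planner's take on it. In-memory SQLite; the schema is made up.
    from sqlalchemy import Integer, String, create_engine, select, text
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


    class Base(DeclarativeBase):
        pass


    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        email: Mapped[str] = mapped_column(String, index=True)


    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)

    stmt = select(User).where(User.email == "someone@example.com")

    # The SQL the ORM will emit, with literal values inlined for readability.
    compiled = stmt.compile(engine, compile_kwargs={"literal_binds": True})
    print(compiled)

    # What the planner intends to do with it (SQLite's EXPLAIN QUERY PLAN).
    with engine.connect() as conn:
        for row in conn.execute(text(f"EXPLAIN QUERY PLAN {compiled}")):
            print(row)

The point is less the specific API than that this kind of scaffolding around an existing query is now cheap to ask for.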
- Person learns how to do desirable hard thing
- Person forms identity around their ability to do hard thing
- Hard thing is so desired that people work to make it easier
- Hard thing becomes easy, lowering the bar so everybody can do it (democratization)
- Good for society, but the individual person feels their identity, value, and uniqueness in the world challenged. Sour grapes, cope, and bitterness follow.
The key is to not form your identity around "Thing." And if you have done so, now is the time to broaden this identity and become a more well rounded individual instead of getting bitter.
You should form your identity around more lasting/important things like your values, your character, your family, your community, and the general fact that you can provide value to those around you in many ways.
For instance, I use a React form library called Formik. It's outdated and hasn't seen a real commit in years. The industry has moved on to other form libraries, but I really like Formik's API and have built quite a bit of functionality around it. And while I don't necessarily need a library to be actively worked on to use it, in this instance its lack of updates has caused it to fall behind in terms of performance and new React features.
The issue is that I'm also building a large, complex project and spend 80-90% of my waking time on that. So what do I do? Do I just accept it and move on? Take the time to migrate to a form library that may very well be out of date in a year when React releases again? Or do I experiment with Claude by asking it to update Formik to use the latest React 19 features? Long story short, I did the latter and have a new version of Formik running in my app. And during that, I got to watch and ask Claude what updates it was making and, most importantly, why it was making those updates. Is it perfect? No. But it's def better than before.
I love programming. I love building stuff. That doesn't change for me with these tools. I still spend most of my time hand-writing code. But that doesn't mean there isn't a place for this tech.
> I’ll likely never love a tool like Claude Code, even if I do use it, because I value the task it automates. [...] Like other technologies, AI coding tools help us automate tasks: specifically, the ones we don’t value.
Where the article talks about value, you're talking about time [savings] - but you both actually mean the same thing: Getting a fair amount of value for the time spent.
I also don't seem to get your React Formik example... programming isn't solely about "SemVer numbers going up", it's about designing powerful abstractions for (re-)occurring problems. Being on the consuming side of a UI form library is something different from designing its API.
For one thing, I'm sure stable products have been built with Formik@1.0.0^ (it's at @2.4.9 currently). For a second thing, I don't think doing the manual labor of playing a smarter dependabot is as valuable as you think it is. Formik still has 3 million weekly downloads with its latest release being 2 months old, why don't you upstream your changes?
This is straight from the article: "People who love using AI to create software are loving it because they don’t value the act of creating & understanding the software." How is my response that this is missing the point wrong? I have no personal feelings about AI. I don't "love" it. And I also value the act of creating and understanding software, but I don't have the time to do all of that. So, I'm failing to see what point you're making.
> programming isn't solely about "SemVer numbers going up",
Did you read my post at all? What on earth does this have to do with Formik using legacy API's and not being as performant as the other options?
> it's about designing powerful abstractions for (re-)occurring problems. Being on the consuming side of a UI form library is something different from designing its API.
Again, did you read my post at all?
> For one thing, I'm sure stable products have been built with Formik@1.0.0^ (it's at @2.4.9 currently).
What? What does this have to do with it being years behind current React features? Do you even use React? Don't tell me you're arguing about a React form library while not actively using React?
> For a second thing, I don't think doing the manual labor of playing a smarter dependabot is as valuable as you think it is
lol What?
> Formik still has 3 million weekly downloads with its latest release being 2 months old, why don't you upstream your changes?
This is what happens when you just Google a few things and think you know everything. Just a quick question: that last "release", what did it include? Actually, take this a step further: in the last 2 years, what major updates were released?
> Formik still has 3 million weekly downloads with its latest release being 2 months old, why don't you upstream your changes
Who said I wasn't lol? What is wrong with you? Not only have you completely misinterpreted what I've said (while not having any relevant experience in the area), you're now accusing me of things.
What an absolutely ridiculous reply.
https://github.com/jaredpalmer/formik/tree/v2.1.6/packages/f...
lol "Formik just had a release" you don't know what you're talking about.
Are you Copilot, github-actions[bot], or jaredpalmer himself? (ref: https://github.com/jaredpalmer/formik/graphs/contributors?fr...)
> lol "Formik just had a release" you don't know what you're talking about.
GitHub can be difficult to navigate, I guess you wanted to link to the release page: https://github.com/jaredpalmer/formik/releases/tag/formik%40...
you: "WHY AREN'T YOU ON THE CONTRIBUTOR LIST!>!?!?!"
The irony of sharing that screen while it obviously shows the project hasn't been maintained. lol.
lol go ahead, look at that commit. What did it do? And what about the one prior to that? Explain to me in your own words (no AI) how Formik has kept up to date with new React features?
> Are you Copilot, github-actions[bot], or jaredpalmer himself? (ref: https://github.com/jaredpalmer/formik/graphs/contributors?fr...)
But since you're on the repo, go take a look at the issues and discussions log. Go search for "Is this repo dead?". Go read any number of the 10,000 comments about forms in React on any social media site of your choosing. If Jared works at Vercel and Formik, according to you, is still in "active development", why would they use RHF?
You're not a serious person if you think you can Google a few things and automatically understand the form ecosystem in React.