Letting a robot write code for me, however tedious it would be to write manually, made me feel like I was working in someone else's codebase. It reminds me of launching a videogame and letting someone else play through the boring parts. I might as well not be playing. Why bother at all?
I understand this behaviour if you're working for a company on some miserable product, but not for personal projects.
So I'm in the same boat: AI can write good skeleton code for different purposes, so I can get running faster, but with anything complex and established it serves very little benefit. I'll end up spending more time trying to understand why and how it is doing something than I'd spend just doing it myself. When AI is a magical fix button, that's awesome, but even in those circumstances I'm just buying LLM debt: if I never need to touch that code again it's fine, but if I need to revise the code then I'll need to invest more time into understanding it and cleaning it up than I initially saved.
I'm not certain how much other folks are feeling this or if it's just me and the way my brain works, but I struggle to see the great savings outside of dead simple tasks.
AI stops coding being about the journey, and makes it about the destination. That is the polar opposite of most people's coding experience as a professional. Most developers are not about the destination, and often don't really care about the 'product', preferring to care about the code itself. They derive satisfaction from how they got to the end product instead of the end product itself.
For those developers who just want to build a thing to drive business value, or because they want a tool that they need, or because they think the end result will be fun to have, AI coding is great. It enables them to skip over (parts of) the tedious coding bit and get straight to the result bit.
If you're coding because you love coding then obviously skipping the coding bit is going to be a bad time.
Then they aren't programmers anymore, are they? We don't call people using no-code platforms "programmers" and we wouldn't trust them one bit to review actual code.
AI is simply the new no-code platform, except that the scope of what it can do is much larger while the reliability of what it produces is much lower.
In the future, though, sure, it'll be possible to build a decent app without ever seeing or understanding the code.
I do the same as you with AI now: it's allowing me to build simple things quickly and revise later. Sometimes I never have to. I feel similarly that I'm no longer progressing as a dev, just maintaining what I know. That might change; I might adapt how I approach work and find the balance, but for now it's a new activity entirely.
I've talked to many people over the years who saw coding as a get shit done activity. Stop when it's good enough. They never approached it really as a hobby and a learning experience. It wasn't about self progression to them. Mentioning that I read computer books resulted in a disgusted face "You can just google what you need when you need it".
Always felt odd to me; software development was my hobby, something I loved, not just a job. Now I think they will thrive in this world. It's pure results. No need to know a breadth of things or what's out there to start on the right foot; AI has it all somewhere in its matrix. Hopefully they develop enough taste to figure out what's good from bad when it's something that matters.
Yes! Learning is fun!
Tim Bryce, one of the foremost experts on software methodology, hated programmers and considered them deeply sad individuals who had to compensate for their mediocre intelligence and narrow thinking by gatekeeping technology and holding the rest of the company hostage to them. And, he said upper management in corporate America agreed with him.
If you place a lot of value in being a good programmer, then to the real movers and shakers in society you are at best a tool they can use to get even richer. A tool that will soon be replaced by a machine. The time has come for programmers to level up their soft skills and social standing, and focus their intelligence on the business rather than the code. It sucks but that's the reality of the AI era.
I'm not sure I'll ever write this kind of code again now. For months now, all I've done is think about the higher-level architectural decisions and prompt agents to write the actual code, which I find enjoyable, but architectural decisions are less clean and therefore, for me, less enjoyable. There's often a very clear good and bad way to write a method, but how you organise things at a higher level is much less binary. I rarely ever get that "yeah, I've done a really good job there" feeling when making higher-level decisions; it's more "eh, I think this is probably a good solution/compromise, given the requirements".
this was already happening even before AI - human review is limited, linting is limited, type checking is limited, automated testing is limited
if all of these things were perfect at catching errors then we would not need tracing and observability of production systems - but they are imperfect and you need that entire spectrum of things from testing to observability to really maintain a system
so if you said - hey I'm going to remove this biased, error prone, imperfect quality control step and just replace it with better monitoring... not that unreasonable!
LLM-agents have made making products, especially small ones, a lot easier, but sacrifice much of the crafting of details and, if the project is small enough, the architecture. I've certainly enjoyed using them a lot over the last year and a half, but I've come to really miss fully wrapping my head around a problem, having intimate knowledge of the details of the system, and taking pride in every little detail.
For a prototype, it's pretty amazing to generate a working app with one or two prompts. But when I get serious about it, it becomes such a chore. The little papercuts start adding up, I lose speed as I deal with them, and the inner workings of the app become a foreign entity to me.
It's counterintuitive, but what's helping me enjoy coding is actually going slower with AI. I found out that my productivity gains come not from building faster, but from learning faster and in a very targeted way.
edit - an interesting facet of AI progress is that the split between these two types of work gets more and more granular. It has led me to actively be aware of what I'm doing as I work, and to critically examine whether certain mechanics are inherently toilistic or creative. I realized that a LOT of what I do feels creative but isn't - the manner in which I type, the way I shape and format code. It's more in the manner of catharsis than creation.
Just like how, in writing a story, a writer must also toil over each sentence: should this be an em dash or a comma? Should I break the paragraph here or there? All these minutiae are just as important to the final product as grand ideas and architecture are.
If you don't care about those little details, then fine. But you sacrifice some authorship of the program when you outsource those things to an agent. (And I would say, you sacrifice some quality as well).
Most writers can't even get a first draft of anything done, and labor under the mistaken assumption that a first draft is just a few minor edits away from being the final book. The reality is that a first draft might be 10% of the total time of the book, and you will do many rounds of rereading and major structural revision, then several rounds of line editing. AI is bad at line editing (though it's ok at finding things to nitpick), so even if your first draft and rough structural changes are 100% AI, you have basically a 0% chance of getting published unless you completely re-write it as part of the editing process.
Google defined "toil" as, very roughly, all the non-coding work that goes into building, deploying, managing a system: https://sre.google/workbook/eliminating-toil/ , https://sre.google/sre-book/eliminating-toil/ .
Quote: "Toil is the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."
Variations of this definition are widely used.
If we map that onto your writing example, "toil" would be related to tasks like getting the work published, not the writing process itself.
With this definition of toil, you can certainly remove the toil without removing the creative work.
This is too low level. You’d be better off describing the things that need testing and asking for it to do red/green test-driven development (TDD). Then you’ll know all the tests are needed, and it’ll decide what tests to write without your intervention, and make them pass while you sip coffee :)
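Something like this is what I mean by red/green: describe the behavior as failing tests first, then let the agent make them pass. parse_price and the prices module here are made-up examples, not anything from the article:

```python
# Red phase: these fail (the prices module doesn't exist yet),
# then the agent writes prices.py until they go green.
import pytest

def test_parses_plain_number():
    from prices import parse_price  # module the agent will create
    assert parse_price("19.99") == 1999  # result in cents

def test_rejects_garbage():
    from prices import parse_price
    with pytest.raises(ValueError):
        parse_price("abc")
```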
> I don’t trust it yet is when code must be copy pasted.
Ask it to perform the copy-paste using code - have it write and execute a quick script. You can review the script before it runs and that will make sure it can’t alter details on the way through.
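For example, a throwaway script along these lines (file names and markers are made up) moves the block byte-for-byte, so nothing can get silently reworded in transit:

```python
# Throwaway helper: copy a marked block from one file to another
# verbatim. Paths and markers here are invented examples.
from pathlib import Path

src = Path("old_module.py").read_text()
start = src.index("# BEGIN validators")
end = src.index("# END validators") + len("# END validators")
block = src[start:end]

dest = Path("new_module.py")
dest.write_text(dest.read_text() + "\n\n" + block)
```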
Just don't expect to run a successful restaurant based on it.
In any case, those are ingredients (analogous to...libraries I guess?) and not to the whole application. If you served someone a canned sandwich or canned sushi or some such, they'd notice.
Pick your favorite GoF design pattern. Is that the best way to do it for the computer, or the best way to do it for the developer?
I'm just making this up now, maybe it's not the greatest example; but, let's consider the "visitor" pattern.
There's some framework that does a big loop and calls the visit() function on an object. If you want to add a new type, you inherit from that interface, implement visit() on your class, and all is well. As "good" engineering practice, this makes sense to a developer: you don't have to touch much code, and your stuff lives in its own little area. That all feels right to us as developers because we don't have a big context window.
But what if your code was all generated, and you want to add a new type that does something that would have been done in visit()? You tell the LLM "add this new functionality to the loop for this type of object". Maybe it does a case statement and puts the stuff right in the loop. That "feels" bad if there's a human in the loop, but does it matter to the computer?
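To make the contrast concrete, a toy sketch (not any particular framework):

```python
# Human-friendly shape: extension lives in its own class.
class Node:
    def visit(self) -> str:
        raise NotImplementedError

class NewNode(Node):  # adding a type = adding a class
    def visit(self) -> str:
        return "handled NewNode"

def run(nodes: list[Node]) -> None:
    for node in nodes:
        print(node.visit())

# Generated-code shape: the LLM just grows the loop in place.
def run_flat(nodes: list) -> None:
    for node in nodes:
        if isinstance(node, NewNode):
            print("handled NewNode")
        # ...one more branch edited in per type, each time
```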
Yes, we're early: LLMs aren't deterministic, and verification may be hard now. But that may change.
In the context of a higher-level language, y=x/3 and y=x/4 look the same, but I bet the generated assembly does a shift for the divide-by-4 and a multiply-by-a-magic-constant for the divide-by-3. While the "developer interface", the source code, looks similar (like writing to a visitor pattern), the generated assembly will look different. Do we care?
Why is this name bad? Because an LLM will get confused by it and do the wrong thing half the time.
So I don’t care about assembly because it does not matter usually in any metric. I design using code because that’s how I communicate intent.
If you learn how to draw, very quickly you find that no one talks about lines (which are mostly all you draw); you hear about shapes, texture, edges, values, balance... It's in these higher abstractions that intent resides.
Same with coding. No one thinks in keywords, brackets, or lines of code. Instead, you quickly build higher abstractions, and that's where you live. The upside is that those concepts have no ambiguity.
AI takes the craft out of being an IC. IMO less enjoyable.
AI takes the human management out of being an EM. IMO way more enjoyable.
Now I can direct large-scope endeavors and 100% of my time is spent on product vision and making executive decisions. No sob stories. No performance reviews. Just pure creative execution.
More concrete examples to illustrate the core points would have been helpful. As-is the article doesn't offer much - sorry.
For one, I am not sure what kind of code he writes? How does he write tests? Are these unit tests, property-based tests? How does he quantify success? Leaves a lot to be desired.
I'm excited to work on more things that I've been curious about for a long time but didn't have the time/energy to focus on.
I’m working on library code in zig, and it’s very nice to have AI write the FFI interface with python. That’s not technically difficult or high risk, but it is tedious and boring.
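For a sense of the glue involved, it's mostly ctypes boilerplate like this (library path and symbol names are placeholders, not my actual project):

```python
# Typical FFI glue the AI writes for me: tedious, low-risk.
import ctypes

lib = ctypes.CDLL("./libmylib.so")  # shared library built from the zig code

# Declare the C ABI of one exported function.
lib.add_u64.argtypes = [ctypes.c_uint64, ctypes.c_uint64]
lib.add_u64.restype = ctypes.c_uint64

def add_u64(a: int, b: int) -> int:
    """Thin, typed Python wrapper over the zig export."""
    return lib.add_u64(a, b)
```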
Realistically having a helper to get me over slumps like that has been amazing for my personal productivity.
I feel more like a software producer or director than an engineer though.
These kinds of pain points usually indicate too much architecture, or the wrong architecture. Being able to feel these kinds of things when the clanker does the work is something we must think about.
the boilerplate stuff is spot on though. the 10-type dispatch pattern is exactly where i gave up doing it manually
I hate writing proposals. It's the most mind numbing and repetitive work which also requires scrutinizing a lot of details.
But now I've built a full proposal pipeline, skills, etc. that goes from "I want to create a proposal" through collecting all the info it needs and creating a folder in Google Drive (I add all the supporting docs), to generating a React page that uses code to calculate the numbers in tables and builds an absolutely beautiful react-to-pdf PDF file.
I have a comprehensive document outlining all the work our company's ever done, made by analyzing all past proposals and past work in Google Drive, and the model references that when weaving in our past performance/clients.
It is wonderful. I can now just say things like "remove this module from the total cost" without having to edit various parts of the document (as with hand-editing code). Claude (or anything else) will just update the "code" for the proposal (which is a JSON file) and the new proposal is ready, with perfect formatting, perfect numbers, perfect tables, everything.
So I can stay high level thinking about "analyze this module again, how much dev time would we need?" etc. and it just updates things.
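To give a flavor of why this works: the proposal "code" is just structured data, so an edit like "remove this module" is a one-line change and the totals recompute deterministically. The field names below are invented, not the actual schema:

```python
# Invented schema, but this is the idea: the proposal is data,
# and totals are computed by code rather than hand-edited.
import json

proposal = json.loads("""
{
  "modules": [
    {"name": "Data pipeline", "hours": 120, "rate": 150},
    {"name": "Admin dashboard", "hours": 80, "rate": 150}
  ]
}
""")

# "Remove this module from the total cost" is just a data edit:
proposal["modules"] = [m for m in proposal["modules"]
                       if m["name"] != "Admin dashboard"]

total = sum(m["hours"] * m["rate"] for m in proposal["modules"])
print(f"Total: ${total:,}")  # tables and the PDF regenerate from this
```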
If you'd like me to do something like this with your company, get in touch :) I'm starting to think (as of this week) others will benefit from this too and can be a good consulting engagement.
Uh, no. The happy path is the easy part, with little to no thinking required. Edge cases and error handling are where we have to think hardest and learn the most.
> That includes code outside of the happy path, like error handling and input validation. But also other typing exercises like processing an entity with 10 different types, where each type must be handled separately. Or propagating one property through the system on 5 different types in multiple layers.
With AI, I feel I'm less caught up in the minutia of programming and have more cognitive space for the fun parts: engineering systems, designing interfaces and improving parts of a codebase.
I don't mind this new world. I was never too attached to my ability to pump out boilerplate at a rapid pace. What I like is engineering and this new AI world allows me to explore new approaches and connect ideas faster than I've ever been able to before.
This is the hidden superpower of LLMs: prototyping without attachment to the outcome.
Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then a few more weeks making it happen. Then if it didn't work out, it feels like failure and everyone gets frustrated.
Now it's assumed you can make it work fast - so do it four different ways and test it empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and high emotional attachment that used to dominate the decision process.
In the sense that, I was trying to explain what I wanted to do to a coworker and my manager, and we kept going back and forth trying to understand the shape of it and what value it would add and how much time it would be worth spending and what priority we should put on it.
And I was like -- let me just spend like an hour putting together a partially working prototype for you, and claude got _so close_ to just completely one-shotting the entire feature in my first prompt, that I ended up spending 3 hours just putting the finishing touches on it and we shipped it before we even wrote a user story. We did all that work after it was already done. Claude even mocked up a fully interactive UI for our UI designer to work from.
It's literally easier and faster to just tell claude to do something than to explain why you want to do it to a coworker.
I'd rather spend my time preparing for this new world now.
But if AI is capable of that it’s not a big step to being capable of doing any white collar job, and we’ll either reorganize our economy completely or collapse.
I spend tons of time handholding LLMs--they're not a replacement for thinking. If you give them a closed-loop problem where it's easy to experiment and check for correctness, then sure. But many problems are open-loop where there's no clear benchmark.
LLMs are powerful if you have the right ideas. Input = output. Otherwise you get slop that breaks often and barely gets the job done, full of hallucinations and incorrect reasoning. Because they can't think for you.
https://lighthouseapp.io/blog/introducing-lighthouse
It looks like a vibe coded website.
I run 17 products as an indie maker. AI absolutely helps me ship faster — I can prototype in hours what used to take days. But the understanding gap is real. I've caught myself debugging AI-generated code where I didn't fully grok the failure mode because I didn't write the happy path.
My compromise: I let AI handle the first pass on boilerplate, but I manually write anything that touches money, auth, or data integrity. Those are the places where understanding isn't optional.
What's worse, the more I rely on the bot, the less my internal model of the code base is reinforced. Every problem the bot solves, no matter how small, doesn't feel like a problem I solved and understanding I'd gained, it feels like I used a cheat code to skip the level. And passively reviewing the bot's output is no substitute for actively engaging with the code yourself. I can feel the brainrot set in bit by bit. It's like I'm Bastian making wishes on AURYN and losing a memory with every wish. I might get a raw-numbers productivity boost now, but at what cost later?
I get the feeling that the people who go on about how much fun AI coding is either don't actually enjoy programming or are engaging in pick-me behavior for companies with AI-use KPIs.
My work often entails tweaking, fixing, extending of some fairly complex products and libraries, and AI will explain various internal mechanisms and logic of those products to me while producing the necessary artifacts.
Sure my resulting understanding is shallow, but shallow precedes deep, and without an AI "tutor", the exploration would be a lot more frustrating and hit-and-miss.
imo, this isn't paranoid at all, and it very likely filters through the LLM, unless you provide a tool/skill and explicit instructions. Even then you're rolling the dice, and the diff will have to be checked.
Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
If a story hangs around on the front page even after you've flagged it, you can always email us (hn@ycombinator.com) and we'll take a look. That will get our attention much more quickly than a comment.