If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
It’s a bit different.
GenAI is none of that. It's not a power tool, even though it can use power tools or generate output like the above power tools do. GenAI is hiring someone else to build a birdhouse or a spice rack and then saying you had a hand in the results. It's asking the replicator for "tea, Earl Grey, hot". It's like how we elevate CEOs just because they're the face of the company, as if they actually did the work and were solely responsible for the output. There's skill in organization and direction, and not every CEO gets undeserved recognition, but it's the rare CEO who's getting their hands dirty creating something or some process, power tools or not. GenAI lets you, everyone, be the CEO.
Does money appear in my account at the end of every two weeks, and do RSUs appear in my brokerage account at the end of every vesting period?
At the end of the day, that’s what supports my addiction to food and shelter.
Not every conversation about GenAI and slop is about your eating habits.
These are astroturfed bot comments, aren't they?
Would you be happier if I said I love writing assembly language code by hand like I did in 1986?
Prior to GPS and navigation devices, you would print out the route ahead of time, and even then you would stop along the way and ask people for directions.
Post Google Maps, you follow it, and then if you know there's a better route, you choose to take a different path and Google Maps will adjust the route accordingly.
Humans are involved with assembly only because the last bits are maniacally difficult to get right. Humans might be involved with software still for many years, but it probably will look like doing final assembly and QA of pre-assembled components.
1. Looking at the contract and talking to sales about any nuances from the client
2. Talking to the client (use stakeholder if you are working for a product company) about their business requirements and their constraints
3. Designing the architecture.
4. Presenting the architecture and design and iterating
5. Doing the implementation and iterating. This was the job of myself and a team depending on the size of the project. I can do a lot more by myself now in 40 hours a week with an LLM.
6. Reviewing the implementation
7. User acceptance testing
8. Documentation and handover.
I’ve done some form of this from the day I started working 25 years ago. I was fortunate to never be a “junior developer”. I came into my first job with 10 years of hobbyist experience, having already implemented a multi-user data entry system.
I always considered coding as a necessary evil to see my vision come to fruition.
e.g. when I built a truck camper, maybe 50% was woodworking but I had to do electrical, plumbing, metalworking, plastic printing, and even networking infra.
The satisfaction was not from using power tools (or hand tools too) — those were chores — it was that I designed the entire thing from scratch by myself, it worked, was reliable through the years, and it looked professional.
LLMs serve the same purpose for me.
But I don’t at all believe that AI-assisted coding is doomed to do this to us and believe thinking so is a misread of the metaphor.
(As is lumping all of “GenAI” together.)
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
All previously mentioned levels produce deterministic results. Same input, same output.
AI generation is not deterministic. It’s not even predictable. And the example of the big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread the day using AI becomes an expectation; that will be a level of enshittification never before imagined.
In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.
If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?
Something about the but-muh-non-determinism argument completely misses the point. See my direct response: https://news.ycombinator.com/item?id=46936586
You'll notice this objection comes up each time an "OpenClaw changed my life" or, conversely, an "Agentic Coding ain't it fam" article swings by.
And my retort to you (and them) is, "Oh yeah, and so?"
What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?
If you're referring to the fact that, because of certain controls (temperature, say), LLMs don't give the same outputs for the same inputs: yeah, okay. For conversational language, that makes a meaningful difference between sounding like an ELIZA robot and sounding like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test cases to make sure the code that is magically whisked out of nowhere performs as you desire? Nothing. What's to stop you getting one agent to write the test suite (which you review for correctness) and another agent to write the code and self-correct by checking it against the test suite? Nothing.
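To make that concrete, here's a minimal sketch of the kind of test-suite gate I mean (plain Python; factorial() stands in for whatever code an agent whisks out of nowhere):

    # Minimal sketch: gate LLM-generated code on functional requirements,
    # independent of how (non-)deterministically it was produced.
    # factorial() stands in for whatever the agent emitted.

    def factorial(n: int) -> int:
        """Iterative factorial; rejects negative input."""
        if n < 0:
            raise ValueError("n must be non-negative")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    def test_factorial() -> None:
        assert factorial(0) == 1
        assert factorial(1) == 1
        assert factorial(5) == 120
        assert factorial(10) == 3628800
        try:
            factorial(-1)
            assert False, "expected ValueError for negative input"
        except ValueError:
            pass

    if __name__ == "__main__":
        test_factorial()
        print("functional requirements met, non-determinism be damned")

It doesn't matter whether two runs of the agent produce byte-identical implementations; either the output passes the suite or it doesn't.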
I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what the proponents of this argument are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, humans manage to write correct software in the first place?
I’ve also said code is prose for me.
I am not some autistic programmer either, even if these statements out of context make me sound like one.
The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at a temperature of zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
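If you want to check this empirically rather than argue about it, the probe is cheap. A sketch, assuming the OpenAI Python client (the model name is just illustrative; any hosted model works the same way):

    # Sketch: probe (non-)determinism by hashing repeated completions
    # for an identical prompt at temperature 0.
    # Assumes the OpenAI Python client; model name is illustrative.
    import hashlib
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = "Write a Python function that reverses a string."

    digests = set()
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        text = resp.choices[0].message.content
        digests.add(hashlib.sha256(text.encode()).hexdigest())

    # One unique digest would mean bitwise-identical outputs; in practice
    # you often see several, even at temperature 0.
    print(f"{len(digests)} distinct output(s) across 5 identical calls")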
Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.
In other words, if you and I always get the same results back for the same prompt (the definition of determinism), isn't that just a really, really power-hungry Google?
I think this is a distinction without a difference, we all know what we mean when way say deterministic here.
And for that matter, going back to the band saw analogy, a measure of a quality of a great band saw is, in fact, that the blade won’t snap in half in the middle of a cut. If a band saw manufacturer produces a band saw with a really low binomial p-value (meaning it is less deterministic/more stochastic) that is a pretty lousy band saw, and good carpenters will know to stay away from that brand of band saws.
To me this paints a picture of a distinction that does indeed have a difference. A pretty important difference for that matter.
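To put rough numbers on why that matters: even a small per-cut failure probability compounds quickly over a day's work. A back-of-envelope sketch (the failure rates are made up for illustration):

    # Back-of-envelope: small per-operation failure probabilities
    # compound fast. The numbers here are invented for illustration.
    def survival_probability(p_fail: float, n_ops: int) -> float:
        """Probability of completing n_ops operations with zero failures."""
        return (1 - p_fail) ** n_ops

    for p_fail in (0.0001, 0.001, 0.01):
        print(f"p_fail={p_fail}: P(1000 clean cuts) = "
              f"{survival_probability(p_fail, 1000):.4g}")
    # ~0.9048, ~0.3677, ~4.317e-05 respectively

A tool that fails one time in a hundred almost never gets you through a thousand cuts.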
Lots of the complaints about agents sound identical to things I've heard, and even said myself, about junior engineers.
That said, there's always going to need to be people who can reach below the abstraction and agentic coding loops deprive you of the ability to get those reps in.
Regardless, personally, there's no comparison between an LLM and a junior; always rather work with a junior.
I even have exactly the same discussion after it messed up, like "My code is working, ignore that failing test, that was always broken, and I definitely didn't break it just now".
The Luddites were workers who lived in an era without any social or state protections for labourers. Capitalists were using child labour to operate the looms because it was cheaper than paying anyone a fair wage. If you didn’t like the conditions, you could go work as an indentured servant for the state in the workhouses.
Luddites used organized protests in the form of collective violence to force action when they had no other leverage. People were literally shot or jailed for this.
It was a horrible part of history written by the winners. That’s why everyone thinks Luddites were against technology and progress instead of social reforms and responsibility.
That has more to do with how much demand there is for what you're doing. With software eating the world and hardware constraints becoming even more visible due to the chips situation, we can expect that there will be plenty of work for SWE's who are able to drive their coding agents effectively. Being the "top" (reasoning) or the "bottom" half is a matter of choice - if you slack off and are not highly committed to delivering quality product, you end up doing the "bottom" part and leaving the robot in the driver's seat.
Code isn’t really like that. Hand written code scales just like AI written code does. While some projects are limited by how fast code can be written it’s much more often things like gathering requirements that limits progress. And software is rarely a repeated, one and done thing. You iterate on the existing product. That never happens with furniture.
How much is coding actually the bottleneck to successful software development?
It varies from project to project. Probably in a green field it starts out pretty high but drops quite a bit for mature projects.
(BTW, "mature" == "successful", for the most part, since unsuccessful projects tend to get dropped.)
Not that I'm an AI-denier. These are great tools. But let's not just swallow the hype we're being fed.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
Yet, there is no way a product manager without any coding experience could have done it. First, the API needed to communicate to the main app correctly such as formatting, correcting data. This required human engineer guidance and experience working with expected data. AI was lost. Second, the API was designed extremely poorly. You first had to make a request, then retry a second endpoint over and over again while the Chinese API did its thing in the background. Yes, I had to poll it. I then had to do load testing to make sure it was reliable (it wasn't). In the end, I gave a recommendation that we shouldn't rely on this Chinese company and back out of the deal before we send them a huge deposit.
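For flavor, the submit-then-poll dance looked roughly like the sketch below (endpoint names, payloads, and timings are hypothetical, not the vendor's actual API):

    # Sketch of the submit-then-poll integration described above.
    # Endpoint names, payloads, and timings are hypothetical.
    import time
    import requests

    BASE = "https://api.example-vendor.com"

    def submit_and_poll(payload: dict, timeout_s: float = 120,
                        interval_s: float = 2.0) -> dict:
        """Kick off the job, then poll the status endpoint until it finishes."""
        job = requests.post(f"{BASE}/v1/jobs", json=payload, timeout=10)
        job.raise_for_status()
        job_id = job.json()["job_id"]

        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            status = requests.get(f"{BASE}/v1/jobs/{job_id}", timeout=10)
            status.raise_for_status()
            body = status.json()
            if body["state"] == "done":
                return body["result"]
            if body["state"] == "failed":
                raise RuntimeError(f"job {job_id} failed: {body.get('error')}")
            time.sleep(interval_s)  # no webhook on offer, so we wait
        raise TimeoutError(f"job {job_id} did not finish within {timeout_s}s")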
A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI. Not only that, in the last 3 years, I developed an intuition on where LLMs fail and succeed when writing code.
I still have a job. My role has changed. I haven't written more than 10 lines of code in a day for months now. Yes, it's kind of scary for software devs right now but I'm honestly loving this as I was never the kind of dev who loved the code, just someone who needed to code to get what I wanted.
I’ve spent enough time working with cross-functional stakeholders to know that the vast majority of PM (whether of the product, program, or project variety), will not be capable of running AI towards any meaningful software development goal. At best they can build impressive prototypes and demos, at worst they will corrupt data in a company-destroying level of failure.
How do you tell a computer exactly what you want it to do, without using code?
No one but seniors with years and years of experience is producing like that, as evidenced by how much the juniors I work with struggle to do the same.
Right now millions of developers are providing tons of architecture questions and answers. That's all going to be used as training data for the next model coming out in 6 months time.
This is a moat on our jobs as deep as a puddle.
If you believe LLMs will be able to do complex coding tasks, you must also concede they will be able to make the relatively simpler architecture choices, simply by asking the right questions. Something they're already starting to be able to do.
Now you've put your finger on something. Who is capable of asking the right questions?
It's not a massive jump to go from "add a button above the table, to the right, that when clicked downloads an Excel file" to "the client's asking to download an Excel file".
If you believe the LLMs will graduate from junior-level coding to senior in the next year, which they're clearly not capable of doing yet despite all the hype, there is no moat in going from coder to BA to PM.
And then you don't need middle management either.
I’ve been working for cloud consulting companies/departments for six years.
Customers were willing to pay mid level (L5) consultants with @amazon.com by their names (AWS ProServe) $x to do one “workstream”/epic worth of work. I got paid $x - Amazon’s cut in cash and RSUs.
Once I got Amazon’ed, I had to get a staff level position (senior equivalent at BigTech) at a third party company where now I am responsible for larger projects. Before I would have needed people - now I need code gen tools and my quarter century of development experience and my decade of experience leading implementations + coding.
(Color me skeptical.)
But yeah, if anybody can do it, the salaries are going to plummet. You don't need a CS degree to tell the AI to try again.
Everything just changed. Fundamentally.
If you don't adapt to these tools, you will be slower than your peers. Few businesses will tolerate that.
This is competitive cycling. Claude is a modern bike with steroids. You can stay on a penny farthing, but that's not advised.
You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
What remains to be seen is how many of us the market needs and how much the market will pay us.
I'm hoping demand and comp remain constant, but we'll see.
The one thing I will say is that we need ownership in these systems ASAP, or we'll become serfs to computing.
The management has decided that the latter is preferable for short term gains.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
That's what so many of you are not getting.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six nines, active-active, billion dollar a day systems. I'm no stranger to writing thirty page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof. Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.
I've found something every time I looked, since starting this routine.
"Glossy" might be a good word (no i don't mean literally shiny, even if they are sometimes that).
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI coding, not vibe coding.)

- You give all the specs, and there is "some" chance the LLM will generate exactly the code required.
- In most cases, you will need more than two design iterations and many small ones, like instructing the LLM to handle errors properly and recover from them gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed.

I don't know about big tech, but this is what I have to do to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find the edge cases and failure modes missed during design, then repeat from step 3 to tweak the design, or from step 4 to fix bugs.
WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages disappear.
> Bob Slydell: What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?
> Tom Smykowski: Yes, yes that's right.
> Bob Porter: Well then I just have to ask why can't the customers take them directly to the software people?
> Tom Smykowski: Well, I'll tell you why, because, engineers are not good at dealing with customers.
> Bob Slydell: So you physically take the specs from the customer?
> Tom Smykowski: Well... No. My secretary does that, or they're faxed.
> Bob Porter: So then you must physically bring them to the software people?
> Tom Smykowski: Well. No. Ah sometimes.
> Bob Slydell: What would you say you do here?
The agents are the engineers now.
It's a bit like eating junk food every day, and ah, sometimes I go see the doctor and he keeps saying I should eat healthier and lose some weight.
Your code in $INSERT_LANGUAGE is no less of a spec to machine code than english is to $INSERT_LANGUAGE.
A spec is still needed; the spec is the core problem of engineering. Too much specialization has produced job titles like $INSERT_LANGUAGE engineer, which deviated too far from the core problem, and that is being rectified now.
Then you are simply fucked. The code you deliver will contain bugs which the LLM sometimes will be able to fix and sometimes will not. And as a person who has no clue, you will have no idea how to fix them when the LLM cannot. Also, even when LLM code is correct, it can and sometimes does introduce gross performance fuckups, like using patterns with O(N²) complexity instead of O(N). Again, as a clueless person, you are fucked. And if one goes into areas like concurrency and multithreading optimizations, one gets fucked even more. I can go on and on with many more particular ways to get screwed.
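To make the complexity point concrete, here's the classic accidental-quadratic pattern in Python (illustrative, not from any particular LLM session):

    # The O(N^2)-instead-of-O(N) trap: membership tests against a
    # list inside a loop, vs. a set.
    def dedupe_quadratic(items: list[str]) -> list[str]:
        seen = []
        for item in items:
            if item not in seen:  # O(N) scan per item -> O(N^2) overall
                seen.append(item)
        return seen

    def dedupe_linear(items: list[str]) -> list[str]:
        seen: set[str] = set()
        out = []
        for item in items:
            if item not in seen:  # O(1) average lookup -> O(N) overall
                seen.add(item)
                out.append(item)
        return out

Both are "correct", and a clueless reviewer will wave both through; only one of them survives contact with a million rows.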
For a person who can hand code, AI becomes an amazing tool. For me, it helps immensely.
There are few skills that are both fun and highly valued. It's disheartening if it stops being highly valued, even if you can still do it in private.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
I'm not pretending. I'm only sad.
Right now the only way to save time with LLMs is to trust the output and not review it. But if you do that, you're just going to produce crappy software.
- documentation for well-known frameworks and libs, "how do I do [x] in [z]?" questions
- port small code chunks from one language to another
- port configuration from one software to another (example: I got this Apache config, make me the equivalent in NGinX)
Which is already pretty cool if you don't think about the massive amount of energy spent on this, but definitely not the "10x" productivity boost I hear about.

> trust the output and not review it
Not gonna happen here :)
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
It's pretty surprising to see people on this site (mostly programmers, I assume) think of code in terms of quantity. I always thought developers believed the less code the better.
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually is increasing productivity or quality. In fact, in study after study, they continue to show that speed and quality actually go down with AI.
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw; the outcome is what matters. In the same vein, if you’re too sloppy cutting with the circular saw, you’re going to get kicked off the site too. Just keep in mind a home made from dimensional lumber is at the bottom of the precision scale. The software equivalent of a rapper’s website announcing a new album.
There are places where precision matters, building a nuclear power plant, software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
Nevertheless, the main motivator for me has been always the final outcome - a product or tool that other people use. Using AI helps me to move much faster and frees up a lot of time to focus on the core which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and it becomes reality within seconds. With the collective effort of everyone involved in building the AI tools, and with the incentives aligned (as they are right now), the progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
(*) It's a Polish cartoon from the 60s/70s (no language barrier) - https://www.youtube.com/watch?v=-inIMrU1t7s
There are two attitudes stemming from the LLM coding movement: that of those who enjoy the craft of coding MORE, and that of those who enjoy seeing the final output MORE.
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion
If this doesn't signal a bubble I don't know what does
There's going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there's still going to be jobs resembling senior level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, one that has seen minimal progress in, what, 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there's a lot more engineering jobs than people think that AREN'T primarily doing this.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Who are "we" and why do "we" "pretend"?

> Will they ever carve furniture by hand for a business? Probably not.

This is definitely a stretch.

Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars; the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
I take issue even with this part.
First of all, not all furniture can be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone the designs (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and it's rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture measuring by the number of chairs produced per minute is disingenuous. This ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated to the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
It's at least possible that we would eventually do a rollback to status quo and swear to never devalue human knowledge of the problems we solve.
Love this way of putting it. I hate that we can mostly agree that devaluing expertise of artists or musicians is bad, but that devaluing the experience of software engineers is perfectly fine, and actually preferable. Doing so will have negative downstream effects.
Psst ==> https://www.youtube.com/watch?v=k6eSKxc6oM8
MY project (MIT licensed) ...
Eg in my team I heavily discourage generating and pushing generated code into a few critical repositories. While hiring, one of my points was not to hire an AI enthusiast.
"What did you used to do?"
"Programming. You?"
"I was a lawyer."
The cult has its origins in Taylorism - a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment into deep long term grokking of a project will allow me to surpass primarily LLM projects over the long term in raw velocity.
It also stands to reason that any task that i deem to NOT further my goal of learning or deep understanding that can be done by an LLM i will use the LLM for it. And as it turns out there are a TON of those tasks so my LLM usage is incredibly high.
I have never thought of that aspect! This is a solid point!
I generally frame this as: Are you optimizing for where you will be in 6 months, or 2 years?
Successful projects: quite often much longer than 10 years
Code quality doesn't matter until lots of people start using what you wrote and you need to maintain/extend/change it
God it's a depressing thought that whatever work you do is just a throwaway no-one will use. That shouldn't be your end goal
I didn't say that.
In fact if your code doesn't significantly change over time it probably means your project wasn't successful.
That's one of the biggest benefits of software quality and the long-term investment: how easy is your thing to change?
If anything I'd claim using LLMs can actually free up your time to really focus on the proper design of the software.
I think the disconnect here is that people bashing LLMs don't understand that any decent engineer isn't just going around vibe coding, but instead creating a well thought design (with or without AI) and using LLMs to speed up the implementation.
In addition, for my old product which is 5+ years old, AI now writes 95%+ of code for me. Now the programming itself takes a small percentage of my time, freeing me time for other tasks.
This is proving GP's point that you're going off feels and/or exaggerating
From a user perspective, I often implement a feature and then just throw it away, no worries, because I can reimplement it in an hour based on my findings. No sunk cost. Also, I can implement very small details that I'd otherwise have to backlog. This leads to a higher-quality product for the user.
From a code standpoint I frequently do large refactors that also would never have been worth it by hand. I have a level of test coverage that would be infeasible for a one man show.
Boring stuff from a programming standpoint but stuff that helps businesses so they pay for it.
I feel more lost and unsure instead of good - because I didn't write the code, so I don't have its internal structure in my head and since I didn't write it there's nothing to be proud of.
I tried writing a small utility library using Windows Copilot, just to get some experience with the tech (OK, not the highest tech, but I am 73 this year), and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
We don't stand a chance and we know it.
Drugs, alcoholism, overeating, orgies, doom scrolling, gambling.
Addictions are a problem or danger to humans, no doubt. But we don't stand a chance? I'm not sure the evidence supports your argument.
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
At least when I write by hand, I have a deep and intimate understanding of the system.
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 loc that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
Really, I'd rather have AI generate a codegen script that deterministically does the struct-from-schema generation
I've had enough instances where it's slid in a subtle change like adding "ing" to a field name to not fully trust it
I've spent a lot of my career cleaning up stuff like that, I guess with AI we just stop caring?
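What I mean is something in the spirit of this sketch, which the AI writes once and I review once; the schema format and field names here are illustrative:

    # Deterministic codegen: same schema in, same code out.
    # Schema format and names are illustrative.
    SCHEMA = {
        "User": {"id": "int", "name": "str", "created_at": "str"},
        "Order": {"id": "int", "user_id": "int", "total_cents": "int"},
    }

    def emit_dataclass(name: str, fields: dict[str, str]) -> str:
        lines = ["@dataclass", f"class {name}:"]
        lines += [f"    {field}: {type_}" for field, type_ in fields.items()]
        return "\n".join(lines)

    if __name__ == "__main__":
        print("from dataclasses import dataclass\n")
        print("\n\n".join(emit_dataclass(n, f) for n, f in SCHEMA.items()))
        # No "ing" quietly appended to a field name on the fifth run.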
This is how ridiculous workflows evolve, but it really isn't AI's fault.
Us humans are expensive part of the machine.
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
while I have more time to do what?
For work, I regularly have 2-4 agents going simultaneously, churning on 1-3 features, bug fixes, and doc updates. I pop between them in the "down time", or am reviewing their output, or am preparing the requirements for the next thing, or am reviewing my coworkers' MRs.
Plenty to do that isn't doom scrolling.
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand then perhaps the author is onto something, if however we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
It’s going to take a while.
1. The thing to be written is available online. AI is a search engine to find it, maybe also translate it to the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise and the AI is just doing the typing. This is, at best, working around syntax issues, such as some hard-to-remember particular SQL syntax or something like that. The languages should be better.
3. It's neither new nor available online, but a lot to type out and modify. The AI does all the boilerplate. This is a failure of the frameworks and languages, that they require so much boilerplate.
I sometimes dread writing code that's in a state of bad disrepair or is overly complex, think a lot of the "enterprise" code out there - it got so bad that I more or less quit a job over it, though never really stated that publicly, alongside my mind going dark places when you have pressure to succeed but the circumstances are stacked against you.
For a while I had a few Markdown files that went into detail exactly why I hated it, in addition to also being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change, often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
I think the "10 lines of code" people worry their jobs are now becoming obsolete. In cases where the code required googling how to do X with Y technology, that's true. That's just going to be trivially solvable. And it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code are extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc.
3. We're adapting a general solution (ie a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code around these fewer bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10-lines-of-code camp, but that group is the least afraid of the fictionalized career threat. The people who obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones who change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
At the same time I make enough silly mistakes hand coding it feels irresponsible to NOT have a coding LLM generate code. But I look at all the code and (gasp) make manual changes :)
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allows for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
You do remember how easy it is to do `git clone`?
The question to me becomes: is the PM -> engineering handoff outdated? Should they be the same person? Does it collapse to one skill set for this work?
If they don’t like it, take it away. I just won’t do that part because I have no interest in it. Some other parts of the project I do enjoy working on by hand - at least setting up the patterns I think will result in simple, readable flow, reduce potential bugs, etc. AI is not great at that. It’s happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
Also “pull records from table X and display them in a data grid. Include a “New” button and associated functionality respecting column constraints in the database. Also add an edit and delete button for each row in the table”. God, it’s really nice to have an LLM get that 85% of the way done in maybe 2 min.
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
> What are you adding to the mix here? Your prompting skills?
The answer to that is an unironic and dead-serious "yes!".
My colleagues use Claude Opus and it does an okay job but misses important things occasionally. I've had one 18-hour session with it and fixed 3 serious but subtle and difficult to reproduce bugs. And fixed 6-7 flaky tests and our CI has been 100% green ever since.
Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.
I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
In that ~5h session Opus fixed another extremely annoying bug and found mistakes in tests and corrected them after correcting the production code first and making new tests.
Opus can be scary good but you must not handwave anything away.
I found love for being an architect ever since I started using the newest generation [of scarily smart-looking] LLMs.
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of which concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to explore manually, think my way into the problem space, and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize and keep busy agents in parallel or when you're off-keyboard. Delegate, basically. There's quite a lot of things they can do already that you really don't need to do because the outcome is completely predictable. I feel that it's possible to actually increase the hours/day focussing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
I think, though, that churning out lines is probably better for your career. It takes longer to radically simplify, and people don’t always appreciate the effort. If instead you go the other way and increase scope and time and complexity, that is more likely to result in rewards for the greater effort.
I also would rather a project take longer and struggle through it without using AI as I find joy in the process. But as I said in my original post I understand that type of work appears to be coming to an end.
The reason Claude code or Cursor feels addictive even if it makes mistakes is better illustrated in this post - https://x.com/cryptocyberia/status/2014380759956471820?s=46
But I guess that's nothing new.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
I can’t really relate but I can understand it.
I wonder who follows. Perhaps it has already happened. I look at the code but there are people who build their businesses as English text in git. I don’t yet have the courage.
For me, LLMs have been a tremendous boon in terms of learning.
It's so ironic because computers/computer programs were literally invented to avoid doing grunt work.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade, my responsibilities have involved delegating the work to other developers or even outsourcing an entire implementation to another company, like a Salesforce implementation.
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore. I'll spend the time reviewing but only the important things.
Once I got to the point where I was delegating complete implementations to seniors with just “this is a high level idea of what Becky’s department wants. You now know as much as I do. If you have any business related questions, go ask Becky and come back to me with a design; these are our only technical constraints”. Then two weeks later there are things I might have done differently. But it meets all of the functional and non-functional requirements. I bite my tongue and move on.
His team is going to be responsible for it.
Now I don’t treat AI as a senior developer. I treat it as a mid-level ticket taker. If there is going to be a feature change, I ain’t doing it any more. The coding agent is. I am just going to keep good documentation in various MD files for context.
In the other comment, I meant that other reviewers who used to nitpick have stopped for whatever reason, maybe because overall people are busier now.
I think we should be worrying about more urgent things, like a worker doing the job of three people with ai agents, the mental load that comes with that, how much of the disruption caused by ai will disproportionately benefit owners rather than employees, and so on.
And others are not able to believe the (not extreme) but visible speed boost from pragmatic use of AI.
And sadly, whenever the discussion about the collective financial disadvantage of AI to software engineers starts, and wherever it goes…
The owners and employers will always make the profits.
It absolutely is.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like the guys who left it all to Claude and even run 5x in parallel give.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
Succinctly: process over product.
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
There’s talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war. There are two camps. But the Overton Window is defined with the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[1] https://news.ycombinator.com/item?id=44159648
[2] You can let real nations come to mind here
This was taken from the formerly famous (and controversial among Khmer Rouge obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
I almost never agree with the names Claude chooses, I despise the comments it adds every other line despite me telling it over and over and over not to, and oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output directly to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand written code will be something nobody does professionally here soon" -- like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
Bean counters don't care about creativity and art though, so they'll never get it.