It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.
A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.
It's like how every job requires math if you make it far enough.
If they could write exactly what I wanted but faster, I'd probably stop writing code any other way, because that would be a free win with no downside, even if the win is small! They don't write exactly what I want, though, so the tradeoff is whether the time they save me writing is lost again to the extra time spent debugging code they wrote rather than my own. It's not clear to me that the code produced by an LLM right now is close enough to correct, often enough, for this to be a net increase in efficiency for me. Most of the arguments I've seen for why I should invest more of my own time into learning these tools are based on extrapolating the trend up to this point, and it's still not clear to me that they'll become good enough to reach a positive ROI for me any time soon. Maybe if the effort to start using them more heavily were lower I'd be willing to try it, but from what I can tell, it would take a decent amount of work for me to get to the point where I'm producing anything close to what I'm currently producing, and I don't really see the point of doing that if it's still an open question whether the remaining gap will ever close.
Imperfectly fixing obvious problems in our processes could gain us 20%, easy.
Which one are we focusing on? AI. Duh.
All I had to do was write a two-line prompt and accept the pull request. It probably took 10 minutes out of my day, most of which was the people I was helping explaining what they thought was wrong. I think it might've taken me all day if I'd had to go through all the code and the documentation and fix it myself. It might even have taken a couple of days, because I probably would've made it less insane.
For other tasks, like when I'm working on embedded software, using AI would slow me down significantly. Except when the specifications are in German.
All OSS has been ingested, and all the discussion in forums like this about it, and the personal blog posts and newsletters about it; and the bug tracking; and the pull requests, and...
and training etc. is only going to get better at filtering out what is "best."
At best, what I find online are basic day 1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.
There is barely anything that qualifies as documentation, and what little exists they are only willing to provide under NDA, for lock-in reasons/laziness (an ERPish sort of thing narrowly designed for the specific sector, and more or less part of a duopoly).
The difficulty in developing solutions is 95% understanding business processes/requirements. I suspect this kind of thing becomes more common the further you get from a "software company" into specific industry niches.
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.
My judgement is built into the time it takes me to code. I think I would be spending the same amount of time applying it while reviewing the AI's code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
Speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.
Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...
The standard library is FUCKING HUGE!
Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ... Now, if I say something to my LLM like:
> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?
I'm able to ... "from physics import Torque, Energy, dimensional_analysis" And that part of the stdlib was written in 1922 by Bridgman!
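To make the "dimensional twins" bit concrete: torque and energy both reduce to kg·m²/s², so plain dimensional analysis can't tell them apart on its own. A minimal sketch in plain Python (there is no actual `physics` stdlib module; every name below is invented for illustration):

```python
# Minimal sketch of the "dimensional twins" problem in plain Python.
# All names here are made up for illustration only.
from collections import namedtuple

# Exponents over the SI base dimensions we care about: mass, length, time.
Dim = namedtuple("Dim", ["kg", "m", "s"])

ENERGY = Dim(kg=1, m=2, s=-2)   # joule = N*m (force along a displacement)
TORQUE = Dim(kg=1, m=2, s=-2)   # N*m (force at a lever arm)

# Pure dimensional analysis can't separate the twins:
assert ENERGY == TORQUE

# One common workaround: treat plane angle as its own dimension, so torque
# becomes energy per radian and the two come apart.
DimA = namedtuple("DimA", ["kg", "m", "s", "rad"])
ENERGY_A = DimA(kg=1, m=2, s=-2, rad=0)
TORQUE_A = DimA(kg=1, m=2, s=-2, rad=-1)   # i.e. joules per radian
assert ENERGY_A != TORQUE_A
```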
And extremely buggy, and impossible to debug, and does not accept or fix bug reports.
AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.
I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.
And come on: AI definitely will become better as time goes on.
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
The point is still valid, although I've seen it made many times over.
But at this point I'm not confident that I'm catching all the LLM-generated text, or that I'm not flagging false positives.
Unlikely. AI keeps improving, and we are already at the point where real people are accused of being AI.
Clever pitch. Don't alienate all the people who've hitched their wagons to AI, but push valuing highly-skilled ICs as an actionable leadership insight.
Incidentally, strategy and risk management sound like a pay grade bump may be due.
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”
But what the article actually discusses, and demonstrates by the end, is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like an example of a revealed preference in this case. If you take it back to the first part of the original quote above, it's just a different wording for AI being the code writer and engineering being something different.
I think what the article really means to argue against is the claim/conclusion "because AI can generate lots of code, we don't need any kind of engineer," but that's just not what the quote they chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample; Bun had just already changed the way they do engineering, so the AI wrote the code and the engineers did the other things.
And what about vibe coding? The whole point and selling point of many AI companies is that you don’t need experience as a programmer.
So they sell something that isn't true: it's not FSD for coding, it's driving assistance.
The house of the feeble minded: https://www.abelard.org/asimov.php
> Tighten the causal claim: “AI writes code → therefore judgment is scarce”
As one of the first suggestions, so it's not something inherent to whether the article used AI in some way. Regardless, I care less about how the article got written and more about what conclusions really make sense.
> The Bun acquisition blows a hole in that story.
> That contradiction is not a PR mistake. It is a signal.
> The bottleneck isn’t code production, it is judgment.
> They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
> Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands.
Not to mention the gratuitous italics-within-bold usage.
I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.
When I find myself thinking “I wonder what the prompt was they used?” while reading the content, I can’t help but become skeptical about the quality of the thinking behind the content.
Maybe that’s not fair, but it’s the truth. Or, put differently, “Fair? No. Truthful? Yes.” Ugh.
Technically, there’s still a horse buggy whip market, an abacus market, and probably anything else you think technology consumed. It’s just a minuscule fraction of what it once was.
All the last productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And what saturated it is one of the least impactful "revolutionary" tools we got in our profession?
Keep in mind that looking at statistics won't lead to any real answer, everything is manipulated beyond recognition right now.
Also I do hold a belief that most tech companies are taking a cost/labor reduction strategy for a reason, and I think that’s because we’re closing a period of innovation. Keeping the lights on, or protecting their moats, requires less labor.
This AI craze swooped in at the right time to help hold up the industry and is the only thing keeping it together right now. We're quickly trying to build all the low-hanging fruit for it, keeping many developers busy (although not like it used to be), but there isn't much low-hanging fruit to build. LLMs don't have the breadth of need like previous computing revolutions had. Once we've added chat interfaces to everything, which is far from being a Herculean task, all the low-hanging fruit will be gone. That's quite unlike previous revolutions where we had to build all the software from scratch, effectively, not just slap some lipstick on existing software.
If we want to begin to relive the past, we need a new hardware paradigm that needs all the software rewritten for it again. Not an impossible thought, but all the low-hanging hardware directions have also been picked at this point so the likelihood of that isn’t what it used to be either.
They didn't. But it may be a relevant point that all of that was slow enough to spread that we can't clearly separate them.
Anyway, the idea that any one of those large markets is at saturation point requires some data. AFAIK, anything from mainframe software to phones has (relatively) exploded in popularity every time somebody made them cheaper, so that is a claim that all of those just changed (too recently to measure), without any large thing to correlate them.
> That's quite unlike previous revolutions where we had to build all the software from scratch
We have rewritten everything from scratch exactly once since high-level languages were created in the 70s.
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks are not understanding that. What we're doing at the end of 2025 looks so much different than what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.
I use AI as a smart autocomplete. I’ve tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor. In the best case it’s gone on a tangent, but in the most common case it’s assumed something (oftentimes directly contradicting what I’ve asked it to do), gone with it, and lost the plot along the way. Of course, when I correct it, it says “you’re right, X doesn’t exist so we need to do X”…
Has it made me faster? Yes. Has it changed engineering? Not even close. There’s absolutely no world where I would trust what I’ve seen out of these tools to run in the real world, even with supervision.
In startups I’ve competed against companies with 10x and 100x the resources and manpower on the same systems we were building. The amount of code they theoretically could push wasn’t helping them, they were locked to the code they actually had shipped and were in a downwards hiring spiral because of it.
I can’t see how buying a runtime for the sake of Claude Code makes sense.
This argument requires us to believe that AI will just asymptote and not get materially better.
Five years from now, I don't think anyone will make these kinds of acquisitions anymore.
I assume this is at least partially a response to that. They wouldn't buy a company now if it would actually happen that fast.
That's not what asymptote means. Presumably what you mean is the curve levelling off, which it already is.
It hasn't gotten materially better in the last three years. Why would it do so in the next three or five years?
I don’t know why the acquisition happened, or what the plans are. But it did happen, and for this we don’t have to suspend disbelief. I don’t doubt Anthropic has plans that they would rather not divulge. This isn’t a big stretch of imagination, either.
We will see how things play out, but people are definitely being displaced by AI software doing work, and people are productive with them. I know I am. The user count of Claude Code, Gemini and ChatGPT don’t lie, so let’s not kid ourselves.