Before, if you had a vague spec you'd write a small prototype to clarify your thinking. Now you can have a complete implementation in minutes — but you still have an unclear spec. You've just moved the uncertainty forward in the process, where it's more expensive to catch.
The teams I've seen use LLMs well treat the output as a rough draft that requires real review, not a finished product. The teams that get into trouble treat generation speed as the goal. Both groups produce the same lines of code. Very different results.
Yes, and knowing what to write has always been the more important challenge, long before AI. But one thing I’ve noticed is that in some cases, LLMs help me try out and iterate on more concepts and design ideas than I did before. I can try the thing I thought was going to work, see the downsides I didn’t anticipate, and then fix it or tear it down and try something else. That was always possible, but with LLMs the cycle feels much easier and much faster, and I go through more rough-draft iterations than I used to. I’m trying more ideas than I would have otherwise, and in many cases it seems to lead to a stronger foundation on which to take the draft through review to production. It’s far more reviewing and testing than before, but in short: there may be an important component of the speed of writing code that feeds into figuring out what to write. Yes, we should absolutely focus on priorities, requirements, and quality, but we also shouldn’t underestimate the impact that iteration speed can have on those goals.
We need a comparison between an LLM and an experienced engineer reviewing a junior's system design for some problem. I imagine the LLM will be way too enthusiastic about whatever design is presented and will help force poor designs into working shape.
I’d never done half as much code profiling and experimenting before. Now that generating one-shot code is cheap, I can send the agent off on a mission to find slow code and attempt to speed it up. Only once it has shown that a speedup is there and reasonably attainable do I need to think about how to speed the code up “properly”. The expected value was too low when experimenting was expensive.
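The cheap-experiment loop described above amounts to: let the agent propose a faster version, verify it is actually equivalent, and measure before investing in the “proper” fix. A minimal sketch of that workflow (the hotspot and functions here are hypothetical, not from the comment):

```python
import timeit

# Hypothetical hotspot: a Python-level loop summing squares.
def slow_sum_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Candidate one-shot rewrite an agent might propose: the closed form
# for 0^2 + 1^2 + ... + (n-1)^2.
def fast_sum_squares(n):
    return (n - 1) * n * (2 * n - 1) // 6

# First verify the rewrite is equivalent; timings of wrong code are worthless.
assert slow_sum_squares(10_000) == fast_sum_squares(10_000)

# Then measure. Only if the speedup is real and large enough is a
# "proper" rewrite worth the effort.
t_slow = timeit.timeit(lambda: slow_sum_squares(10_000), number=200)
t_fast = timeit.timeit(lambda: fast_sum_squares(10_000), number=200)
print(f"speedup: {t_slow / t_fast:.1f}x")
```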
I could write a lot about what I’ve tried and learnt, but so far this article is a very based view and matches my experience.
I definitely suffered under the unnecessary complexity, and at moments wished I’d never used AI. Even with Opus 4.6 I could feel how confused it was and how it couldn’t really understand the business objectives. It became much faster to jump into the code, clean it up, and fix it myself. I’m not sure yet where the line is, or where it will be.
It might not actually deliver working things all that much faster than I could, but I don't feel mentally drained by the process either. I used to spend a lot of time reading architecture docs in order to understand available solutions, now I can usually get a sense for what I need to know just from asking ChatGPT how certain things might be done using X tool.
In the last few days, I've stood up syncthing, tailscale with a headscale control plane, and started making working indicators and strategies in PineScript, TradingView's automated trading platform. Things I had no energy for or would have been weeklong projects take hours or a day or so. AI's strengths synergize really well with how humans want to think.
I just paste an error message in, and ChatGPT figures out what I'm trying to do from context, then gives me not just a possible resolution, but also why the error is happening. The latter is just as useful as the former. It's wrong a lot, but it's easy to suss out.
- guardrails are required to generate useful results from GenAI. This should include clear instructions on design patterns, testing depth, and iterative assessments.
- architecture decision records are one useful way to prevent GenAI from being overly positive.
- very large portions of code can be completely regenerated quickly when scope and requirements change. (skip debugging - just regenerate the whole thing with updated criteria)
- GenAI can write thorough functional and behavioral unit tests. This is no longer a weakness.
- You must suffer the questions and approvals. At no time can you let agents run for extended periods on progressive sets of work. You must watch what is generated. One thing that concerns me about the new 1M-token context on Claude Code is that many will double down on agent freedom. You can’t. You must watch the results and examine functionality regularly.
- No one should care about actual code ever again. It’s ephemeral. The role of software engineering is now molding features and requirements into functional results. Choosing Rust, C#, Java, or TypeScript might matter depending on the domain, but then you stop caring and focus on measuring success.
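On the testing bullet above: a “behavioral” unit test in this sense asserts only on observable behavior, not implementation details, so a wholesale regeneration of the code only has to preserve that behavior to pass. A minimal sketch (the cart API is hypothetical, invented for illustration):

```python
import unittest

# Hypothetical implementation under test. A regenerated version only has
# to preserve the observable behavior asserted below to pass.
class Cart:
    def __init__(self):
        self._items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("qty must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())

class CartBehaviorTest(unittest.TestCase):
    # Behavioral assertions: no peeking at _items or other internals.
    def test_adding_same_sku_accumulates(self):
        cart = Cart()
        cart.add("apple", 2)
        cart.add("apple", 3)
        self.assertEqual(cart.total_items(), 5)

    def test_rejects_non_positive_quantity(self):
        with self.assertRaises(ValueError):
            Cart().add("apple", 0)
```

Run with `python -m unittest`. Because nothing here touches internals, the same suite acts as the regeneration guardrail the bullets describe.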
My experience is rolled up in https://devarch.ai/ and I know I get productive and testable results using it everyday on multiple projects.
Caveat: it still works best in a codebase that is already good. So while any one line of code is ephemeral, how is the overall codebase trending? Towards a bramble, or towards a bonsai?
If the software is small and not mission critical, it doesn’t matter if it becomes a bramble, but not all software is like that.
A good codebase depends on the business context, but in my case it’s an agile one that can react to discovered business cases. I’ve written great typed helpers that practically give me typed Mongo operators for most cases. It makes all operations really smooth. AI keeps finding creative ways of avoiding my implementations, and over time there are more edge cases, thin wrappers, lint-ignore comments, and other funny exceptions, while I’m losing the guarantees I built...
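For illustration, the kind of typed helper being described might look like the following sketch in Python (the schema and names are my own assumptions, not the commenter's actual helpers): a builder that only accepts known fields and emits standard Mongo operator documents, so any AI-generated code that bypasses it gives up the field-name checking.

```python
from dataclasses import dataclass, field

# Hypothetical document schema; a real version might derive this from
# your models instead of hard-coding it.
USER_FIELDS = {"name", "email", "login_count"}

@dataclass
class TypedUpdate:
    """Builds a Mongo-style update document, rejecting unknown fields."""
    _ops: dict = field(default_factory=dict)

    def set(self, field_name: str, value):
        if field_name not in USER_FIELDS:
            raise KeyError(f"unknown field: {field_name}")
        self._ops.setdefault("$set", {})[field_name] = value
        return self

    def inc(self, field_name: str, amount: int = 1):
        if field_name not in USER_FIELDS:
            raise KeyError(f"unknown field: {field_name}")
        self._ops.setdefault("$inc", {})[field_name] = amount
        return self

    def build(self) -> dict:
        return self._ops

# Produces a plain dict you could pass to a Mongo driver's update call.
update = TypedUpdate().set("name", "Ada").inc("login_count").build()
# update == {"$set": {"name": "Ada"}, "$inc": {"login_count": 1}}
```

Raw `{"$set": {...}}` dicts scattered through the code silently accept typo'd field names; routing every update through a wrapper like this is what the lost guarantees refer to.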
> very large portions of code can be completely regenerated quickly when scope and requirements change.
This is complete and utter nonsense coming from someone who isn't actually sticking around maintaining a product long enough in this manner to see the end result of this.
All of this advice sounds like it comes from experience instead of theoretical underpinning or reasoning from first principles. But this type of coding is barely a year old, so there's no way you could have enough experience to make these proclamations.
Based on what I can talk about from decades of experience and study:
No natural language specification or test suite is complete enough to allow you to regenerate very large swaths of code without changing thousands of observable behaviors that will be surfaced to users as churn, jank, and broken workflows. The code is the spec. Any spec detailed enough to allow 2 different teams (or 2 different models or prompts) to produce semantically equivalent output is going to be functionally equivalent to code. We as an industry have learned this lesson multiple times.
I'd bet $1,000 that there is no non-trivial commercial software in existence where you could randomly change 5% of the implementation while still keeping to the spec and it wouldn't result in a flood of bug reports.
The advantage of prompting in a natural language is that the AI fills in the gaps for you. It does this by making thousands of small decisions when implementing your prompt. That's fine for one offs, and it's fine if you take the time to understand what those decisions are. You can't just let the LLM change all of those decision on a whim, which is the natural result of generating large swaths of code, ignoring it, and pretending it's ephemeral.
I think this has always been the case. "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Perhaps you mean that they shouldn't worry about structures and relationships either, but I think that is a fool's errand. Although, to be fair, neither of those needs to be codified in the code itself — but ignore them at your own peril...
Whilst the author clearly has a belief that falls down on one side of the debate, I hope folks can engage with the "Should we abandon everything we know" question, which I think is the crux of things. Evidence that AI-driven-development is a valuable paradigm shift is thin on the ground, and we've done paradigm shifts before which did not really work out, despite massive support for them at the time. (Object-Oriented-Everything, Scrum, etc.)
I am fully on board with gen AI representing a paradigm shift in software development. I tried to be careful not to take a stance on other debates in the larger conversation. I just saw too many people citing how much code they're generating as proof statements when discussing LLMs. I think that, specifically (i.e., using LOC generated as the basis of any meaningful argument about effectiveness or productivity), is a silly thing to do. There are plenty of other things we should discuss besides LOC.
I wonder if you have a take on measuring productivity in light of the potential difficulty of achieving good outcomes across the general population?
You mention in the second appendix (which I skipped on my first read) that you are a rather experienced LLM user, with experience in all the harnesses and context-management techniques touted as "best practice" nowadays. Given the effort this seems to take, do you think we're vulnerable to mis-measuring?
My mind is always thrown to arguments about Agile, or even Communism. "True Communism has never been tried" or "Agile works great when you do it right", which are still thrown about in the face of evidence that these things seem impossible, or at least very difficult, to actually implement successfully across the general population. How would we know if AI-driven-development had a theoretical higher maximum "productivity" (substitute with "value", "virtue", "the general good", whatever you want here) than non AI-driven-development, but still a lower actual productivity due to problems in adoption of the overall paradigm?
That is an unsatisfying answer. I can point to anecdotes that suggest AI is hurting productivity or improving it, but those don't make an argument. And the extremes on either side make it very difficult to consider. How do you weigh "An LLM deleted my production database" against "I built a business on the back of AI-assisted software"?
I think we have to wait and see. And we should revisit questions of cost and value continuously, not just about LLMs, but generally in life. Most of my motivation (though not an overwhelming majority) around using LLMs right now is a mix of curiosity and wanting to avoid the fate of the steam shovel.
> our profession is under threat.
It is. But I don't think it's AI that threatens it. It's hype-susceptible people who, unfortunately, have power over other people's jobs: C-level management who know nothing better than parroting what others in the industry are saying. How is that "all engineers will be replaced in 6 months" going?

For very small projects, code may be the main bottleneck. Just writing the code takes most of the time, so adding code faster can accelerate development.
For larger projects, design, integration, testing, feature discovery, architecture, bug fixing, etc. takes most of the time. Adding code faster may slow down development and create conflicts between teams.
Discussing without a common context makes no sense in this situation.
So, depending on your industry and the size of the projects that you have worked on one thing or the other may be true.
If an activity (getting code into source files) used to take up <50% of the time of programmers, then removing that bottleneck cannot even double the throughput of the process. This is not taking into account non-programmer roles involved in software development. This is akin to Amdahl's law when we talk about the benefits of parallelism.
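The Amdahl-style bound is easy to make concrete: if getting code into files is a fraction p of total delivery time and that part is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p/s), which caps out at 1 / (1 - p) even as s goes to infinity. A quick sketch:

```python
def max_speedup(p, s=float("inf")):
    """Amdahl's law: overall speedup when a fraction p of the work is
    accelerated by factor s and the remaining (1 - p) is untouched."""
    return 1.0 / ((1.0 - p) + p / s)

# If typing code is 50% of the job, even infinitely fast codegen
# at most doubles throughput.
print(max_speedup(0.5))            # 2.0
# If it's 30% of the job, the ceiling is about 1.43x.
print(round(max_speedup(0.3), 2))  # 1.43
```

The fractions above are illustrative, not measurements; the point is only that the ceiling is set by the work codegen doesn't touch.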
I made no argument with regard to threat to the profession, and I make none here.
There is also a set of codebases in which LLMs are one-shotting correct code and even finding edge cases that would have been hard to catch in human review.
At a surface level, it seems obvious that legacy codebases tend to fall in the first category and more greenfield work falls in the second category.
Perhaps, this signals an area of study where we make codebases more LLM-friendly. It needs more research and a catchy name.
Also, certain things that we worry about as software artisans, like abstractions, reducing repeated code, naming conventions, argument ordering, and so on, are not a concern for LLMs, as long as LLMs are consistent in how they write code.
For example, one was taught that it is bad to have multiple "foo()" implementations. In the LLM world, it isn't _that_ bad. You can instruct the LLM to "add feature x and fix all the affected tests" (or even better, "add feature x to all foo()"), and if feature x relies on "foo()", it fixes every foo() method. This is a big deal.
I have come to the realization that most people in the industry don't know this body of knowledge, or even that it exists.
I'm now seeing the same people trying to solve their ineffectiveness with AI.
I don't know what to think about this situation. My intuition hints at it not being good.
It’s quite similar with code, and with code less is more, at least for tries 1 and 2.
But there’s a more important difference: I can’t spin up 20 decent human programmers from my terminal.
The argument that "code was never the bottleneck" is genuinely appealing, but it hasn’t matched my experience at all. I’m getting through dramatically more work now. This is true for my colleagues too.
My non-technical niece recently built a pretty solid niche app with AI tools. That would have been inconceivable a few years ago.
We need to address Jevons' Paradox somehow.
Definitely would entertain -- I do agree with your framing. I just think the article undersells the impact of fast+cheap codegen.
Lowering the cost of implementation will expose (and already has exposed) new bottlenecks elsewhere. But IMHO many of those bottlenecks probably weren’t worth serious investment in solving before. The codegen change will shift that.
Seeing the other bottlenecks starting to be taken seriously now, but (if I'm to be petulant) all the "credit" for solving the code bottleneck being taken by LLM systems, is painful, especially when you are in a local domain where the codegen bottleneck doesn't matter very much and hasn't for a long time.
I suspect engineers that managed to solve the code generation bottlenecks are compulsive problem solvers, which exacerbates the issue.
That isn't to say there aren't some domains where it still does matter. I'm dubious that LLM codegen is the best solution there, but I am not dubious that it is at least a solution.
AI, if anything, is amazing at collaborating. It's not perfectly aligned, but you sure can get it to tell you when your idea is unsound, all while having lessened principal-agent issues. Anything we can do to minimize the number of people that need to align towards a goal, the more effectively we can build, precisely due to the difficulties of marshalling large numbers of people. If a team of 4 can do the same as a team of 10, you should always pick the team of 4, even if they are more expensive put together than the 10.
My primary use of LLM tools is as a collaborator.
I agree that if you try to use the LLM as a wholesale outsourcing of your thought process the results don’t scale. That’s not the only way to use them, though.
I have absolutely been on projects where there were too many cooks in the kitchen, and adding more people to the team only led to additional chaos, confusion, and complexity. Ever been in a meeting where a designer, head of marketing, and the CTO are all giving feedback on what size font a button should be? I certainly have, and it's absurd.
One of my worst experiences arose from having a completely incompetent PM. Absolutely no technical knowledge; he couldn't even figure out how to copy and paste a URL if his life depended on it. He eventually had to be removed from a major project I was on, and I was asked to take over PM duties while also doing my dev work. I was actually happy to do so, because I was already spending hours babysitting him; now I could just get the same tasks done without the political BS.
Could adding many AI tools to a project become problematic? Maybe. But let's not pretend throwing more humans at a project is going to lead to some synergistic promised land.
ALL OF IT is meaningless. It's a pointless discussion.
The full PDF is available for download. It's mostly a series of essays, so you can pick and choose and read nonlinearly. It's worth thinking about beyond nihilistic takes.
the speed is real but it mostly just moves where I spend my time. less typing, more reading and testing. which is... fine? but it's not the 10x thing people keep claiming
I think it would typically have taken you longer.
That's actually highly doubtful to me.
Tons of studies and writing about how reading and debugging code is wildly more time consuming than writing it. That time goes up even more when you're not the one that wrote the code in the first place. It's why we've spent decades on how to write readable/maintainable code.
So either all this shit about reading/maintaining code being difficult was lies and we've spent decades wasting our time or AIs can only improve productivity if you stop verifying/debugging code.
So I find it very unlikely that it would have taken more than a couple hours to just write it the first time.
> AI systems can internalize the textbook knowledge of a field and apply it coherently at scale. AI can now reliably operate within established engineering practice. This is a genuine milestone that removes much of the drudgery of repetition and allows engineers to start closer to the state of the art.
This matches my experience. There is a lot of code that we probably should not need to write and rewrite anymore, but we still do, because this field has largely failed at deriving complete and reusable solutions to trivial problems. There is a massive coordination problem that has fragmented software across the stack, and LLMs provide one way of solving it: generating some of the glue and the otherwise trivial but expensive, unproductive interop code.
But the thing about productivity is that it's not one thing and cannot be reduced to an anecdote about a side-project, or a story about how a single company is introducing (or mandating) AI tooling, or any single thing. Being able to generate a bunch of code of varying quality and reliability is undeniably useful, but there are simply too many factors involved to make broad sweeping claims about an entire industry based on a tool that is essentially autocomplete on crack. Thus it's not surprising that recent studies have not validated the current hype cycle.
[0] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
https://www.antifound.com/posts/advent-of-code-2022/
So much of our industry has spent the last two decades honing itself into a temple built around the idea of "leet code". From the interview to things like advent of code.
Solving brain teasers and knowing your algorithms cold in an interview was always a terrible idea. The sort of engineers it invited to the table, and the kinds of thinking it propagated, were bad for our industry as a whole.
LLMs make this sort of knowledge moot.
The complaints about LLMs lack any information about the domains being worked in, the means of integration (deep in your IDE vs. cut-and-paste into vim), and what you're asking it to do (in a very literal sense): the critical factors that remain "unaired" in these sorts of laments.
It's just hubris. The question not being asked is "Why are you getting better results than me, am I doing something wrong?"
I'm not sure if this is a direct response to the article or a general point. The article includes an appendix about my use of LLMs and the domains I have used them in.
For someone like that, LLMs are much closer to literally replacing what they do, which seems to explain a lot of the complaints. They’re also not used to working at a higher level, so effective LLM use doesn’t come naturally to them.
1. Assume you're to work on product/feature X.
2. If God were to descend and give you a very good, reality-tested spec:
3. Would you be done faster? Of course, because as every AI doomer says, writing code was never the bottleneck!!1!
4. So the only bottleneck is getting to the spec.
5. Guess what AI can help you with as well, because you can iterate out multiple versions with little mental effort and no emotional sunk cost investment?
ergo coding is a solved problem
- must be using the latest state of the art model from the big US labs
- must be on a three digit USD per month plan
- must be using the latest version of a full major harness like codex, opencode, pi
- agent must have access to linting, compilation tools and IDE feedback
- user must instruct agent to use test driven development and write tests for everything and only consider something done if tests pass
- user must give agent access to relevant documentation, ie by cloning relevant repositories etc
- user must use plan mode and iterate until happy before handing off to agent
- (list is growing every month)
---
if the author of a blog post about AI coding doesn't respect all of these, reading his blog posts is a waste of time because he doesn't follow best practices
I would honestly appreciate constructive feedback on LLM usage, because, as I stated, I am constantly having to rework code that LLMs generate for me. The value I get from LLMs is not in code generation.
> LLMs entice us with code too quickly. We are easily led.
Arguably _is_ your argument. That people aren't doing the above and it's causing problems. You probably agree that just spinning up Claude code on the regular plan without doing the above can still generate a fuck-ton of code but that shouldn't be used as evidence either for or against AI effectiveness.
Maybe knock it off since the rules changed to not allow AI comments.
You can because, I guess, your project may have a small scope, few people working on it, no dependencies etc.
I cannot, because each line that I change has an effect in millions of other lines and hundreds of other people, and millions of users.
Different situation, different needs.
That sounds like an architectural problem.
That trade can still make sense for a throwaway MVP. Most people underestimate how fast the maintenance bill adds up once the first non-obvious bug report lands.
It's like saying "don't write code because we will have to debug it later".