Writing code is flow-state compatible. You build up a mental model, hold it in working memory, and produce output in a continuous stream. Reviewing LLM output requires constant context-switching between "what does this code do" and "is this what I actually wanted." That's cognitively expensive in a way that no amount of prompt engineering fixes.
The exhaustion isn't primarily from bad prompts or slow feedback loops. It's that the LLM turned you from a writer into a full-time reviewer, and reviewing is harder work per unit of output than writing. The productivity gain is real, but the effort doesn't feel lower because you traded one kind of cognitive effort for a more draining one. The task-switching cost between "understanding intent" and "verifying implementation" is well-studied in cognitive science and it's not something you can optimize away with better technique.
I assume that until LLMs are 100% better than humans in all cases, having to stay in the loop puts a pretty hard upper bound on what I can do, and it seems like we’ve roughly hit that limit.
Funny enough, I get this feeling with a lot of modern technology. iPhones, all the modern messaging apps, etc. make it much too easy to fragment your attention across a million different things. It’s draining. Much more draining than the old days.
The code part is trivial, and in some ways a waste of time compared to the time spent making decisions about what to build. Sometimes it's even procrastination to avoid thinking about what to build, like people who polish their game engine (easy) to avoid putting in the work to plan a fun game (hard).
The more clarity you have about what you’re building, the larger the blocks of work you can delegate or outsource.
So I think one overwhelming part of LLMs is that you don’t get the downtime of working on implementation, since that’s now trivial; you’re stuck doing the hard part, steering and planning. But that’s also a good thing.
I've written it up here, including the transcript of a real session:
https://www.stavros.io/posts/how-i-write-software-with-llms/
And when you make the decisions, it is you who is responsible for them. Whereas if you just do the coding, the decisions about the code are left largely to you; nobody much sees them, only how they affect the outcome. Now the LLM is in that role, responsible only for what the code does, not how it does it.
The result is that I could say it was code that I myself approved of. I can't imagine a time when I wouldn't read all of it; when you just let them go, the results are so awful. If you're letting them go and reviewing at the end, like a post-programming review phase, I don't even know if that's a skill that can be mastered while the LLMs are still this bad. Can you really master Where's Waldo? Everything's a mess, but you're just looking for the part of the mess that has the bug?
I'm not reviewing after I ask it to write some entire thing. I'm getting it to accomplish a minimal function, then layering features on top. If I don't understand where something is happening, or I see it's happening in too many places, I have to read the code in order to tell it how to refactor the code. I might have to write stubs in order to show it what I want to happen. The reading happens as the programming is happening.
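For illustration, here's a minimal sketch of the kind of stub I mean, assuming a hypothetical CSV-export feature; the names, types, and feature are made up, not from any real session:

```python
# Hypothetical stub handed to the agent to pin down the shape I want.
# Names, types, and the CSV-export feature are illustrative only.

def export_report_csv(report_id: int, out_path: str) -> None:
    """Fetch the report, flatten it to rows, and write CSV to out_path.

    Must go through fetch_report() and flatten_to_rows() below;
    do not re-implement them inline.
    """
    raise NotImplementedError  # the agent fills this in


def fetch_report(report_id: int) -> dict:
    ...  # pretend this already exists elsewhere in the codebase


def flatten_to_rows(report: dict) -> list[list[str]]:
    ...  # pretend this already exists elsewhere in the codebase
```

The stub constrains where the logic lives and which helpers it must use, so the code review happens up front, in the design, rather than after the fact.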
You might think that the "constant" task switching is draining, but I don't switch that frequently. Often I keep the main focus on one task and use the waiting time to draft some related ideas/thoughts/next prompt. Or browse through the code for light review/understanding. It also helps to have one big/complex task and a few simpler things running concurrently. And since the number of details you need to keep "loaded" in your head per task is smaller, switching has less cost, I think. You can also "reload" much quicker by simply chatting with the agent for a minute or two if some details have faded.
I think a key thing is to NOT chase after keeping the agents running at max efficiency. It's ok to let them sit idle while you finish up what you're doing. (Perhaps bad for KV-cache efficiency though - I'm not sure how long they keep the cache.)
(And obviously you should run the agent in a sandbox to limit how many approvals you need to consider)
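A minimal sketch of what that sandboxing can look like, assuming Docker and a generic agent CLI; "agent-image" and "agent" are placeholders, not a specific product:

```python
# Minimal sketch: run a coding agent inside a throwaway container so its
# file and shell access is confined to the mounted project directory.
import subprocess
from pathlib import Path

def run_agent_sandboxed(project_dir: str, prompt: str) -> int:
    project = Path(project_dir).resolve()
    cmd = [
        "docker", "run", "--rm", "-it",
        "--network", "none",       # no network; relax if the agent needs API access
        "-v", f"{project}:/work",  # only the project dir is visible inside
        "-w", "/work",
        "agent-image",             # placeholder image with the agent preinstalled
        "agent", prompt,           # placeholder CLI invocation
    ]
    return subprocess.run(cmd).returncode
```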
[1] I use the urgent-window hint as a subtle cue for which workspace contains an agent ready for input.
EDIT: disclaimer - I'm relatively new to using them, and have so far not used them for super complex tasks.
But I absolutely loathe reviewing these generated PRs - more so when I know the submitter themselves has barely looked at the code. Now corporate has mandated AI usage and is asking people to do 10k LOC PRs every day. Reviewing this junk has become exhausting.
I don’t want to read your code if you haven’t bothered to read it yourselves. My stance is: reviewing this junk is far more exhausting. Coding is actually the fun part.
That's a big red flag if I ever saw one. Corporate should be empowering the engineering team to use AI tooling to improve their own process organically. Is this true or an exaggeration? If it's true, I'd start looking for a more balanced position at a more disciplined org.
I’ve certainly seen my share of what I call slot-driven development, where a developer just throws things at the wall until something mostly works. And plenty of cut-and-paste development.
But it’s far from the majority. It’s usually the same few developers at a company doing it, while the people who know what they’re doing furiously work to keep things from falling apart.
If the majority of devs were doing this, nothing would work. My worry is that AI lets the bad devs produce this kind of work on a massive scale that overwhelms the good devs' ability to fight back or to even comprehend the system.
Many top labs [1] [2] already have heavily automated code review, and it's not slowing down. That doesn't mean I'm trusting everything blindly, but yes, over time I should have to handle fewer and fewer of the "lower level" tasks myself, and it's a good thing if the tooling can take them.
[1] https://openai.com/index/harness-engineering/
[2] https://claude.com/blog/code-review
Further, I want to vent about two things:
- Things can be improved.
- You are allowed to complain about anything, while not improving things yourself.
I think the mid-2010s really popularized self-improvement in a way that you can't really argue with (if you disagree with "put in more effort and be more focused", you're obviously just lazy!). It's funny, because the point of engineering is to find better solutions, but technically yes, an always-valid solution is just "suck it up".
But moreover, if you do not allow these two premises, what ends up happening in practice for a lot of people is that you can just interpret any slight pushback as "oh, they're just a whiner", and if they're not doing something to fix their problem this instant, that "obviously" validates your claim (and even if they are, it doesn't count; they should still not be a "debbie downer", etc.).
Sometimes a premise can sound extreme, but people forget that premises do not exist in a complete logical vacuum: you actually live out and believe said premises, and in taking on a certain position, it's often more about what follows downstream from the behavior than about the words themselves.
You know you can leave abusive relationships. Ditch the clanker and free your mind.
I mostly use YOLO mode, which means I'm not constantly watching them and approving things they want to do... but it also means I'm much more likely to have 2-3 agent sessions running in parallel, resulting in constant switching, which is very mentally taxing.
There's probably a Codex equivalent, but I don't know what it is.
Working with a coding agent all day can be exhilarating but also exhausting - maybe it’s because consequential decisions are packed more tightly together. And yes, cognition still matters, for now.
same thing happened with crypto - the underlying technology is cool but the community is what makes it so hated
I think the exhausting part is probably more tied to evaluating the work the agent is doing; understanding its thought process and catching where it gets hung up can be tedious in the current state of AI reasoning.
As somebody who has been coding for just shy of 40 years and has gone through the actual pain of learning to run a high-level and productive dev team, I can say your experience does not match mine. Even great devs will forget some of the basics and make mistakes, and I wish every junior (hell, even seniors) were as effective as the LLMs are turning out to be. Put the LLM in the hands of a seasoned engineer who also has the skills to manage projects and mentor junior devs, and you have a powerful accelerator. I'm seeing the outcome of that every day on my team. The velocity is up AND the quality is up.
This is not my experience on a team of experienced SWEs working on a product worth $100M/year.
Agents are a great search engine for a codebase and really nice for debugging, but anytime we have one write feature code it makes too many mistakes. We end up spending more time tuning the process than it would take to just write the code, AND you are trading human context for agent context that gets wiped.
We've spent years reducing old debt and modernizing our application and processes. The places where we've made that investment are where we are currently seeing the additional acceleration. The places where we haven't are still stuck in the mud, but per your "search engine for a codebase" comment our engineers are starting to engage with systems they would not have previously touched.
There are areas, for sure, where LLMs fall down. That's where we need the experts to guide them and restructure the project so that it is LLM-friendly (which also just happens to be the same thing that makes the app better for human engineers).
And I'm serious about the quality comment. Maybe there's a difference in how your team is using the tools, but I have individuals on my team who are learning to leverage the tools to create better outputs, not just pump out features faster.
I'm not saying LLMs solve everything, FAR from it. But it's like giving a master weapon to an experienced warrior.
I would be curious to see if I'm just imagining this or it really is a trend.
It's amazing how right and wrong LLMs can be in the output they produce. Personally, the variance is too much for me... I can't stand it when they get the most basic stuff wrong. I much prefer doing things without output from an LLM.
Another way you can read this is as a new cult member who is chiding himself whenever he has an intrusive thought that Dear Leader may not be perfect after all.
My pet theory is that we haven't figured out the best way to use these tools yet, or even seen all the options. But that's a bigger topic for another day.
The only time I've felt something akin to this with a compiler is when I was learning Rust. But that went away after a week or two.