You never needed 1000s of engineers to build software anyway; Winamp and VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk to each product. And now with AI that might be even harder to avoid. This would mean 1000s of do-everything websites in the best case, or billions of apps that each do one thing terribly in the worst case.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
I think it goes without saying that they will be writing "good code" in short order.
I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.
Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).
Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.
I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.
Think about it from a resource (calorie) expenditure standpoint.
Are you expending more resources on writing the prompts than on just doing the work without them? That's the real question.
If you are expending more, which is what Simon is getting at - are you really better off? I'd argue not, given that this can't be sustained for hours on end. Yet the expectation from management might be that you should be able to sustain this for 8 hours.
So again, are you better off? Not in the slightest.
Many things in life are counter-intuitive and not so simple.
P.S. You're not getting paid more for increasing productivity if you are still expected to work 8 hrs a day... lmao. Thankfully I'm not a SWE.
Whether you are "better off or not" is a separate topic, and I never suggested one way or the other.
Simon's point is that engineers can be so productive with these tools that it is tempting to work (much) longer.
You're a time waster, stop posting and creating noise.
Does that sound familiar?
They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
It seemed to me that improvements due to training (i.e. the model) in 2025 were marginal. The biggest gains were in structuring how the conversation with the LLM goes.
But what asymptote are they approaching? Average code? Good code? Great code?
Let me repeat myself.
I think it goes without saying that they will be writing "good code" in short order.
On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”
The "average" framing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did, because they can take arbitrary input from the codebase.
Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
> Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
The problem is that that's the same skill required to safely use AI tools. You need to essentially audit its output, ensure that you have a sensible and consistent design (either supplied as input or created by the AI itself), and 'refine' the prompts as needed.
AI does not make poor engineers produce better code. It does make poor engineers produce better-looking code, which is incredibly dangerous. But ultimately, considering the amount of code written by average engineers out there, it actually makes perfect sense for AI to be an average engineer — after all, that's the bulk of what it was trained on! Luckily, there's some selection effect there since good work propagates more, but that's a limited bias at best.
From what I've found it's very easy to ask the AI to look at code and suggest how to make the code maintainable (look for SRP violations, etc, etc). And it will go to work. Which means that we can already build this "quality" into the initial output via agent workflows.
The former is always preferred in the context of product development but poses a key-person risk. Apple in its current form is a representation of this: Steve did enough work to keep the company going for a decade after his death. Now it's sort of lost as to where to go next. But on the flip side, look at its market cap today vs 2000.
Many types of software have essential complexity and minimal feature sets that still require hundreds or thousands of software engineers. Just 4 people is simply not enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software for factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandboxing, GUI, etc.) that users want from a modern operating system cannot be delivered by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screenshot tool, is I think 1 developer in Turkey.
I am - of course - talking about a perfect approach with everyone focused on not f***ing it up ;)
So everything stays exactly the same?
No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
Except for some very well-maintained software, some of the mundane things we do today waste so many resources it makes me sad.
Heck, the memory use of my IDE peaks at VSCode's initial memory consumption, and I'd argue that my IDE will draw circles around VSCode while sipping coffee and compiling code.
> for no reason other than our own arrogance and apathy.
I'll add greed and apparent cost reduction to this list. People think they win because they reduce time to market, but that time penalty is offloaded onto users. Developers gain a couple of hours once; we lose the same amount of time every couple of days waiting on our computers.
I once read a comment by a developer that can be paraphrased as "I won't implement this. It'll take 8 hours. That's too much". I wanted to plant my face into my keyboard full-force, not kidding.
Heck, I tuned/optimized an algorithm for two weeks, which resulted in 2x-3x speedups and enormous memory savings.
We should understand that we don't own the whole machine while running our code.
Thanks for sharing the demo!
Haha, I know. Just worded like that to mean that even a P-II can do many things if software is written well enough.
You're welcome. That demo single-handedly threw me down the high-performance computing path. I thought: if making things this efficient is possible, all the code I write will be as optimized as the constraints allow.
Another amazing demo is Elevated [1]. I show people its video and ask them to guess the binary and resource size. When they hear the real value, they generally can't believe it!
Cheers!
> We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
I feel like I read this exact same take on this site for the past 15 years.
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads feel.
I’d let you know how I feel but I’m too busy restarting Solidworks after its third crash of the day. I pay thousands a year for the privilege.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue is bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
Validation was always the hard part because great validation requires great design. You can't validate garbage.
What is the point of even mentioning this? We live in reality. In reality, there are countless companies with thousands of engineers making each piece of software. Outside of reality, yes you can talk about a million hypothetical situations. Cherry picking rare examples like Winamp does nothing but provide an example of an exception, which yes, also exists in the real world.
I’ve never seen a product/project manager ask themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user or the business.
I’ve seen many people (even myself) thinking the same: if I quit, or something happens to me, there will be no one who knows how this works or how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up or take over the task in no time at all.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that the people can handle working without managers. Even agile itself recognizes, in the 12 principles, that it requires a team of special people to be able to handle agile.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitionary tool. One day up and telling your developers "Good news, developers. We fired all the managers. Go nuts!" obviously would be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, and build it in 2 weeks. Results: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, and applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something that works. Results: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills making it. It took me less than an hour.
The toy interpreter is useless in both scenarios, but scenario 1 pays you back for the 2-week effort, while scenario 2 is a vanity project.
I think there will be a lot of slop and a lot of useful stuff. But also, what I did was just an experiment to see if it is possible; I don't think it is usable, nor do I have any plans to turn it into a new language. And it was done in less than 3 hours of total time.
So, for example, if you want to try new language features, like, say, total immutability or nullability as a type, you can build a small language and try writing code in it. Instead of spending weeks on it, you can do it in hours.
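To make "nullability as a type" concrete (a minimal sketch; the function and names are just illustrative): TypeScript with strictNullChecks already has these semantics, which is roughly the kind of rule a toy-language experiment like this lets you prototype:

    // With strictNullChecks, null is not part of string:
    // a value that might be absent must say so in its type.
    function findUser(id: number): string | null {
      return id === 1 ? "alice" : null;
    }

    const userName = findUser(2);

    // userName.toUpperCase();  // compile error: 'userName' is possibly 'null'

    if (userName !== null) {
      console.log(userName.toUpperCase()); // OK: narrowed to plain string
    }

The point of the toy-language exercise is being able to tweak exactly these rules (e.g. making every binding immutable) and feel the consequences in real code within hours.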
Also, why use double equals to mutate variables?
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that, for now I just time box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
Now I sit on an idea for a long time, writing documentation/specs/requirements because I know that the code generation side of things is automated and effortlessly follows from exhaustive requirements.
The size of the chunk varies heavily depending on what I’m doing, ofc.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
My experience with LLMs is that they will call any idea a good idea, one feasible enough to pursue!
Their training to be a people-pleaser overrides almost everything else.
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
When you jump straight into execution because it’s easy to do so, you lose the distinction.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security, and even though the AI built Openclaw almost entirely, the authors had to collaborate with security experts to secure the whole project.
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
You lose that if the agent builds it for you, though; there is no iteration cycle for you, only for the agent. This means you are missing out on a bunch of learning that you would previously have gotten from actually writing something.
Prior to agents, more than once a week I'd be writing some code and pick up some new trick/technique/similar. I expect if you feel that there are no programming skills or tricks left for you to learn, then sure, you aren't missing out on anything.
OTOH, I've been doing this a long time, and I still learn new things (for implementation, not design) on each new non-trivial project.
That's one way. Another way is to keep the idea in your head (both actively and "in the background") for days or weeks, and then eventually you sit down, write a document, and get 99% of the requirements down perfectly. Then implementation can start.
Personally, I prefer this hammock-style development; to me it seems better at producing software that makes sense and solves real problems. Meanwhile, "build something to discover" is usually best when you're working with people who need to see something to believe there is progress, but the results are often worse and less well-thought-out.
It's better to have a solid, concrete idea of the entire system you want to build written down, one that has ironed out the limitations, requirements, and constraints, before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code first without knowing what you are solving, or getting the AI to generate something half-working that breaks easily, then changing it yet again until it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
Before, I would narrow things down to only the most potentially economically viable, and laugh at idea guys who were married to the one single idea of their life as if it were their only chance, seemingly not realizing they were competing with people who get multiple ideas a day.
Back to the aforementioned epiphany: it reminds me of the world of Star Trek, where everything was developed for its curiosity and utility instead of money.
Those things don't excite you any more. Plus, the fact that you no longer exercise your brain at work any more. Plus, the constant feeling of FOMO.
It deflates you, faster.
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load - filing and accounting is largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like its nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
See "Variable Ratio Schedule" https://www.simplypsychology.org/schedules-of-reinforcement....
Many programmers became programmers because they found the idea of programming fascinating, probably in their middle-school days. Then they went on to become professionals. Then they burned out and, if they were lucky, transitioned into management.
Of course not everyone is like that, but you can't say it isn't common, right?
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
Nah. You can definitely do both. A labor organization of any meaningful size needs management. A labor union is effectively a business in its own right, after all. Some unions even opt to register as corporations, and some see smaller unions rise up to protect workers from the larger union!
And certainly a tech union, to be effective, would have to be humongous given how easy it is to move the work around.
Definitely not by posting on right-wing social media websites.
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
It is.
Efficiency gains never translating into more free time is a problem that runs deep in our economic system. If we want a fix, we need to change the whole system.
"This time, its going to be the correct version of socialism."
While I'd agree most of its proponents (like myself) also favor other left-wing policies, I'm just saying it doesn't need to be.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
Repeat.
Yeah, like tech workers have rights similar to union workers'. We literally have zero power compared to any previous group of workers. Organizing labour can't even happen in tech, as tech has a large percentage of immigrant labour who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
I'm also curious as to what you do, where you do it, and who you work for that makes you feel like you have zero power.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
I can harvest crops by hand, but a machine can do it 100x faster. I'm not paid 100x though so it's a bad deal - destroy the machines.
Now that code is cheaper to write, the real advantage is who can imagine the best product.
I'm happy to compete at that level.
https://ers.usda.gov/sites/default/files/_laserfiche/publica...
> Now that code is cheaper to write, the real advantage is who can imagine the best product. I'm happy to compete at that level.
I’d probably be as happy as you, if I had such a big ego.
It moved outside the US.
What is wrong with the status quo? A competitive market that keeps labor here.
Tell me what the LLM impact is on your work, given that your work is not writing about AI.
Or, if one wishes for a more explicit noise filter: don’t tell me what AI can do. Show me what you shipped with it that isn’t about AI.
From this weekend: https://github.com/simonw/sqlite-history-json and https://github.com/datasette/datasette-sqlite-history-json
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude (or more) of detailed plans in the same amount of time; poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the success factors of completing projects within the budget, time, and resources planned.
* Made Termux accessible enough for me to use.
* Made an MUD client for Emacs.
* Gotten Emacs and Emacspeak working on Termux.
* Gotten XFCE to run with Orca and AT-SPI communicating to make the desktop environment accessible.
None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, calendar, Org-journal, desktop Chromium, etc., all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.
I just use one agent, Codex. I don't do the agent swarms yet.
This is actually a really good point that I have kind of noticed when using AI for side projects, i.e. on my own time. The allure of thinking "Oh, I wonder how it will perform with this feature request if I give it this amount of info".
Can't say I would put off sleep for it but I get the sentiment for sure.
I find it a lot more challenging to cut my losses on the latter when it's on a good run (and often even when I know I could just write it by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments when my mind has drifted to thinking about it the exact way you describe here.
Sometimes it looks like some of that comes from AI generally being very very sure of its initial idea "The issue is actually very simple, it's because..." and then it starts running around in circles once it tries and fails, you can pull it out with a bit more prompting, but it's tough. The thing is, it is sometimes actually right, from the very beginning, but if it isn't...
This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.
Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating tbh.
When I finally settled in and did some hours of manual work, I felt much better because of it. I did benefit from my planning with Claude, though...
You can watch what it's doing and eyeball the code to know if it's going in the right direction, and then steer it towards what you want.
If you have it doing something you have no clue about, then it's a total gamble.
I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.
The workflow and responsibilities are very different. It can be a painful transition.
There has always been a strong undercurrent of developers feeling superior to managers and PMs, and now those developers are being forced to confront the reality of a manager's or PM's experience.
Work is changing, and the change is only going to accelerate.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
We just saw the productivity growth in the vibe coded GitHub outages.
Edit: Not to mention, this is what you get for not unionizing earlier. Get good or get cut.
The worst part is that it’s so convincing: not only does everyone who can’t make it work feel gaslit about it, but some people even pretend that it works for them so they don’t feel like they’re missing out.
I remember the last time this happened and people were convinced (for like 2 years) that a gif of an ape could somehow be owned and was worth millions of dollars.
I'm chalking my poor experience to being too cheap to pay $200 a month for Claude Max 20x so I can run the multiple agents that need to supervise each other.
Overheard a couple of conversations in the office how one IC spent all weekend setting up OpenClaw, another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Just like with a CNC machine, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
What domains do you work in? This description does not match my experience whatsoever.
Still not as accurate as a CNC machine; maybe an early-model typewriter?
What did you try to do where the LLM failed you?
CNC relies on precise formal languages like G-code, whereas an LLM relies on imprecise natural language.
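To make the contrast concrete (a toy sketch; the G-code commands are real, everything else is illustrative): every token in G-code has exactly one defined meaning, while natural language leaves the machine to fill in the gaps.

    // Real G-code commands: G21 (millimetres), G90 (absolute coordinates),
    // G1 (linear move at feed rate F, in mm/min). Each token is unambiguous.
    const toolpath: string = [
      "G21",                  // units: millimetres
      "G90",                  // absolute positioning
      "G1 X10.0 Y5.0 F200",   // move to exactly (10.0, 5.0) at 200 mm/min
    ].join("\n");

    // The equivalent "program" for an LLM is a prompt, ambiguous by design:
    // near which corner? how small? which units?
    const prompt: string = "cut a small rectangle near the corner";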
It's about presenting externally as a "bad ass" while:
A) Constantly drowning out every moment of your life with low quality background noise.
B) Aggressively polluting the environment and depleting our natural resources for no reason beyond pure arrogance.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
Previous discussion of the original article: https://news.ycombinator.com/item?id=46945755
Yeah, good luck with that.
Corporations have tried to reduce employee burnout exactly zero times.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not “getting maximum result” from using LLMs, but I feel as if they have been a real force multiplier, in my work, and don’t feel burnt out, at all.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
https://en.wikipedia.org/wiki/Parkinson%27s_law
¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.
You don’t know that. For all you know, your life would’ve been richer if you’d read those thoughts after they’d been left to stew for longer. For all you know, if that had happened you would’ve said “most” instead of “many”. Or maybe not; no one can say for sure until it happens.
> that felt “obvious.”
It’s not about feeling obvious. There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
> I’ve picked up an incredible number of useful tips and tricks. (…) I also love how he documents small snippets and gists of code that are easy to link to and cross-reference.
That is (I think clearly, but I may be wrong) not what I’m talking about. A code snippet is very far removed in meaning from a human insight. What I wrote doesn’t just concern Simon’s readers, but Simon as a person. Being constantly “on” isn’t good, it leads to exhaustion (as reported), which leads to burnout. While my first paragraph in the previous comment was a criticism, it was merely an introduction to the rest of the post which was given in empathy. I want us all to do and be better.
That's exactly what I try to do.
I wrote more about my approach to that here: https://simonwillison.net/2024/Dec/22/link-blog/#trying-to-a...
I have a link blog which is links plus commentary. Each post takes 10-30 minutes to write. They're exactly like social media, though I try to add something new rather than just broadcast other people's content. https://simonwillison.net/blogmarks/
I collect quotations, which are the quickest form of content, probably just two minutes each. https://simonwillison.net/quotations/
I recently added "notes" which are effectively my link blog without a link. Very social media! I use those for content that doesn't deserve more than a couple of paragraphs: https://simonwillison.net/notes/
And then there are "entries". That's my long-form content, each taking one to several hours (or occasionally more, e.g. my annual LLM roundups). Those are the pieces of long-form writing where I aim to "reflect on a concept thoroughly": https://simonwillison.net/entries/
https://news.ycombinator.com/item?id=46955703#46958713
> There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
I’m not saying “don’t share the obvious”, because what is obvious to one person won’t be for someone else. What I am advocating for is thinking more before doing so. In your posts I have repeatedly seen you advocate for opposing ideas at different (but not too distant) points in time. Often you also share a half-baked thought which only later gets the nuance it requires.
More often than not, it’s clear the thoughts should have been stewed for longer to develop into better, more powerful and cohesive ideas. Furthermore, that approach will truly give you back time and relaxation. I take no pleasure in you being exhausted, that is a disservice to everyone.
One of my core beliefs is that "two things can be true at the same time". I write about opposing ideas because they have their own merits.
I believe that most of the criticisms of generative AI are genuine problems. I also believe that generative AI provides incredible value to people who learn how to use it effectively.
I like to think I'm consistent about most of the topics I write about though. Got any examples that stood out to you of my inconsistency?
Which is, of course, true in some cases and false in others. But again, not what I’m talking about.
> Got any examples that stood out to you of my inconsistency?
Sorry, I don’t. You publish too often and obviously I’m not going to trawl through a sea of posts to find specific examples. I’m not trying to attack you. Again, my initial post was written in empathy; you’re of course free to take it in earnest and reflect on it or ignore it.
Also, I haven’t called you inconsistent. You’re using that word. I’m not saying you’re constantly flip-flopping or anything like that, and it’s not inconsistent to change one’s mind or evolve one’s ideas.
It feels like you’re doing in these comments what I have just described: going in too fast with the replies without really thinking them through, without pausing to understand what the argument is. It’s difficult to have a proper, honest conversation if I’m trying to be deliberate towards you but you’re being solely reactive. That is, frankly, exhausting, and that’s precisely what I’m advocating against.
Your primary argument here is that it's better to sit with ideas for a while before writing about them.
My counter-argument is that's what I do... for my long form writing (aka "entries"). My link blog is faster reactions and has different standards - while I try to add value to everything I write there it's still a high volume of content where my goal is to be useful and accurate and interesting but not necessarily deep and thoughtful.
And yeah, you're absolutely right that the speed at which I comment here is that same thing again. I treat comments like they were an in-person conversation. They're how I flesh out ideas.
I wrote about my philosophy around blogging in one of my long-form pieces a few years ago: https://simonwillison.net/2022/Nov/6/what-to-blog-about/
> I’ve definitely felt the self-imposed pressure to only write something if it’s new, and unique, and feels like it’s never been said before. This is a mental trap that does nothing but hold you back.
That's why I like having different content types - links and quotes and notes and TILs - that reduce the pressure to only publish if I have something deep, thoughtful and unique to say.
The way Simon offers to send you less content if you sign up for his paid newsletter always made me suspicious that the goal could be to overwhelm on purpose.
You pay for the filter before FOMO sets in.
How do you know that? You don't think he's being paid for all this marketing work?
I also make ~$600/month from the ads on my site - run by EthicalAds.
I don't take payment to write about anything. That goes against my principles. It would also be illegal in the USA (FTC rules) if I didn't disclose it, and most importantly it would damage my credibility as a writer, which is the thing I value most.
The big potential money maker here is private consulting based on the expertise (and credibility) I've developed and demonstrated over time. I should do more of that!
I have a set of disclosures here: https://simonwillison.net/about/#disclosures
And who wants to be working on 3 projects simultaneously? This is the new "multitasking" agenda from generations ago with a new twist: now I just manage prompts and agents, bro! But the reality is: you think you're doing more than you actually are. Maybe Simon is just placating his inevitable AGI overlords so he will still be useful in the coming Altmania revolution? No idea. Either way, half the time I read his posts (only because they're posted here and I'm excited for his new discoveries) I can't stomach his drivel.
Here are some of my recent posts which I self-evaluate as "novel and compelling".
- Running Pydantic’s Monty Rust sandboxed Python subset in WebAssembly https://simonwillison.net/2026/Feb/6/pydantic-monty/ - demonstrating how easy and useful it is to be able to turn Rust code into WASM that can run independently or be used inside a Python wheel for Pyodide in order to provide interactive browser demos of Rust libraries.
- Distributing Go binaries like sqlite-scanner through PyPI using go-to-wheel https://simonwillison.net/2026/Feb/4/distributing-go-binarie... - I think my go-to-wheel utility is really cool, and distributing Go CLIs through PyPI is a neat trick.
- ChatGPT Containers can now run bash, pip/npm install packages, and download files https://simonwillison.net/2026/Jan/26/chatgpt-containers/ - in which I reverse engineered and documented a massive new feature of ChatGPT that OpenAI hadn't announced or documented anywhere
I remain very proud of my current open source projects too - https://datasette.io and https://llm.datasette.io and https://sqlite-utils.datasette.io and a whole lot more: https://github.com/simonw/simonw/blob/main/releases.md
Are you ready to say none of that is "novel or compelling", in good faith?
My answer right now is: you can't answer that question yet, and the fact that you are looking for immediate validation shows you're just building random things. Which is great, if that's what you want to do. But is it truly novel or compelling? Given you just move on to the next thing, there seems to be a lack of direction, and in that regard I would say: no.
Just because you're doing more doesn't mean anything unless it's truly useful for you or others. I just don't think that's the case here. It's a new form of move fast and break things. And while that can have net positives, we are also very aware it has many net negatives.
I don't think you're familiar with my work at all.
With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽
It’s insane how productive I am.
I used to have “breaks” looking for specific keywords or values to enter while crafting a yaml.
Now the AI makes me skip all of that, essentially.
alpha sigma grindset
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics showed that women were actually washing clothes more often, doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
Also this post should link to the original source as well.
As per the submission guidelines [1]:
”Please submit the original source. If a post reports on something found on another site, submit the latter.”
[1] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
Literal work junkies.
And what’s the point? If you’re working on your own project then “just one more feature, bro” isn’t going to make next Minecraft/Photopea/Stardew Valley/name your one man wonder. If you’re working for someone, then you’re a double fool, because you’re doing work of two people for the pay of one.
In reality, it's a partner who helps with the dishes by bringing home 3 neighbours' worth of dirty dishes. Then they say, "You're doing a great job with how fast you're scrubbing those dishes."
It's good that people so quickly see it as impulsive and addicting, as opposed to the slow creep of doomscrolling and algorithmic feeds.
At least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop.
The beauty of work laptop is that either I work or I don’t. Laptop open - work time, laptop closed - goodbye, see you on Monday.
People keep on making the same naive assumption that the total amount of work is a constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it. And it adds up to way more than what was asked before.
That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
If you look at the history of computers and software engineering (compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, etc.), there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
And every time, you have people afraid of losing their jobs. Sometimes jobs indeed disappear because that particular job ceases to exist when technique X gets replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economic cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.
AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for prompting effectively. I'd use the word metaprogramming, but of course that's one of those improvements we already had.
Right. Because the demand for hand-written code will be high enough that you will keep your job?
Or did you mean that you expect to lose the current job (writing software) and have a new job (directing an agent to write software)?
You really expect to get paid the same doing a low-skill job (directing an agent) as you did the high-paid one (writing software)?
After all, your examples
> If you look at the history of computers and software engineering including compilers, ci/cd, frameworks/modules/etc. functional and OO programming paradigms, type inference, etc. There's something new every few years. Every time we make something easier and cheaper, demand goes up and the amount of programmers increases.
Were all, with the exception of the invention of high-level languages, increases in skill requirements for practitioners, not decreases.
You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.