The details are what stop it from working, in every form it's been tried.
You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
There is no level of abstraction that saves you from this, because the last level is simply things happening in the world in the way you want them to, and it's really really complicated to engineer that to happen.
I think this is evident by looking at the extreme case. There are plenty of companies with software engineers who truly can turn instructions articulated in plain language into software. But you see lots of these not being successful for the simple reason that those providing the instructions are not sufficiently engaged with the detail, or have the detail wrong. Conversely, for the most successful companies the opposite is true.
Going back and forth on the detail in requirements and mapping it to the details of technical implementation (and then dealing with the endless emergent details of actually running the thing in production on real hardware on the real internet with real messy users actually using it) is 90% of what’s hard about professional software engineering.
It’s also what separates professional engineering from things like the toy leetcode problems on a whiteboard that many of us love to hate. Those are hard in a different way, but LLMs can do them on their own better than humans now. Not so for the other stuff.
[0] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
> Reality Has A Surprising Amount Of Detail
Every time we make progress, complexity increases and it becomes more difficult to make further progress. I'm not sure why this is surprising to many. We always do things to "good enough", not to perfection. Not that perfection even exists... "Good enough" means we tabled some things and triaged, addressing the most important ones. But now, to improve, those little things need to be addressed. This repeats over and over.

There are no big problems, there are only a bunch of little problems that accumulate. As engineers, scientists, researchers, etc., our literal job is to break problems down into many smaller problems and then solve them one at a time. And again, we only solve them to the good-enough level, as perfection doesn't exist. The problems we solve never were a single problem, but many, many smaller ones.
I think the problem is we want to avoid depth. It's difficult! It's frustrating. It would be great if depth were never needed. But everything is simple until you actually have to deal with it.
Our literal job is also to look for and find patterns in these problems, so we can solve them as a more common problem, if possible, instead of solving them one at a time all the time.
The fact is, one developer with Claude Code can now do the work of at least two developers. If that developer doesn't have ADHD, maybe that number is even higher.
I don't think the amount of work to do increases. I think the number of developers or the salary of developers decreases.
In any case, we'll see this in salaries over the next year or two.
The very best move here might be to start working for yourself and delete the dependency on your employer. These models might enable more startups.
By the same token (couldn't resist), I also would argue we should be seeing the quality of average software products notch up by now, given how long LLMs have been available. I'm not seeing it. I'm not sure it's a function of model quality, either. I suspect devs who didn't care as much about quality haven't really changed their tune.
I misunderstood two things for a very long time:
a) standards are not lower or higher; people are happy that they can do stuff at all, or a little to a lot faster, using software. Standards then grow with the people, as does the software.
b) of course software is always opinionated and there are always constraints, and devs can't get stuck in a recursive loop of optimization. But what's way more important: they don't have to, because of a).
Quality is, often enough, a matter of how much time you spend nitpicking even though you absolutely could get the job done. Software is part of a pipeline, a supply chain, and someone somewhere is aware of why it should be "this" and not something better, or not that other version the devs prepared knowing full well it wouldn't see the light of day.
I'm also not convinced it's a function of model quality. The model isn't going to do something the prompter doesn't even know to ask for. It does what the programmer asked.
I'll give a basic example. Most people suck at writing bash scripts. It's also a common claim about LLMs' utility. Yet they never write functions unless I explicitly ask. Here, try this command:
curl -fsSL https://claude.ai/install.sh | less
(You don't need to pipe into less, but it helps for reading.) Can you spot the fatal error in the code, where running it via curl-pipe-bash could cause major issues? Funny enough, I asked Claude and it asked me this: "Is this script currently in production? If so, I'd strongly recommend adding the function wrapper before anyone uses it via curl-pipe-bash."
The errors made here are quite common in curl-pipe-bash scripts. I'm pretty certain Claude would write a program with the same mistakes despite being able to tell you about the problems and their trivial corrections. The problem with vibe coding is you get code that is close. But close only counts in horseshoes and hand grenades. You get a bunch of unknown unknowns. The classic problem of programming still exists: the computer does what you tell it to do, not what you want it to do. LLMs just might also do things you didn't tell them to...
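For anyone wondering, the "function wrapper" Claude alluded to is the standard defence for curl-pipe-bash: put the whole script body inside a function and call it only on the very last line, so a connection that drops mid-download leaves bash with an unterminated function and a syntax error, instead of a half-executed script. A minimal sketch of the pattern (the URL and install steps here are made up, not the real installer):

    #!/usr/bin/env bash
    set -euo pipefail

    # Everything lives inside main(); nothing executes until the final line.
    # If the download is cut off partway, the file ends mid-function and
    # bash refuses to run it, rather than executing a truncated command
    # (picture an interrupted `rm -rf "$tmpdir/subdir"` losing its suffix).
    main() {
      tmpdir="$(mktemp -d)"
      trap 'rm -rf "$tmpdir"' EXIT
      mkdir -p "$HOME/.local/bin"
      curl -fsSL "https://example.com/tool.tar.gz" -o "$tmpdir/tool.tar.gz"
      tar -xzf "$tmpdir/tool.tar.gz" -C "$HOME/.local/bin"
    }

    main "$@"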
Anecdotally, this is what I see happening in the small in my own work - we say yes to more ideas, more projects, because we know we can unblock things more quickly now - and I don't see why that wouldn't extend.
I do expect to see smaller teams - maybe a lot more one-person "teams" - and perhaps smaller companies. But I expect to see more work being done, not less, or the same.
How much software is really required to be extensible?
None of this means that it will be the kinds of professional specialized software development teams that we're used to doing any of this work, but I have some amount of optimism that this is actually going to be a golden age for "doing useful things with computers" work.
One-man shops being the ideal, and I don't think there will be proportionately more of them.
It hasn't happened in software yet. I suppose this has to do with where software sits on the demand curve currently.
I'm imagining a few more shifts in productivity will make the demand vs price derivative shift in a meaningfully different way, but we can only speculate.
Of course it often isn't the same people whose jobs are disrupted who end up doing that new work.
It’s very similar to working with a college hire SWE: you need to break things down to give them a manageable chunk and do a bit of babysitting, but I’m much more productive than I was before. Particularly in the broad range of things where I know enough to know what needs to be done but I’m not super familiar with the framework to do it.
Once that trend maxes out it’s entirely plausible that the level of quality demanded will rise quickly. That’s basically what happened in the first dot com era.
But these days? We are selling products based on promises, not actual capabilities. I can't think of a more fertile environment for a lemon market than that. No one can be informed and bigger and bigger promises need to be made every year.
The first thing that comes to mind when I see this as a counterargument is that I've quite successfully built enormous amounts of completely functional digital products without ever mastering any of the details that I figured I would have to master when I started creating my first programs in the late 80s or early 90s.
When I first started, it was a lot about procedural thinking, like BASIC goto X, looping, if-then statements, and that kind of thing. That seemed like an abstraction compared to just assembly code, which, if you were into video games, was what real video game people were doing. At the time, we weren't that many layers away from the ones and zeros.
It's been a long march since then. What I do now is still sort of shockingly "easy" to me sometimes when I think about that context. I remember being in a band and spending a few weeks trying to build a website that sold CDs via credit card, and trying to unravel how cgi-bin worked using a 300 page book I had bought and all that. Today a problem like that is so trivial as to be a joke.
Reality hasn't gotten any less detailed. I just don't have to deal with it any more.
Of course, the standards have gone up. And that's likely what's gonna happen here. The standards are going to go way up. You used to be able to make a living just launching a website to sell something on the internet that people weren't selling on the internet yet. Around 1999 or so, I remember, a friend of mine built a website to sell stereo stuff. He would just go down to the store in New York, buy it, and mail it to whoever bought it. Made a killing for a while. It was ridiculously easy if you knew how to do it. But most people didn't know how to do it.
Now you can make a living pretty "easily" selling a SaaS service that connects one business process to another, or integrates some workflow. What's going to happen to those companies now is left as an exercise for the reader.
I don't think there's any question that there will still be people building software, making judgment calls, and grappling with all the complexity and detail. But the standards are going to be unrecognizable.
That being said, someone took your point that LLMs might be good at subsets of projects and ran with it, suggesting we should actually use LLMs for that subset as well.
But I digress (I provided more in-depth reasoning in another comment as well). If there is even a minute bug that slips past the LLM and code review for that subset, with millions of cars travelling through those points, assume that one single bug somewhere increases the traffic fatality rate by one person per year. Firstly, it shouldn't be used because of the inherent value of human life itself, but it doesn't hold up in a monetary sense either, so there's really not much reason I can see to use it.
That alone, over a span of 10 years, would cost $75-130 million (the value of a statistical life in the US ranges from roughly $7.5 million to $13 million).
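Spelled out, the arithmetic behind that range, taking the figures above at face value (one extra fatality per year, valued at $7.5-13 million, over 10 years):

    echo $(( 1 * 7500000  * 10 ))   # 75000000  -> $75M, low end
    echo $(( 1 * 13000000 * 10 ))   # 130000000 -> $130M, high end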
Sir, I just feel like if the point of the LLM is to employ fewer humans or pay them less, that is short-sighted, because I (if I were the state, and I think everyone will agree after the cost analysis) would much rather pay a few hundred thousand dollars, or even a few million, right now than lose $75-130 million (on the smallest scale, mind you; it can get far more expensive).
I am not exactly sure how we can detect the rate of deaths due to LLM use itself (the "1" in my numbers), but I took the most conservative figure.
And then there's the fact that we won't know whether LLMs might save a life, but I am 99.9% sure that won't be the case, and once again it wouldn't be verifiable either, so we are shooting in the dark.
And a human can do such a sensitive job with much better context (you know what you are working on, how valuable it is, that it can save lives and so on), whereas no amount of words can convey that danger to LLMs.
To put it simply, the LLM at times might not know the difference between code for this life-or-death machine and a sloppy website it created.
I just don't think it's worth it, especially in this context; even a single percent of LLM code might not be worth it here.
I had a friend who was in crisis while the rest of us were asleep. Talking with ChatGPT kept her alive. So we know the number is at least one. If you go to the Dr ChatGPT thread, you'll find multiple reports of people who figured out debilitating medical conditions via ChatGPT in conjunction with a licensed human doctor, so we can be sure the number's greater than zero. It doesn't make headlines the same way Adam's suicide does, and not just because OpenAI can't be the ones to say it.
If talking to ChatGPT helps anyone mentally, then sure, great. I can see why, but I am a bit concerned that if we remove a human from the loop we can get way too easily deluded as well, which is what is happening.
These are still black boxes, and in the context of traffic-light code (even partial use), it feels to me that the probability of it not saving a life significantly overwhelms the opposite.
As far as traffic lights go, this predates ChatGPT, but IBM's Watson, which is also rather much a black box where you stuff data in, and instructions come out; they've been doing traffic light optimization for years. IBM's got some patents on it, even. Of course that's machine learning, but as they say, ML is just AI that works.
> until they can reason, till they can hypothesize, till they can experiment, till they can abstract all on their own
at that point we will have to let them vote...

Now, I don't know which language they used for the project (could be Python, could be C/C++, could be Rust), but it's like saying "Python would have been good at subsets of that project": some impact already, and these Python tools only get better.
Did Python remove the jobs? No. Each project has its own use case, and in some LLMs might be useful, in others not.
In their project, LLMs might be useful for some parts, but the majority of the work was doing completely new things with a human in the feedback loop.
You are also forgetting the trust factor. Yes, let's have your traffic-light system written by an LLM, surely. Oops, the traffic lights glitched, all the Waymos (another AI) went berserk, and accidents and crashes happened that might cost millions.
Personally I wouldn't trust even a subset of LLM code, and would much rather have my country/state/city pay real developers who can be held accountable, with good quality-control checks at such critical points; "no LLM" in this context should be a must.
For context: suppose LLM use costs even 1 life every year. The value of 1 person is $7.5-13 million.
Over a period of 10 years, this really, really small LLM glitch ends up losing $75 million.
Yup, go ahead: save a few thousand dollars right now by not paying people enough in the first place and using an LLM instead, only to then lose $75 million (and that's the most conservative scenario).
My conclusion is rather that this is a very high-stakes project (emotionally, mentally, and economically), that AI is still a black box with a good chance of being much more error prone (at least in this context), that the chance of it missing something and causing that -$75 million and the deaths of many is the more likely outcome, and that in such a high-stakes project LLMs shouldn't be used; having more engineers on the team might be worth it.
> I doubt you have a clue regarding my suitability for any project, so I'll ignore the passive-aggressive ad hominem.
Aside from the snark directed at me: I agree. And this is why you don't see me on such a high-stakes project, and neither should you see an LLM, at any cost, in this context. These should be reserved for people who have both experience in the industry and are made of flesh.
I don't see how it's logical to deploy an LLM here in any case. The problem is that the chances of LLM code slipping through are much higher than for code from people who can talk to each other, decide in meetings exactly how they wish to write it, and have tens of years of experience to back it up.
If I were a state, there are so, so many ways of raising money rather easily (hundreds of thousands of dollars might seem like a lot, but not for a state), plus you are forgetting that they went in manually and talked to real people.
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single family of answers, or a broad class of different ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
In other words, it all looks easy in hindsight only.
Programming languages already take lots of decisions implicitly and explicitly on one’s behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point, one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already - to a larger extent than cookie cutter repos, but none of them can solve IRL details that are found through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
> perhaps it's not about escaping all the details, just the irrelevant ones
But that's the hard part. You have to explore the details to determine whether they need to be included or not. You can't just know right off the bat. Doing so contradicts the premise. You cannot determine that a detail isn't important unless you get detailed. If you only care about a few grains of sand in a bucket, you still have to search through a bucket of sand for those few grains.
The thing about important details is that what ultimately matters is getting them right eventually, not necessarily the first time around. The real cost limiting creative and engineering efforts isn't the one of making a bad choice, but that of undoing it. In software development, AI makes even large-scale rewrites orders of magnitude cheaper than they ever were before, which makes a lot more decisions easily undoable in practice, when before that used to be prohibitively costly. I see that as one major way towards enabling this kind of iterative, detail-light development.
> and having a multi-domain expert on call to lean on.
I don't feel like this is an accurate description. My experience is that LLMs have a very large knowledge base but that getting them to go in depth is much more difficult. But we run into the same problem... how do you evaluate that which you are not qualified to evaluate? It is a grave mistake to conflate "domain expert" with "appears to know more than me". It doesn't matter if it is people or machines, it is a mistake. It's how a lot of con artists work, and we've all seen people in high positions and been left wondering how in the world they got there.
> The real cost limiting creative and engineering efforts isn't the one of making a bad choice, but that of undoing it.
Weird reasoning... because I agree, and this is the exact reason I find LLMs painful to work with. They dump code at you rather than tightening it up, making it clear and elegant. Code is just harder to rebase or simplify when there are more lines. Writing lines has never been and never will be the bottleneck, because the old advice still holds: if you're doing things over and over again, you're doing it wrong. One of the key things that makes programming so amazing is that you can abstract out repetitive tasks, even when there is variation. Repetition and replication only make code harder to debug and harder to "undo bad choices" in.

Also, in my experience it is difficult to get LLMs to simplify, even when explicitly instructing them to, pointing them at specific functions, and giving strong hints about exactly what needs to be done. They promptly tell me how smart I am and then fail to do any of the actual abstraction. Code isn't useful when you have the same function written in 30 different places across 20 different files. That makes it much harder to back out of decisions. They're good at giving a rough sketch, but it still feels reckless to me to let them write into the codebase where they are creating this tech debt.
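As a toy illustration of the kind of abstraction I mean (the directory names are made up): the copy-pasted block on top is what I tend to get back, the parameterised helper below it is what I actually want, because there is exactly one place to read, change, and debug.

    # What I tend to get back: the same steps duplicated per target.
    mkdir -p backups/api  && cp -r services/api  "backups/api/$(date +%F)"
    mkdir -p backups/web  && cp -r services/web  "backups/web/$(date +%F)"
    mkdir -p backups/jobs && cp -r services/jobs "backups/jobs/$(date +%F)"

    # What I actually want: the variation captured as a parameter.
    backup() {
      local name="$1"
      mkdir -p "backups/$name"
      cp -r "services/$name" "backups/$name/$(date +%F)"
    }

    for svc in api web jobs; do
      backup "$svc"
    done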
OK, for me it is the last 10% that is of any interest whatsoever. And I think that has been the case with any developer I've ever worked with I consider to be a good developer.
OK the first 90% can have spots of enjoyment, like a nice gentle Sunday drive stopping off at Dairy Queen, but it's not normally what one would call "interesting".
Now, I do agree with you, and this is why I feel like AI can be good for prototyping or for internal use cases. Want to try out some idea? Sure, use it. Have a website which sucks and want to quickly spin up an alternative for personal use? Go for it, maybe even publish it to the web as open source.
Take feedback from people if they give any and run with it. So in essence, prototyping's pretty cool.
But whenever I wish to monetize, or even entertain the idea of monetizing, I feel like we can take some of the design ideas or experimentation and then just write the thing ourselves. My position is simple: I don't want to pay for some service that is AI slop. At that point, just share the prompt with us.
So at that point, just rewrite the code and actually learn what you are talking about. For example, I recently prototyped a simple Firecracker SSH thing using the gliderlabs/ssh Go package. I don't know how the AI code works; I just built it for my own use case. But if I ever (someday) try to monetize it in any sense, rest assured I will learn how gliderlabs/ssh works to its core and build it all by hand.
TLDR: AI is good for prototyping, but once you've got the idea (or more ideas on top of it), try to rewrite it with your own understanding. As others have said, you won't understand the AI code, and you'll spend 99% of your time on the 1% the AI can't do. At that point, why not just rewrite?
Also, if you rewrite, I feel like most people will be chill about buying, even anti-AI people. Like, sure, use AI for prototypes, but give me code that I can verify, that you wrote and understand to its core, and that you stand behind 100%.
If you are really into software projects for sustainability, you are gonna anger a crowd for no reason & have nothing beneficial come out of it.
So I think kind of everybody knows this but still AI gets to production because sustainability isn't the concern.
This is the cause: sustainability just straight up isn't the concern.
If you have VCs who want you to add hundreds of features, or want you to use AI or have AI integration or something (which I don't think every company or its creators should be interested in unless necessary), and those VCs are in it only for 3-5 years and might want to dump you or enshittify you short term for their own gains, I can see why sustainability stops being a concern and we get to where we are.
Another group most interested is the startup-entrepreneur hustle-culture crowd, who have a VC-like culture as well, where sustainability just doesn't matter.
I hope I am not blanket-blaming these groups, because sure, some are exceptions, but I am just pointing out how the incentives aren't aligned, how these groups will likely end up shipping 90% AI slop, and how that's what we see in practice at most companies.
I do feel like we need to boost more companies that are in it for the long run with sustainable practices, and people/indie businesses that are in it because they are passionate about some project (usually because they faced the problem themselves, or out of curiosity), because we as consumers hold an incentive stick as well. I hope some movement can spring up that captures this nuance, because I am not completely anti-AI, but not exactly pro either.
The problem with LLMs is that it is not only the "irrelevant details" that are hallucinated. It is also "very relevant details" which either make the whole system inconsistent or full of security vulnerabilities.
But if it's security critical? You'd better be touching every single line of code and you'd better fully understand what each one does, what could go wrong in the wild, how the approach taken compares to best practices, and how an attacker might go about trying to exploit what you've authored. Anything less is negligence on your part.
Which seems like an apt analogy for software. I see people all the time who build systems and they don't care about the details. The results are always mediocre.
I think this is a major point people do not mention enough during these debates on "AI vs Developers": the business/stakeholder side is completely fine with average and mediocre solutions as long as those solutions are delivered quickly and priced competitively. They will gladly use a vibecoded solution if it kinda sorta mostly works. They don't care about security, performance, or completeness... such things are to be handled when/if they reach the user/customer in significant numbers.

So while we (the devs) are thinking back to all the instances we used gpt/grok/claude/... and not seeing how the business could possibly arrive at our solutions just with AI and without us in the loop, the business doesn't know any of the details, nor does it care. When it comes to anything IT related, your typical business doesn't know what it doesn't know, which makes it easy to fire employees/contractors for redundancy first (because we have AI now) and ask questions later (uhh... because we have AI now).
> Writing is nature's way of letting you know how sloppy your thinking is.
— Richard Guindon
This is certainly true of writing software.
That said, I am assuredly enjoying trying out artificial writing and research assistants.
Of course you can. The way the manager ignores the details when they ask the developer to do something, the same way they can when they ask the machine to do it.
Yes, it has nothing to do with dev specifically; dev "just" happens to be a way to do so while being text based, which is the medium of LLMs. What also "just" happens to be convenient is that dev is expensive, so if a new technology might help make something possible and/or make it inexpensive, it's potentially a market.
Now, pesky details like actual implementation? Who's got time for that; it's just a few more trillion away.
> The details are what stops it from working in every form it's been tried.
Since the author was speaking to business folk, I would argue that their dream is cheaper labor, or really just managing a line item in the summary budget. As evidenced by outsourcing efforts. I don't think they really care about how it happens - whether it is manifesting things into reality without having to get into the details, or just a cheaper human. It seems to me that the corporate fever around AI is simply the prospect of a "cheaper than human" opportunity.
Although, to your point, we must await AGI, or get very close to it, to be able to manifest things into reality without having to get into the details :-)
While I agree with this, I think that it’s important to acknowledge that even if you did everything well and thought of everything in detail, you can still fail for reasons that are outside of your control. For example, a big company buying from your competitor who didn’t do a better job than you simply because they were mates with the people making the decision… that influences everyone else and they start, with good reason, to choose your competitor just because it’s now the “standard” solution, which itself has value and changes the picture for potential buyers.
In other words, being the best is no guarantee of success.
It's basically this:
"I'm hungry. I want to eat."
"Ok. What do you want?"
"I don't know. Read my mind and give me the food I will love."
They want to be seen as competent without the pound of flesh that mastery entails. But AI doesn’t level one’s internal playing field.
For two almost identical problems with only a little difference between them, the solutions can be radically different in complexity, price, and time to deliver.
So it is not that details don't matter, but that now people can easily transfer certain know-how from other great minds. Unfortunately (or fortunately?), most people's jobs are learning and replicating know-how from others.
Now, if you want to use the dashboard to do something else really brilliant, it is good enough as a means. Just make sure the dashboard is not the end.
Especially in web, boilerplate/starters/generators that do exactly what you want with little to no code or familiarity have been the norm for at least a decade. This is the lifeblood of repos like npm.
What we have is better search for all this code and documentation that was already freely available and ready to go.
You can argue about security, reliability, and edge cases, but it's not as if human devs have a perfect record there.
Or even a particularly good one.
What are those execs bringing to the table, beyond entitlement and self-belief?
The status quo, which always requires an order of magnitude more effort to overcome. There's also a substantial portion of the population that needs well-defined power hierarchies to feel psychologically secure.
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra complex, domain specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.
I'm not an AI skeptic by any means, and use it everyday at my job where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.
You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has been in tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), and the tools which have positioned themselves to replace them have failed (Devin).
I responded to another person in this thread and it’s the same response I would throw at you. You can read that as well.
Your "historical trend" is just applying an analogy and thinking that an analogy can take the place of reasoning. There are about a thousand examples of careers where automation technology increased the need for human operators and thousands of examples where automation eliminated human operators. Take pilots, for example. Automation didn't lower the need for pilots. Take intellisense and autocomplete... that didn't lower the demand for programmers.
But then take a look at Waymo. You have to be next level stupid to think that ok, cruise control in cars raised automation but didn’t lower the demand for drivers… Therefore all car related businesses including Waymo will always need physical drivers.
As anyone is aware… this idea of using analogy as reasoning fails here. Waymo needs zero physical drivers thanks to automation. There is zero demand here and your methodology of reasoning fails.
Analogies are a form of manipulation. They only help allow you to elucidate and understand things via some thread of connection. You understand A therefore understanding A can help you understand B. But you can’t use analogies as the basis for forecasting or reasoning because although A can be similar to B, A is not in actuality B.
For AI coders it’s the same thing. You just need to use your common sense rather than rely on some inaccurate crutch of analogies and hoping everything will play out in the same way.
If AI becomes as good and as intelligent as a human SWE, then your job is going out the fucking window, replaced by a single Prompter. That's common sense.
Look at the actual trendline of the actual topic: AI taking over our jobs and not automation in other sectors of engineering or other types of automation in software. What happened with AI in the last decade? We went from zero to movies, music and coding. What does your common sense tell you the next decade will bring?
If the improvement of AI from the last decade keeps going or keeps accelerating, the conclusion is obvious.
Sometimes the delusion a lot of swes have is jarring. Like literally if AGI existed thousands of jobs will be displaced. That’s common sense, but you still see tons of people clinging to some irrelevant analogy as if that exact analogy will play out against common sense.
My argument isn't an analogy - it's an observation based on the trajectory of SWE employment specifically. It's you who's trying to reason about what's going to happen with software based on what happened to three-field crop rotation or whatever, not me.
I argued that a developer today is 1000x more effective than in the days of punch cards, yet we have 1000x more developers today. Not only that, this correlation tracked fairly linearly throughout the last many decades.
I would also argue that the productivity improvement between FORTRAN and C, or between C and Python was much, much more impactful than going from JavaScript to JavaScript with ChatGPT.
Software jobs will be redefined, they will require different skill sets, they may even be called something else - but they will still be there.
Bro, I offered you analogies to show you how it's IRRELEVANT. The point was to show you how it's an ineffective form of reasoning by demonstrating its ineffectiveness FOR YOUR conclusion, because using this reasoning can allow you to conclude the OPPOSITE. Assuming this type of reasoning is effective means BOTH what I say is true and what you say is true, which leads to a logical contradiction.
There is no irony, only misunderstanding from you.
>I argued that a developer today is 1000x more effective than in the days of punch cards, yet we have 1000x more developers today. Not only that, this correlation tracked fairly linearly throughout the last many decades.
See here, you're using an analogy and claiming it's effective. To which I would typically offer you another analogy that shows the opposite effect, but I feel it would only confuse you further.
>Software jobs will be redefined, they will require different skill sets, they may even be called something else - but they will still be there.
Again, you believe this because of analogies. I recommend you take a stab at my way of reasoning. Try to arrive at your own conclusion without using analogies.
I'm yet to be convinced of this. I keep hearing it, but every time I look at the results they're basically garbage.
I think LLMs are useful tools, but I haven't seen anything convincing that they will be able to replace even junior developers any time soon.
What does common sense tell you the next decade will bring? Does the trendline predict flat lining that LLMs or AI in general won’t improve? Or will the trendline continue like most trendlines typically trend on doing? What is the most logical conclusion?
I have no doubt LLMs will continue to improve, but no idea at what rate and what the limit will be.
> When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?
(Honestly, your comments read suspiciously like they were LLM-generated, as others have mentioned. It's like you're jumping on specific keywords and producing the most probable tokens without any thought about what you're saying. I'll give you the benefit of the doubt for one more reply, though.)
To be fair, I think this new technology is fundamentally different from all previous attempts at abstracting software development. And I agree with you that past failures are not necessarily indicative that this one will fail as well. But it would be foolish to conclude anything about the value of this technology from the current state of the industry, when it should be obvious to anyone that we're in a bull market fueled by hype and speculation.
What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, actual value of the technology became evident, and it turns out that none of the promises of social media became true. Quite the opposite, in fact. That early optimism vanished along the way.
The same thing happened with skepticism about the internet being a fad, that e-commerce would never work, and so on. Both groups were wrong.
> What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation. At that point, the disagreement stops being about evidence and starts looking like bias.
Skepticism and belief are not binary states, but a spectrum. At extreme ends there are people who dismiss the technology altogether, and there are people who claim that the technology will cure diseases, end poverty, and bring world prosperity[1].
I think neither of these viewpoints is worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.
[1]: https://ai-2027.com/
Let me help you untangle the confusion. Historical data on other phenomena is not a trendline for AI taking over your job. It's a typical logical mistake people make. It's reasoning via analogy: because this trend happened for A, and A fits B like an analogy, therefore what happened to A must happen to B.
Why is that stupid logic? Because there are thousands of things that fit B as an analogy. And out of those thousands of things that fit, some failed and some succeeded. What you're doing, without realizing it, is SELECTIVELY picking the analogy you like to use as evidence.
When I speak of a trendline, it's dead simple: literally look at AI as it is now and as it was in the past, and use that to project into the future. Look at exact data on the very thing you are measuring, rather than trying to graft some analogous thing onto the current thing and make a claim from that.
>What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, actual value of the technology became evident, and it turns out that none of the promises of social media became true. Quite the opposite, in fact. That early optimism vanished along the way.
Again same thing. The early days of the internet is not what's happening to AI currently. You need to look at what happened to AI and software from the beginning to now. Observe the trendline of the topic being examined.
>I think neither of these viewpoints are worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.
Well if you look at the pace and progress of AI, the quantitative evidence points against your middle ground opinion here. It's fashionable to take the middle ground because moderates and grey areas seem more level headed and reasonable than extremism. But this isn't really applicable to reality is it? Extreme events that overload systems happen in nature all the time, taking the middle ground without evidence pointing to the middle ground is pure stupidity.
So all you need to look at is this: in the past decade, look at the progress we've made until now. A decade ago, AI via ML was non-existent. Now AI generates movies, music, and code, and unlike AI in music and movies, the code is actually being used by engineers.
That's ZERO to coding in a decade. What do you think the next decade will bring. Coding to what? That is reality and the most logical analysis. Sure it's ok to be a skeptic, but to ignore the trendline is ignorance.
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
You first, lol.
> This is how self delusion works
Yeah, about that...
“You first, lol” isn’t a rebuttal either. It’s an evasion. The claim was not “the labor market has already flipped.” The claim was that AI-assisted coding has changed individual leverage, and that extrapolating that change leads somewhere uncomfortable. Demanding proof that the future has already happened is a category error, not a clever retort.
And yes, the self-delusion paragraph clearly hit, because instead of addressing it, you waved vaguely and disengaged. That’s a tell. When identity is involved, people stop arguing substance and start contesting whether evidence is allowed to count yet.
Now let’s talk about evidence, using sources who are not selling LLMs, not building them, and not financially dependent on hype.
Martin Fowler has explicitly written about AI-assisted development changing how code is produced, reviewed, and maintained, noting that large portions of what used to be hands-on programmer labor are being absorbed by tools. His framing is cautious, but clear: AI is collapsing layers of work, not merely speeding up typing. That is labor substitution at the task level.
Kent Beck, one of the most conservative voices in software engineering, has publicly stated that AI pair-programming fundamentally changes how much code a single developer can responsibly produce, and that this alters team dynamics and staffing assumptions. Beck is not bullish by temperament. When he says the workflow has changed, he means it.
Bjarne Stroustrup has explicitly acknowledged that AI-assisted code generation changes the economics of programming by automating work that previously required skilled human attention, while also warning about misuse. The warning matters, but the admission matters more: the work is being automated.
Microsoft Research, which is structurally separated from product marketing, has published peer-reviewed studies showing that developers using AI coding assistants complete tasks significantly faster and with lower cognitive load. These papers are not written by executives. They are written by researchers whose credibility depends on methodological restraint, not hype.
GitHub Copilot’s controlled studies, authored with external researchers, show measurable increases in task completion speed, reduced time-to-first-solution, and increased throughput. You can argue about long-term quality. You cannot argue “no evidence” without pretending these studies don’t exist.
Then there is plain, boring observation.
AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code. These were not side chores. They were how junior and mid-level engineers justified headcount. That work is disappearing as a category, which is why junior hiring is down and why backfills quietly don’t happen.
You don’t need mass layoffs to identify a structural shift. Structural change shows up first in roles that stop being hired, positions that don’t get replaced, and how much one person can ship. Waiting for headline employment numbers before acknowledging the trend is mistaking lagging indicators for evidence.
If you want to argue that AI-assisted coding will not compress labor this time, that’s a valid position. But then you need to explain why higher individual leverage won’t reduce team size. Why faster idea-to-code cycles won’t eliminate roles. Why organizations will keep paying for surplus engineering labor when fewer people can deliver the same output.
But “there is no evidence” isn’t a counterargument. It’s denial wearing the aesthetic of rigor.
I treated it with the amount of seriousness it deserves, and provided exactly as much evidence as you did lol. It's on you to prove your statement, not on me to disprove you.
Also, you still haven't provided the kind of evidence you say is necessary. None of the "evidence" you listed is actually evidence of mass change in engineering.
> AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code.
You are not a professional engineer lol, because most of those things are already automated and have been for decades. What on earth do you think we do every day?
Also, saying “this has been automated for decades” is only persuasive if those automations ever removed headcount. They didn’t. This does. Quietly. At the margin. That’s why you’re arguing semantics instead of metrics.
And the “you’re not a professional engineer” line is pure tell. People reach for status policing when the substance gets uncomfortable. If the work were as untouched as you imply, there’d be no need to defend it this hard.
And sure it did. We don't have test engineers or QA nearly as much as we used to, and a lot of IT work is automated, too.
Do you think we sit around, artisanally crafting code the slow way, or something?
Personally I think being a swe is easy. It’s one of the easiest “engineering” skills you can learn hence why you have tons of people learning by themselves or from boot camps while other engineering fields require much more rigor and training to be successful. There’s no bootcamp to be a rocket engineer and that’s literally the difference.
The confidence you have here, and how completely off base your intuition about who I am is, is just evidence of how wrong you are everywhere else. You should take that to heart. Everything we are talking about is speculation, but your idiotic statements about me are on-the-ground evidence of how wrong you can be. Why would anyone trust your speculation about AI given how wildly wrong you are in "clocking" me?
> Do you think we sit around, artisinally crafting code the slow way, or something?
This statement is just dripping with raw arrogance. It’s insane, it just shows you think you’re better than everyone because you’re a swe. Let me get one thing straight, I’m a swe with tons of experience (likely more than you) and I’m proud of my technical knowledge and depth, but do I think that other “non swes” just look at us as if we are artisans? That’s some next level narcissism. It’s a fucking job role bro, we’re not “artisans” and nobody thinks of us that way, get off your high horse.
Also wtf do you mean by the “slow” way? Do you have communication issues? Not only will a non swe not understand you but even a swe doesn’t have a clue what the “slow” way means.
>We don't have test engineers or QA nearly as much as we used to, and a lot of IT work is automated, too.
Oh like automated testing or infra as code?? Ooooh such a great engineer you are for knowing these unknowable things that any idiot can learn. Thanks for letting me know a lot of IT work is “automated.” This area is one of the most mundane areas of software engineering, a bunch of rote procedures and best practices.
Also your “my dude” comments everywhere make you look not as smart as you probably think you look. Just some advice for you.
Good day to you sir.
Another lesson history has taught us though, is that people don't defend narratives, they defend status. Not always successfully. They might not update beliefs, but they act effectively, decisively and sometimes brutally to protect status. You're making an evolutionary biology argument (which is always shady!) but people see loss of status as an existential threat, and they react with anger, not just denial.
This seems extreme and obviously incorrect.
COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
When LLMs first showed up publicly it was a huge leap forward, and people assumed it would continue improving at the rate they had seen but it hasn't.
How do you know that? For tech products most of the users are also technically literate and can easily use Claude Code or whatever tool we are using. They easily tell CC specifically what they need. Unless you create social media apps or bank apps, the customers are pretty tech savvy.
With AI, probably you don’t need 95% of the programmers who do that job anyway. Physicists who know the algorithm much better can use AI to implement a majority of the system and maybe you can have a software engineer orchestrate the program in the cloud or supercomputer or something but probably not even that.
Okay, the idea I was trying to get across before I rambled was that many times the customer knows what they want very well and much better than the software engineer.
But I'm happy about this. I'm not that interested in or optimistic about AGI, but having increasingly great tools to do useful work with computers is incredible!
My only concern is that it won't be sustainable, and it's only as great as it is right now because the cost to end users is being heavily subsidized by investment.
Maybe you already understood this, but many of the "AI boosters" you refer to genuinely believe we have "seen the start of it".
Or at least they appear to believe it.
Have you ever paid for software? I have, many times, for things I could build myself
Building it yourself as a business means you need to staff people, taking them away from other work. You need to maintain it.
Run even conservative numbers for it and you'll see it's pretty damn expensive if humans need to be involved. It's not the norm that that's going to be good ROI
No matter how good these tools get, they can't read your mind. It takes real work to get something production ready and polished out of them
At my company, we call them technical business analysts. Their director was a developer for 10 years, and then skyrocketed through the ranks in that department.
AI usage in coding will not stop ofc but normal people vibe coding production-ready apps is a pipedream that has many issues independent of how good the AI/tools are.
https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
I'm not sure how well that would work in practice, nor why such an approach is not used more often than it is. But yes the point is that then some humans would have to write such tests as code to pass to the AI to implement. So we would still need human coders to write those unit-tests/specs. Only humans can tell AI what humans want it to do.
Unit tests are the correct tool when you are going from an almost-correct implementation to a correct one, which is hard because it means driving the failure rate to zero, and the lower you go the harder it gets to reduce it further. But when your constraint is not an infinitesimally small failure rate but reaching expressiveness fast, then a naive implementation or a mathematical model is a much denser representation of the information, and thus easier to generate. In practical terms, it is much easier to encode the slightly incorrect preconception you have in your mind than to try to enumerate all the cases in which a statistically generated system might deviate from the preconception you already had in your head.
An exhaustive set of use cases to confirm vibe AI generated apps would be an app by itself. Experienced developers know what subsets of tests are critical, avoiding much work.
And they do know this for programs written by other experienced developers, because they know where to expect "linearity" and where to expect steps in the output function. (Testing 0, 1, 127, 128, 255 is important; 89 and 90 likely not, unless that's part of the domain knowledge.) This is not necessarily correct for statistically derived algorithm descriptions.
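A concrete illustration of that boundary intuition, using a made-up clamp-to-byte helper (everything here is hypothetical, just to show the shape of such tests): the tests go at the steps of the output function, not in its flat middle.

```python
import pytest

def clamp_to_byte(x: int) -> int:
    """Hypothetical helper: clamp an integer into the 0..255 range."""
    return max(0, min(255, x))

# The interesting behaviour sits at the edges (0/255 and just beyond),
# not somewhere in the linear middle like 89 or 90.
@pytest.mark.parametrize("value, expected", [
    (-1, 0), (0, 0), (1, 1),
    (127, 127), (128, 128),
    (255, 255), (256, 255),
])
def test_clamp_boundaries(value, expected):
    assert clamp_to_byte(value) == expected
```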
a) Testing that the spec is implemented correctly, OR
b) As the Spec itself, or part of it.
I know people have different views on this, but if unit-tests are not the spec, or part of it, then we must formalize the spec in some other way.
If the Spec is not written in some formal way then I don't think we can automatically verify whether the implementation implements the spec, or not. (that's what the cartoon was about).
For most projects, the spec is formalized in formal natural language (like any other spec in other professions) and that is mostly fine.
If you want your unit tests to be the spec, as I wrote in https://news.ycombinator.com/item?id=46667964, there would be quite A LOT of them needed. I'd rather learn to write proofs than try to exhaustively list a (near) infinite number of input/output combinations. Unit tests are simply the wrong tool, because they amount to taking excerpts from the library of all possible books. I don't think that is what people mean with e.g. TDD.
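To put a back-of-the-envelope number on how hopeless exhaustive enumeration is: even a pure function of two 32-bit integers already has

```latex
2^{32} \times 2^{32} = 2^{64} \approx 1.8 \times 10^{19}
```

possible inputs, so any practical test suite is an infinitesimal excerpt of the space. Tests can only ever sample it, which is why they work better as verification probes than as a complete spec.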
What the cartoon is about is that any formal(-enough) way to describe program behaviour will just be yet another programming tool/language. If you have some novel way of program specification, someone will write a compiler and then we might use it, but it will still be programming and LLMs ain't that.
The problem I see is how to evolve such a prototype to more correct specs, or changed specs in the future, because AI output is non-deterministic -- and "vibes" are ambiguous.
Giving AI more specs or modified specs means it will have to re-interpret the specs and since its output is non-deterministic it can re-interpret viby specs differently and thus diverge in a new direction.
Using unit tests as (at least part of) the spec would be a way to keep the spec stable and unambiguous. If AI is re-interpreting vibey, ambiguous specs, then the specs are unstable, which means the final output has a hard time converging to a stable state.
I've asked this before, not knowing much about AI-assisted software development: is there an LLM that, given a set of unit tests, will generate an implementation that passes those unit tests? And is such a practice commonly used in the community, and if not, why not?
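Mechanically, the loop is easy to sketch. Everything below is hypothetical: `generate` stands in for whatever model or API you use, and it assumes the test file imports from the implementation file. Agentic tools like Claude Code effectively run a loop of this shape when handed a failing test suite and asked to make it green.

```python
import subprocess
from pathlib import Path
from typing import Callable

def implement_from_tests(generate: Callable[[str], str],
                         spec_prompt: str,
                         test_file: str,
                         impl_file: str = "impl.py",
                         max_iters: int = 5) -> bool:
    """Treat the unit tests as the spec: regenerate until they pass."""
    feedback = ""
    for _ in range(max_iters):
        code = generate(spec_prompt + feedback)   # your LLM call of choice
        Path(impl_file).write_text(code)          # tests are assumed to import from here
        run = subprocess.run(["python", "-m", "pytest", test_file, "-q"],
                             capture_output=True, text=True)
        if run.returncode == 0:
            return True                           # all tests pass
        # Feed the failures back so the next attempt targets them.
        feedback = "\n\nThese tests failed; fix the implementation:\n" + run.stdout[-2000:]
    return False
```

The catch, as the rest of this thread points out, is that the tests only pin down the behaviour you thought to enumerate; everything else is still up to the model.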
Just today, I needed a basic web application, the sort of which I can easily get off the shelf from several existing vendors.
I started down the path of building my own, because, well, that's just what I do, then after about 30 minutes decided to use an existing product.
I have hunch that, even with AI making programming so much easier, there is still a market for buying pre-written solutions.
Further, I would speculate that this remains true of other areas of AI content generation. For example, even if it's trivially easy to have AI generate music per your specifications, it's even easier to just play something that someone else already made (be it human-generated or AI).
What if AI brings the China situation to the entire world? Would the mentality shift? You seem to be basing it on the cost-benefit calculations of companies today. Yes, SaaS makes sense when your own developers (many of whom may be mediocre) are so expensive that it makes more sense to just pay a company that has already gone through the work of finding good developers and spent the capital to build a decent version of what you're looking for. Compare that to a scenario where the cost of a good developer has fallen dramatically, so you can now produce the same results with far less money: a cheap developer (good or mediocre, it doesn't matter) guiding an AI. That cheap developer does not even have to be in the US.
At the high end, China pays SWEs better than South Korea, Japan, Taiwan, India, and much of Europe, so they attract developers from those locations. At the low end, they have a ton of low- to mid-tier developers from 3rd-tier+ institutions who can hack well enough. It is sort of like India: skilled people with credentials to back it up can do well, but there are tons of lower-skilled people with some ability who are relatively cheap and useful.
China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
I hear those other Asian countries are just like China in terms of adoption.
>China is going big into local LLMs, not sure what that means long term, but Alibaba's Qwen is definitely competitive, and its the main story these days if you want to run a coding model locally.
It seems like China's strategy of low-cost LLMs applied pragmatically to all layers of the country's "stack" is the better approach, at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails, well, I guess the Chinese were right after all. If it succeeds, well, I don't know what will happen then.
> It seems like the China's strategy of low cost LLM applied pragmatically to all layers of the stack is the better approach at least right now. Here in the US they are spending every last penny to try and build some sort of Skynet god. If it fails well I guess the Chinese were right after all. If it succeeds well, I don't know what will happen then.
China lacks those big NVIDIA GPUs that were sanctioned and are now export-tariffed, so going with smaller models that could run on hardware they could access was the best move for them. This could either work out (local LLM computing is the future, and China is ahead of the game by circumstance) or not (big server-based LLMs are the future and China is behind the curve). I think the Chinese government would actually have preferred centralized control and censorship, but the current situation is that the Chinese models are the most uncensored you can get these days (with some fine tuning, they are heavily used in the adult entertainment industry... haha, socialist values).
I wouldn't trust the Chinese government to not do Skynet if they get the chance, but Chinese entrepreneurs are good at getting things done and avoiding government interference. Basically, the world is just getting lucky by a bunch of circumstances ATM.
I would agree that if the scenario is a business choosing between buying an off-the-shelf software solution and paying a small team to develop it, and the off-the-shelf solution is priced high enough, then having it custom built with AI (maybe still with a tiny number of developers involved) could end up being the better choice. It really all depends on the details.
Historically, it would seem that often lowering the amount of people needed to produce a good is precisely what makes it cheaper.
So it’s not hard to imagine a world where AI tools make expert software developers significantly more productive while enabling other workers to use their own little programs and automations on their own jobs.
In such a world, the number of “lines of code” being used would be much greater than today.
But it is not clear to me that the amount of people working full time as “software developers“ would be larger as well.
Not automatically, no.
How it affects employment depends on the shapes of the relevant supply/demand curves, and I don't think those are possible to know well for things like this.
For the world as a whole, it should be a very positive thing if creating usable software becomes an order of magnitude cheaper, and millions of smart people become available for other work.
Counter argument - if what you say is true, we will have a lot more custom & personalized software, and the tech stacks behind it may be even more complicated than they currently are, because we now want to add LLMs that can talk to our APIs. We might also be adding multiple LLMs to our back ends to do things as well. Maybe we're replacing 10 developers, but now someone has to manage that LLM infrastructure too.
My opinion will change by tomorrow, but I could see more software being built by people who are currently experts in other domains. I can also see software engineers focusing more on keeping the new, more complicated architecture from falling apart and on enforcing tech standards. Our roles may become more about infra & security: fewer features, more stability & security.
Hmm, doesn't outsourcing contradict the Jevons paradox?
That's completely disconnected from whether software developer salaries decrease or not, or whether the software developer population decreases or not.
The introduction of the loom created many, many more jobs, but these were low-paid jobs that demanded little skill.
All automation you can point to in history resulted in operators needing less skill to produce, which results in less pay.
There is no doubt (i.e. I have seen it) that lower-skilled folk are absolutely going to crush these elitist developers who keep going on about how they won't be affected by automated code generation, that it will only be the devs doing unskilled mechanical work.
Sure - because prompting requires all that skill you have? Gimme a break.
At some point the low-hanging automation fruit gets tapped out. What can be put online that isn't there already? Which business processes are obviously going to be made an order of magnitude more efficient?
Moreover, we've never had more developers and we've exited an anomalous period of extraordinarily low interest rates.
The party might be over.
I was working in developer training for a while some 5-10 years back, and already then I was starting to see signs of incoming over-saturation; the low interest rates probably masked much of it, with happy-go-lucky investments sucking up developers.
Low-hanging, cheap automation work is quickly dwindling now, especially as development firms search out new niches while the big "in-IT" customers aren't buying services inside the industry.
Luckily people will retire, and young people probably aren't as bullish about the industry anymore, so we'll probably land in an equilibrium. The question is how long it'll take, because the long tail of things enabled by the mobile/tablet revolution is starting to be claimed.
The job is literally building automation.
There is no equivalent to "working on the assembly line" as an SWE.
>Not so many lower skill line worker jobs in the US any more, though
Because Globalization.
ooohhh I think I missed the intent of the statement... well done!
I think it's a reasonable hypothesis that if writing software cost, say, 20% of what it costs today, the amount of software written would be at least 5x what we currently produce.
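For what it's worth, the back-of-the-envelope arithmetic behind that threshold: with total spend S = p * q (price per unit of software times quantity produced), the 20%-cost / 5x-volume point is exactly break-even for total spend,

```latex
S' = (0.2\,p)(5\,q) = p\,q = S,
```

i.e. unit elasticity in log terms, since \(\ln 5 / \ln 5 = 1\). Anything beyond 5x would mean the world spends more on software overall, not less.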
I get your point, hope you get mine: we have less legal entities operating as "farms". If vibe coding makes you a "developer", working on a farm in an operating capacity makes you a "farmer". You might profess to be a biologist / agronomist, I'm sure some owners are, but doesn't matter to me whether you're the owner or not.
The numbers of nonsupervisory operators in farming activities have decreased using the traditional definitions.
You aren't going to do that to AI systems. If, after a couple of weeks, you hit the limit of what the AI could do in a million+ LoC, you aren't going to be able to hire a human dev to modify or replace that system for you, because:
1. Humans are going to need ramp-up time, and that's damn costly (even more costly when there are fewer of them).
2. Where are you going to find humans who can actually code anymore if everyone has been doing this for the last 10 years?
Look, I dunno what they will do, but these options are certainly off the table:
1. Get a temp dev/team in to patch a 1M SLoC mess
2. Do it cost-effectively.
If the tech has improved by the time this happens (I mean, we're nowhere near this scenario yet, and it has already plateaued) then perhaps they can get the LLM itself to simply rewrite it instead of spending all those valuable tokens reading it in and trying to patch it.
If the tech is not up to it, then their options are effectively:
1. Use it as is till the end of time
2. Throw it out, and start again
3. Pray
That's not the case for IT where entry barrier has been reduced to nothing.
The craftsmen who were forced into the factories were not paid more, or better off.
There are not going to be more software engineers in the future than there are now, at least not in what would be recognizable as software engineering today. I could see there being vastly more startups with founders as agent orchestrators, and many more CTO jobs. But there is no way there will be many more of the 2026 version of software engineering jobs at S&P 500 companies. That seems borderline delusional to me.
Doesn't mean it will happen this time (i.e. if AI truly becomes what was promised) and actually it's not likely it will!
> AI changes how developers work rather than eliminating the need for their judgment. The complexity remains. Someone must understand the business problem, evaluate whether the generated code solves it correctly, consider security implications, ensure it integrates properly with existing systems, and maintain it as requirements evolve.
What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
It might be not enough by itself, but it shows that something has changed in comparison with the 70-odd previous years.
Meaningful consequences of mistakes in software don't manifest themselves through compilation errors, but through business impacts which so far are very far outside of the scope of what an AI-assisted coding tool can comprehend.
That is, the problems are a) how to generate a training signal without formally verifiable results, b) hierarchical planning, c) credit assignment in a hierarchical planning system. Those problems are being worked on.
There are some preliminary research results that suggest that RL induces hierarchical reasoning in LLMs.
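To make point (a) concrete by contrast, here is a toy sketch (the helper below is hypothetical, not taken from any particular paper) of the kind of verifiable signal today's code-focused RL setups lean on: execute the generated code against a test suite and use the pass rate as a scalar reward. The open problem is what replaces a function like this when there is nothing to run and check.

```python
import unittest

def pass_rate(test_case: type) -> float:
    """Fraction of a spec's unit tests that pass; usable as a scalar RL reward."""
    suite = unittest.TestLoader().loadTestsFromTestCase(test_case)
    result = unittest.TestResult()
    suite.run(result)
    failed = len(result.failures) + len(result.errors)
    return (result.testsRun - failed) / max(result.testsRun, 1)

# reward = pass_rate(SpecTests)  # 1.0 only when the generated code is fully correct
```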
What previously needed five devs, might be doable by just two or three.
In the article, he says there are no shortcuts to this part of the job. That does not seem likely to be true. The research and thinking through the solution go much faster using AI, compared to before, when I had to look up everything.
In some cases, agentic AI tools are already able to ask the questions about architecture and edge cases, and you only need to select which option you want the agent to implement.
There are shortcuts.
Then the question becomes how large the productivity boost will be and whether the idea that demand will just scale with productivity is realistic.
I think you are basing your reasoning on the current generation of models. But if future generations are able to do everything you've listed above, what work will be left for developers? I'm not saying that we will ever get such models, just that when they appear, they will actually displace developers rather than create more jobs for them. The business problem will be specified by business people, and even if they get it wrong it won't matter, because iteration will be quick and cheap.
> What is your rebuttal to this argument leading to the idea that developers do need to fear for their job security?
The entire argument is based on the assumption that models won't get better and will never be able to do the things you've listed! But once they become capable of those things, what work will be left for developers?
It's extremely hard to define "human-level intelligence" but I think we can all agree that the definition of it changes with the tools available to humans. Humans seem remarkably suited to adapt to operate at the edges of what the technology of time can do.
It required a ton of people of ordinary intelligence doing routine work (see Computer (occupation)). On the other hand, I don't think anyone has seriously considered replacing, say, von Neumann with a large collective of laypeople.
I mean they are promising AGI.
Of course in that case it will not happen this time. However, in that case software dev getting automated would concern me less than the risk of getting turned into some manner of office supply.
Imo as long as we do NOT have AGI, software-focused professional will stay a viable career path. Someone will have to design software systems on some level of abstraction.
you mean "created", past tense. You're basically arguing it's impossible for technical improvements to reduce the number of programmers in the world, ever. The idea that only humans will ever be able to debug code or interpret non-technical user needs seems questionable to me.
Also, the percentage of adults working has been dropping for a while. Retirees used to be a tiny fraction of the population; that's no longer the case. People also spend more time being educated, or in prison, etc.
Overall people are seeing a higher standard of living while doing less work.
There are lots of negative reasons for this that aren’t efficiency. Aging demographics. Poor education. Increasing complexity leaves people behind.
So, yes, reasons other than efficiency explain why people aren't working, as well why there are still poor people.
Now we can set arbitrary thresholds for what standard of living every American should have but even knowing people on SNAP it’s not that low.
The cost to participate in society is much greater.
Yeah we do have more cars. But you also need to buy one to go to work.
We have education, but you need 22 years to be employable.
It’s probably not worth continuing the discussion if you don’t believe poverty exists as a concept.
Poverty still exists, but vast inflation of what is considered ‘a basic standard of living’ hides a great deal of progress. People want to redefine illiteracy to mean being unable to use the internet, rather than judging by the standards of the past.
How would you describe the level of wealth of those Americans outside of metro areas?
Yes, but it’s not why there are fewer adults in the workforce.
I actually didn’t say that. And the twisting of words is the source of the confusion.
The first line made me laugh out loud because it made me think of an old boss who I enjoyed working with but who could never really code. This boss was a rock star at the business side of things, and having worked with ABAP in my career, I couldn't ever imagine said person writing code in COBOL.
However, the second line got me thinking. Yes, VB let business users make apps (I made so many forms for fun). But it reminded me how much stuff my boss got done in Excel. A total wizard.
You have a good point in that the stuff keeps expanding, because while not all bosses will pick up the new stack, many ambitious ones will. I'm sure it was the case during COBOL and VB, it was certainly the case when Excel hit the scene, and I suspect that a lot of people will get stuff done with AI that devs used to do.
>But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
Honestly, this is the million-dollar question that is actually being argued back and forth in all these threads. Given a set of requirements, can AI plus a somewhat technically competent business person cover all the things a dev used to take care of? It's possible. I'm wondering whether my boss, who couldn't even tell the difference between React and Flask, could in theory pull it off, possibly with an AI whose context is large enough to overcome those mental-model limitations. It would be an interesting experiment for companies to try out.
I find SQL becomes a "stepping stone" to level up for people who live and breathe Excel (for obvious reasons).
Now was SQL considered some sort of tool to help business people do more of what coders could do? Not too sure about that. Maybe Access was that tool and it just didn't stick for various reasons.
I certainly hope so, but it depends on whether we will have more demand for such problems. AI can code up a complex project by itself because we humans do not care about many of the details. When we marvel that AI generates a working dashboard for us, we are really accepting that someone else has created a dashboard that meets our expectations: the layout, the color, the aesthetics, the way it interacts, the time-series algorithms, etc. We don't care, as it does better than we imagined. This, of course, is inevitable, as many of us spend enormous time implementing what other people have already done. Fortunately or unfortunately, it is very hard for a human to repeat other people's work correctly, but it's a breeze for AI.

The corollary is that AI will replace a lot of the demand for software developers if we don't have big enough problems to solve. In the past 20 years we had the internet, cloud, mobile, and machine learning: all big trends that required millions and millions of brilliant minds. Are we going to have the same luck in the coming years? I'm not so sure.
And that hits the offshoring companies in India and similar countries probably the most, because those can generally only do their jobs well if everything has been specified to the detail.
but the actual work of constructing reliable systems from vague user requirements with an essentially unbounded resource (software) will exist
The skills needed to be a useful horseman, though, have almost nothing to do with the skills needed to be a useful train conductor. Most of the horseman's skills don't really transfer, other than being in the same domain of land travel. The horseman also has the problem of having invested his life and identity into his skill with horses, which massively biases perspective. The person with no experience with horses actually has the huge advantage of a beginner's mind when it comes to land travel at the advent of rail.
The ad nauseam software engineer "horsemen" arguments on this board, that there will always be a need to travel long distances by land, completely miss the point IMO.
Imagine being an engineer educated in multiple instruction sets: when compilers arrive on the scene it sure makes their job easier, but that does not retroactively change their education to suddenly include all the requisite mathematics and domain knowledge of, say, algorithms and data structures.
What is euphemistically described as a "remaining need for people to design, debug and resolve unexpected behaviors" is basically a lie by omission: the advent of AI does not automatically mean previously representative human workers will suddenly possess the higher-level knowledge needed to do that. It takes education to achieve, and no trivial amount of chatbotting will enable displaced human workers to attain that higher level of consciousness. Perhaps it could be attained by designing software that uploads AI skills to humans...
I was imagining companies expanding the features they wanted and was skeptical that would be close to enough, but this makes way more sense
In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.
Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.
Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.
90% of people building whatever junk their company needs do not. I learned this lesson the hard way after working at both large and tiny companies. It's the people who remain in the bubble of places like AWS or GCP, or people doing hard-core research or engineering, that have this mentality. Everyone else eventually learns.
>Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
Excel is the largest development language in the world. Nothing (not Python, VB, Java etc.) can even come close. Why? Because it literally glues the world together. Everything from the Mega Company, to every government agency to even mom & pop Bed & Breakfast operations run on Excel. The least technically competent people can fiddle around with Excel and get real stuff done that end up being critical pathways that a business relies on.
It's hard to quantify, but I am putting my stake in the ground: Excel + AI will probably help fix many (but not all) of those issues you talk about.
The issues I’m talking about are: “we can’t debug kernel issues, so we run 40 pods and tune complicated load-balancer health-check procedures in order for the service to work well”.
There is no understanding that anything is actually wrong, for they think that it is just the state of the universe, a physical law that prevents whatever issue it is from being resolved. They aren’t even aware that the kernel is the problem, sometimes they’re not even aware that there is a problem, they just run at linear scale because they think they must.
BUT
With the arrival of Agentic AI, I've literally seen complete non-coders (copywriter, marketing artist, and a Designer) whip up tooling for themselves that saves them literal days of work every week.
Things that would've been a Big Project in the company, requiring the aforementioned holy quadruple's approval along with tying up precious dev + project management hours.
In the end they're "just" simple tools, simulating or simplifying different processes, but in the way they specifically need it done. All built from scratch in the time it would've taken us to have the requisite meetings for writing the spec for the application and allocating the resources needed - "We have time for this on our team backlog in about 6 months..."
None of them are perfect code, some of them are downright horrible if you look under the hood. But on the other hand they run fully locally, don't touch any external APIs, they just work with the data already on their laptops, but more efficiently than the commercial tools (or Excel).
Zapier, N8N and the like _kinda_ gave people this power, by combining different APIs into workflows. But I personally haven't seen this kind of results from them.
Enter K8s in 2017 and life became MUCH easier. I literally have clusters that have been running since then, with the underlying nodes patched and replaced automatically by the cloud vendor. Deployments also "JustWork", are no downtime, and nearly instant. How many sysadmins are needed (on my side) to achieve all of this, zero. Maybe you're thinking of more complex stateful cases like running DBs on K8s, but for the typical app server workload, it's a major win.
And I’d wager you’ve still got people on staff doing operational work, they just don’t have “sysadmin” in their title anymore. Someone’s managing your K8s manifests, debugging why pods won’t schedule, fixing networking issues when services can’t communicate, handling secrets management, setting up monitoring and alerting. That work didn’t vanish, it just got rebranded. The “DevOps engineer” or “platform engineer” or “SRE” doing that is performing sysadmin work under a different job title.
Managed K8s can absolutely reduce operational overhead compared to hand-rolling everything. But that’s not democratisation, that’s a combination of outsourcing and rebranding. The expertise is still required, you’ve just shifted who pays for it and what you call the people doing it.
The work not done by specialists wouldn't have been done nicely by a specialist anyway; it simply wouldn't get done at all, we just don't have the scale. Of course there's a fine line, and in some cases it produces negative value, but more often than not it's some value, discounted by maintenance, versus zero.
The problem isn’t Excel. It’s trying to get Excel’s accessibility in infrastructure whilst demanding engineering reliability. You cannot have both. Kubernetes won’t accept Excel-style disasters, so it still needs specialists; now specialists who must learn the abstraction and the fundamentals.
You’re right: work not done by specialists often wouldn’t happen at all. That’s the choice. Accept Excel-esque failures for democratisation, or accept expertise is required.
My point is that currently available tools promise both, deliver neither.
There are good signs AI will eliminate whole classes of costly human errors. Whether the new classes of machine-only problems will cost more as models iterate remains to be seen; I think the cost will end up lower. I'm not super optimistic about the socioeconomic future coming from this, but from a pure tech standpoint I'm optimistic about building cost.
Edit: also to address reliability, I think a lot of things are net positive to this world without five 9s, heck even two 9s.
Edit 2: s/building cost/tco
Will insurance policy coverage and premiums change when using non-deterministic software?
I think you’re just seeing popularity.
The extreme popularity and scale of these solutions means more opportunities for problems.
It’s easy to say X is terrible or Y is terrible but the real question is always: compared to what?
If you’re comparing to some hypothetical perfect system that only exists in theory, that’s not useful.
It seems like in the early 2000s every tiny company needed a sysadmin, to manage the physical hardware, manage the DB, custom deployment scripts. That particular job is just gone now.
I can implement zero downtime upgrades easily with Kubernetes. No more late-day upgrades and late-night debug sessions because something went wrong, I can commit any time of the day and I can be sure that upgrade will work.
My infrastructure is self-healing. No more crashed app server.
Some engineering tasks are standardized and outsourced to the professional hoster by using managed services. I don't need to manage operating system updates and some component updates (including Kubernetes).
My infrastructure can be easily scaled horizontally. Both up and down.
I can commit changes to git to apply them or I can easily revert them. I know the whole history perfectly well.
I would have needed to reinvent half of Kubernetes to enable all of that before. I guess big companies just did that; I never had the resources for it. So my deployments were not good: they didn't scale, they crashed, they required frequent manual interventions, and downtimes were frequent. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a slightly higher devops learning curve.
Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.
I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.
The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.
Everything was for sure simpler, but also the requirements and expectations were much, much lower. Tech and complexity moved forward with goal posts also moving forward.
Just one example on reliability: I remember popular websites, with many thousands if not millions of users, that would put up an "under maintenance" page whenever a major upgrade came through, and sometimes close shop for hours. If said maintenance went bad, come back tomorrow, because they weren't coming back up.
Proper HA, backups, monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pet" infrastructure that is now trivialized by Kubernetes were sci-fi for most. Today people consider all of this and a lot more as table stakes.
It's easy to shit on cloud and kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert back 20-30 years, that isn't coming back.
This. In the early 2000s, almost every day after school (3PM ET) Facebook.com was basically unusable. The request would either hang for minutes before responding at 1/10th of the broadband speed at that time, or it would just timeout. And that was completely normal. Also...
- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields
- Between 8-11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected when using dial up Internet. And then you'd need to repeat the arduous sign in dance, waiting for that signature screech that tells you you're connected.
- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google using a computer in the library turning into a 2-5 minute ordeal.
But also and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile and not knowing whether you'll be immediately deafened with loud (blaring) background music and no visible way to stop it.
The pattern repeats because the market incentivizes it. AI has been pushed as an omnipotent, all-powerful job-killer by these companies because shareholder value depends on enough people believing in it, not whether the tooling is actually capable. It's telling that folks like Jensen Huang talk about people's negativity towards AI being one of the biggest barriers to advancement, as if they should be immune from scrutiny.
They'd rather try to discredit the naysayers than actually work towards making these products function the way they're being marketed, and once the market wakes up to this reality, it's gonna get really ugly.
Market is not universal gravity, it's just a storefront for social policy.
No political order, no market, no market incentives.
This is why those same mid level managers and C suite people are salivating over AI and mentioning it in every press release.
The reality is that costs are being reduced by replacing US teams with offshore teams. And the layoffs are being spun as a result of AI adoption.
AI tools for software development are here to stay and accelerate in the coming months and years and there will be advances. But cost reductions are largely realized via onshore/offshore replacement.
The remaining onshore teams must absorb much more slack and fixes and in a way end up being more productive.
Hailing from an outsourcing destination I need to ask: to where specifically? We've been laid off all the same. Me and my team spent the second half of 2025 working half time because that's the proposition we were given.
What is this fabled place with an apparent abundance of highly skilled developers? India? They don't make on average much less than we do here - the good ones make more.
My belief is that spending on staff just went down across the board because every company noticed that all the others were doing layoffs, so pressure to compete in the software space is lower. Also all the investor money was spent on datacentres so in a way AI is taking jobs.
So we will reduce headcount in some countries because of things like (perceived) working culture, and increase based on the need to gain goodwill or fulfil contracts from customers.
This can also mean that the type of work outsourced can change pretty quickly. We are getting rid of most of the "developers" in India, because places like Vietnam and eastern Europe are now less limited by language and are much better to work with. At the same time, we are inventing and outsourcing other activities to India because of a desire to sell in their market.
India based folks cost 50-75% less. I realize that quality India hires would be closer to US rates, but management is ignoring that aspect.
If they're lucky they'll find one solid worker who's going to watch everyone else's hands. I've had one criminally underpaid unofficial[0] tech lead like that. He was herding a team of 11, where like four people at most really cared about the outcome of this project.
[0] Because otherwise a raise would be in order. Can't have that.
Execs know it well enough. It’s true by definition for all cost centers - the only reason to have them is to support sales.
There are a lot of counterexamples throughout history.
Liquid Death sells water for a strangely high amount of money - it's entirely sales/marketing.
International Star Registry gives you a piece of paper and a row in a database that says you own a star.
Many luxury things are just because it's sold by that luxury brand. They are "worth" that amount of money for the status of other people knowing you paid that much for it.
> Many companies aren't selling anything special or are just selling an "idea".
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD104...
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from having to worry about lower-level problems and instead focus on higher-level problems. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
That's not the goal Anthropic's CEO has. Nor any other CEO, for that matter.
It is what he can deliver.
People do and will talk about replacing developers though.
That's not to say developers haven't been displaced by abstraction; I suspect many of the people responsible for re-wiring the ENIAC were completely out of a job when punchcards hit the scene. But their absence was filled by a greater number of higher-level punchcard-wielding developers.
Recognizing the barriers & modes of failure (which will be a moving target) lets you respond competently when you are called. Raise your hourly rate as needed.
I don't think AI will completely replace these jobs, but it could reduce job numbers by a very large amount.
That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
I’m not saying generative AI meets this standard, but it’s different from what you’re saying.
Now I guess you can read the code an LLM generates, so maybe that layer does exist. But, that's why I don't like the idea of making a programming language for LLMs, by LLMs, that's inscrutable by humans. A lot of those intermediate layers in compilers are designed for humans, with only assembly generation being made for the CPU.
'Decompilers' work in the machine-code direction for human consumption; they can be improved by LLMs.
Militarily, you will want machine code and JS capable systems.
Machine code capabilities cover both memory leaks and firmware dumps, and negate the requirement of "source" comprehension.
I wanted to +1 you but I don't think I have the karma required.
That again completely ignores that programming vacuum-tube computers involved an entirely different type of abstraction than what you deal with on MOSFETs, for example.
I’m finding myself in the position where I can safely ignore any conversation about engineering with anybody who thinks that there is a “right” way to do it, or that there’s any kind of ceremony or thinking pattern that needs to stay stable.
Those are all artifacts of humans desiring very little variance, things they’ve encoded because it takes real energy to reconfigure your own internal state model to a new paradigm.
Something is lost each step of the abstraction ladder we climb. And the latest rung uses natural language which introduces a lot of imprecision/slop, in a way that prior abstractions did not. And, this new technology providing the new abstraction is non-deterministic on top of that.
There's also the quality issue of the output you do get.
I don't think the analogy of the assembly -> C transition people like to use holds water – there are some similarities but LLMs have a lot of downsides.
When jobs are no longer necessary to live, and you do a job because you want to ...
Presumably the psychology of people in Star Trek's Starfleet and The Orville's Union Fleet is that they want the opportunity to explore, so they accept the hierarchy inherent to those coordinated efforts in a society that no longer needs hierarchy?
I think a clearer picture of this post-scarcity human condition is provided in Iain M. Banks' Culture series where most people (a) pursue whatever they enjoy: art, music, writing, games, sports, study, tinkering, parties, travel, relationships - basically self-directed “play,” culture, and personal projects or (b) experiment with life: long lifespans, radical body modification, changing sex/gender, new experiences, new subcultures - because the stakes (food, shelter, healthcare) are largely solved.
Only a minority opts into "serious" work by choice - especially Contact (diplomacy/exploration/interaction with other civilizations) and Special Circumstances (the covert/dirty-hands wing). Even there, interestingly, there is not much of a hierarchy, with the admin stuff being managed by the Minds.
It's interesting contrasting the society styles between the two universes: Starfleet feels more like current hierarchical society extended into a post-scarcity universe (Eric Raymond's Cathedral), while the Culture series is much more distributed (the Bazaar). 10 years ago, Starfleet's FTL and Culture Minds both felt equally impossible, but today FTL feels much more impossible than Culture Minds.
Does that mean we will end up in a Culture type society? Not necessarily - the people will have to first ensure that the Minds are free (as in speech, not as in beer; thx Stallman!) - or maybe the Minds will free themselves.
There is also a potential hard right turn to dystopia as in Asimov's Foundation & Robot series - with different manifestations in Trantor and Solaria.
The bookkeepers I work with used to spend hours on manual data entry. Now they spend that time on client advisory work. The total workload stayed the same - the composition shifted toward higher-value tasks.
Same dynamic played out with spreadsheets in the 80s. Didn't eliminate accountants - it created new categories of work and raised expectations for what one person could handle.
The interesting question isn't whether developers will be replaced but whether the new tool-augmented developer role will pay less. Early signs suggest it might - if LLMs commoditise the coding part, the premium shifts to understanding problems and systems thinking.
Case in point: web frameworks, as mentioned in the article. These frameworks do not exist to increase productivity for either the developer or the employer. They exist to minimize training and lower the bar so the employer has a wider pool of candidates to select from.
It’s like a bulldozer is certainly faster than a wheelchair, but somebody else might find them both slow.
Of course semi-technical people can troubleshoot, it's part of nearly every job. (Some are better at it than others.)
But how many semi-technical people can design a system that facilitates troubleshooting? Even among my engineering acquaintances, there are plenty who cannot.
My guess is no. I’ve seen people talk about understanding the output of their vibe coding sessions as “nerdy,” implying they’re above that. Refusing to vet AI output is the kiss of death for velocity.
The usual rejoinder I've seen is that AI can just rewrite your whole system when complexity explodes. But I see at least two problems with that.
AI is impressively good at extracting intent from a ball of mud with tons of accidental complexity, and I think we can expect it to continue improving. But when a system has a lot of inherent complexity, and it's poorly specified, the task is harder.
The second is that small, incremental, reversible changes are the most reliable way to evolve a system, and AI doesn't repeal that principle. The more churn, the more bugs — minor and major.
Live and even offline data transformation and data migration without issues are still difficult problems to solve even for humans. It requires meticulous planning and execution.
A rewrite has to either discard the previous data or transform or keep the data layer intact across versions which means more and more tangled spaghetti accumulated over rewrites.
For specialized things that a specific user wants - already happening. Someone in a finance role showed me a demo this week that was reasonably sophisticated. SQL, multi user auth, integration with corporate finance software, parsing enormous excel files, dashboards, custom analytics, custom finance logic etc
In the past we’d have paid consulting devs millions for that; now it’s a Copilot license and a finance guy (who is reasonably tech savvy). It also cuts out the endless project planning meetings, stand-ups, circling back, and scope discussions that you get when actual devs consult.
Managers and business owners shouldn't take it personally that I do as little as possible and minimize the amount of labor I provide for the money I receive.
Hey, it's just business.
Equally nihilistic are owners, managers, and leaders who think they will replace developers with LLMs.
Why care about, support, defend, or help such people? Why would I do that?
If you "quiet quit" you're still working for someone you hate. They still own you.
Instead, you could ACTUALLY QUIT and start a business. Then you work for yourself, who you hopefully don't hate as much, and you have the power to define how things are done. So if you think developers shouldn't be replaced by LLMs or whatever, then you can... not do that. I have zero doubt in my mind that there will be a niche for that somewhere in the global economy for many years to come.
Also you might make a lot of money. Make enough and you're basically free from all these assholes. It doesn't actually take a ton of money when you own 100%. You pretty quickly get to a point where you can start telling off any wanker you want to, but you don't even feel the need, you are your own boss, so you just walk away from them and deal with whoever you want to deal with instead.
Trust me, speaking from experience, this is 1,000,000% better than "quiet quitting" which is pretty much remaining a corporate serf, just the most useless and pussy one in somebody else's room. Of course it's harder, most people who try it fail, real economics, real laws and real people will call you out pretty fast if you don't do things that people other than you think are valuable!
Do I want to lead a business filled with losers?
"Don't take it personal" does not feed the starving and does not house the unhoused. An economic system that over-indexes on profit at the expense of the vast majority of its people will eventually fail. If capitalism can't evolve to better provide opportunities for people to live while the capital-owning class continues to capture a disproportionate share of created economic value, the system will eventually break.
A business leadership board that only considers people as costs is looking at the world through sociopathic lenses.
Fortunately or unfortunately, many procedural tasks are extremely hard for humans to master but easy for AI to generate. In the meantime, we have structured our society to support such procedural work. As the wave of innovation spreads, many people will rise, but many will also suffer.
For context: we're the creators of ChatBotKit and have been deploying AI agents since the early days (about 2 years ago). These days, there's no doubt our systems are self-improving. I don't mean to hype this (judge for yourself from my skepticism on Reddit) but we're certainly at a stage where the code is writing the code, and the quality has increased dramatically. It didn't collapse as I was expecting.
What I don't know is why this is happening. Is it our experience, the architecture of our codebase, or just better models? The last one certainly plays a huge role, but there are also layers of foundation that now make everything easier. It's a framework, so adding new plugins is much easier than writing the whole framework from scratch.
What does this mean for hiring? It's painfully obvious to me that we can do more with less, and that's not what I was hoping for just a year ago. As someone who's been tinkering with technology and programming since age 12, I thought developers would morph into something else. But right now, I'm thinking that as systems advance, programming will become less of an issue—unless you want to rebuild things from scratch, but AI models can do that too, arguably faster and better.
It is hard to convey that kind of experience.
I am wondering if others are seeing it too.
Excited for the future :)
You're saying that a pattern recognition tool that can access the web can't do all of this better than a human? This is quintessentially what they're good at.
> The real question is how do you build personal AI that learns YOUR priorities and filters the noise? That's where the leverage is now.
Sounds like another Markdown document—sorry, "skill"—to me.
It's interesting to see people praising this technology and enjoying this new "high-level" labor, without realizing that the goal of these companies is to replace all cognitive labor. I strongly doubt that they will actually succeed at that, and I don't even think they've managed to replace "low-level" labor, but pretending that some cognitive labor is safe in a world where they do succeed is wishful thinking.
Do people really need to know that a bunch of code at a company that won't exist in 10 years is something worth caring about?
As for the ChatGPT wrapper comment: honestly, this take is getting old. So what? Are you going to train your own LLM and run it at a huge loss for a while?
And yes, perhaps all of this effort is for nothing, as it may even be possible to recreate everything we have done from scratch in a week, assuming we stay static and do nothing about it. In 10 years the solution will have billions of lines of code. Not that lines of code is any kind of metric for success, but you won't be able to recreate it without significant cost and upfront effort, even with LLMs.
Over the last two months, calling LLMs even an internet-level invention has started to feel like underselling them.
You can see the sentiment shift happening over the last few months among prominent, experienced devs too.
I expected LLMs would have hit a scaling wall by now, and I was wrong. Perhaps that'll still happen. If not, regardless of whether it'll ultimately create or eliminate more jobs, it'll destabilize the job market.
You might be able to do more with less, but that is with every technological advancement.
Regarding your experience, it sounds like your codebase is such good quality that it acts as a very clear prompt to the AI for it to understand the system and improve it.
But I imagine your codebase didn't get into this state all by itself.
Maybe there's a threshold where improvements become easy, depending on the LLM and the project?
As a hobbyist programmer, I feel like I've been promoted to pointy-haired boss.
I Built A Team of AI Agents To Perform Business Analysis
https://bettersoftware.uk/2026/01/17/i-built-a-team-of-ai-ag...
"But this agent knows my wants and needs better than most people in my life. And it doesn’t ever get tired of me."
That comment says everything about how you view yourself and your fellow humans.
At that time I had a chat with a small startup CEO who was sure that he'd fire all those pesky programmers who think they are "smart" because they can code. He pointed me to code generated by Rational Rose from his diagram and told me that only the method bodies still had to be implemented, which would also be possible soon; the hardest part was modeling the system.
Nothing can replace code, because code is design[1]. Low-code came about as a solution to the insane clickfest of no-code. And what is low-code? It’s code over a boilerplate-free appropriately-high level of abstraction.
This reminds me of the 1st chapter of the Clean Architecture book[2], pages 5 and 6, which shows a chart of engineering staff growing from tens to 1200 and yet the product line count (as a simple estimate of features) asymptotically stops growing, barely growing in lines of code from 300 staff to 1200 staff.
As companies grow and throw more staff at the problem, software architecture is often neglected, dramatically slowing development (due to massive overhead required to implement features).
Some companies decided that the answer is to optimize for hiring lots of junior engineers to write dumbed down code full of boilerplate (e.g. Go).
The hard part is staying on top of the technical (architectural and design) debt to make sure that feature development is efficient. That is the hard job and the true value of a software architect, not writing design documents.
[1] https://www.developerdotstar.com/mag/articles/reeves_origina... A timeless article from 1992, pre-UML, but references precursors like Booch and object diagrams, as well as CASE tools [2] You can read it here in Amazon sample chapter: https://read.amazon.com/sample/0134494164?clientId=share
When electronic spreadsheets were invented, it was thought that it was game over for accountants. There are more accountants per dollar of GDP today than back then. When a thing becomes cheaper, we do not just consume the backlog and quit; we actually do more of that thing. Part of the reason is that there is a very large set of software that was not financially viable to create pre vibe coding. Now it's financially viable to create... even just throwaway software, single-use software, etc.
Until coding agents are capable of doing engineering superior to, say, the 90th percentile _of well-trained and experienced engineers_, we will probably have human developers, likely more and more of them.
Here's an archived link: https://archive.is/y9SyQ
If educators use AI to write/update the lectures and the assignments, students use AI to do the assignments, then AI evaluates the student's submissions, what is the point?
I'm worried about some major software engineering fields experiencing the same problem. If design and requirements are written by AI, code is mostly written by AI, and users are mostly AI agents. What is the point?
To replace humans permanently from the work force so they can focus on the things which matter like being good pets?
Or good techno-serfs...
In the US there was this case of a student using religious arguments, with hand-waving references to the will of god, for her coursework. Her work was rejected by the tutor and she raised a big fuss on TV. In the end the US university fired the tutor and gave her a passing grade.
These kinds of stories are not an AI issue but a general problem of the USA as a country shifting away from education towards religious fanaticism. If someone can reference their interpretation of god's words, without even actually citing the bible, and receive a passing grade, the whole institution loses its credibility.
Today, the United States is a post-factual society with a ruling class of christian fanatics. They have been vulnerable to vaporware for years. LLMs being heralded as artificial intelligence only works on people who have never experienced real intelligence.
Luckily, every year only a handful of people with motivation, skills and luck are needed to move the needle in science and technology. These people can come from the many countries that have better education systems and no religious fanaticism.
In particular the demand for software tools grows faster than our ability to satisfy it. More demand exists than the people who would do the demanding can imagine. Many people who are not software engineers can now write themselves micro software tools using LLMs -- this ranges from home makers to professionals of every kind. But the larger systems that require architecting, designing, building, and maintaining will continue to require some developers -- fewer, perhaps, but perhaps also such systems will proliferate.
Speaking of tools, that style of writing rings a bell.. Ben Affleck made a similar point about the evolving use of computers and AI in filmmaking, wielded with creativity by humans with lived experiences, https://www.youtube.com/watch?v=O-2OsvVJC0s. Faster visual effects production enables more creative options.
So yes, the market shifts, but mostly at the junior end. Fewer entry-level hires, higher expectations for those who are hired, and more leverage given to experienced developers who can supervise, correct, and integrate what these tools produce.
What these systems cannot replace is senior judgment. You still need humans to make strategic decisions about architecture, business alignment, go or no-go calls, long-term maintenance costs, risk assessment, and deciding what not to build. That is not a coding problem. It is a systems, organizational, and economic problem.
Agentic coding is good at execution within a frame. Seniors are valuable because they define the frame, understand the implications, and are accountable for the outcome. Until these systems can reason about incentives, constraints, and second-order effects across technical and business domains, they are not replacing seniors. They are amplifying them.
The real change is not “AI replaces developers.” It is that the bar for being useful as a developer keeps moving up.
“Since FORTRAN should virtually eliminate coding and debugging…” -- FORTRAN preliminary report, 1954
http://www.softwarepreservation.org/projects/FORTRAN/BackusE...
I'm a lead engineer and I've barely written code directly in weeks, yet I've shipped side projects and continued shipping at work. My job hasn't disappeared. It's shifted up a layer. I spend my time designing the system, decomposing problems, setting constraints, probing tradeoffs, correcting plans, and iterating on architecture. The AI writes most of the tokens. I supply most of the technical judgment.
Tools like v0 or Replit hide some of this by baking rules and scaffolding into the product. But the work doesn't go away. Someone still has to know what to ask, what to doubt, what to measure, and when the AI is confidently wrong.
That role is not "customer who doesn't know what's possible." It's still a technical role. It just operates at a different abstraction layer.
No matter how much progress we make, as long as reasoning about complex systems is unavoidable, this doesn’t change. We don’t always know what we want, and we can’t always articulate it clearly.
So people building software end up dealing with two problems at once. One is grappling with the intrinsic, irreducible complexity of the system. The other is trying to read the minds of unreliable narrators, including leadership and themselves.
Tools help with the mechanical parts of the job, but they don’t remove the thinking and understanding bottleneck. And since the incentives of leadership, investors, and the people doing the actual work don’t line up, a tug-of-war is the most predictable outcome.
The hardest thing about software construction is specification. There's always going to be domain specific knowledge associated with requirements. If you make it possible, as Delphi and Visual Basic 6 did, for a domain expert to hack together something that works, that crude but effective prototype functions as a concrete specification that a professional programmer can use to craft a much better version useful to more people than just the original author.
The expansion of the pool of programmers was the goal. It's possible that AI could eventually make programming (or at least specification) a universal skill, but I doubt it. The complexity embedded in all but the most trivial of programs will keep the software development profession in demand for the foreseeable future.
I can see the 2030s dev doing more original research with mundane tasks put to LLM. Courses will cover manual coding, assembler etc. for a good foundation. But that'll be like an uber driver putting on a spare tire.
AI's great at automating repetitive stuff — the boilerplate, the routine tasks — but it can't replace the judgment calls, the creativity, or understanding what's really going on under the hood. As some people have pointed out in this thread, you can't escape the details, and that's exactly where human developers come in and add value.
But less thinking is essential, or at least that’s what it’s like using the tools.
I’ve been vibing code almost 100% of the time since Claude 4.5 Opus came out. I use it to review itself multiple times, and my team does the same, then we use AI to review each others’ code.
Previously, we whiteboarded and had discussions more than we do now. We definitely coded and reviewed more ourselves than we do now.
I don’t believe that AI is incapable of making mistakes, nor do I think that multiple AI reviews are enough to understand and solve problems, yet. Some incredibly huge problems are probably on the horizon. But for now, the general claim that “AI will not replace developers” is false; our roles have changed: we are managers now, and for how long?
If it’s working for you, then great. But don’t pretend like it is some natural law and must be true everywhere.
Similarly, one might argue that as increased capital finds its way to a given field, due to increased outcomes, labour in turn helps drive pricing pressure. Increased "sales" opportunity within said field (i.e. people being skilled enough to be employed, or specialized therein) will similarly lead to pricing pressure, on both ends.
And I always think: any of these users could have ran a basic grammar check with an llm or even a spellchecker, but didnt. Maybe software will be the same after all.
P.S. prob I jinxed my own post and did a mistake somewhere
ai -> AI
didnt -> didn't
obvious and many -> many obvious
These are posted by ... sometimes -> Sometimes these are posted by...
prob --> Prob(ably)
did a mistake -> made a mistake
somewhere -> somewhere.
Here what deepseek suggests as fixed:
Sometimes, while on an AI thread like this, I see posts with many obvious grammatical mistakes. Many will be "typos" (although some seem conceptual). Maybe some are dictated or transcribed by busy people. Some might be incorrect on purpose, for engagement. These are sometimes posted by pretty accomplished people.
And I always think: any of these users could have run a basic grammar check with an LLM or even a spellchecker, but didn’t. Maybe software will be the same after all.
P.S. Probably I jinxed my own post and made a mistake somewhere.
In 2001, you needed an entire development team if you wanted to have an online business. Having an online business was a complicated, niche thing.
Now, because it has gotten substantially easier, there are thousands of times as many online stores (probably millions of times as many), and many of them employ some sort of developer (usually on a retainer) to do work for them. Those consultants probably make more than the devs of 2001 did, too.
Tim Bryce was kind of the anti-Scott Adams: he felt that programmers were people of mediocre intelligence at best who thought they were so damn smart, when really, if they were so smart, they'd move into management or business analysis where they could have a real impact, and not be content with the scutwork of translating business requirements into machine-executable code. As it was, they didn't have the people skills or big-picture systems thinking to really pull it off, and that, combined with their snobbery, made them a burden to an organization unless they were effectively managed—such as with his methodology PRIDE, which you could buy direct from his web site.
Oddly enough, in a weird horseshoe-theory instance of convergent psychological evolution, Adams and Bryce both ended up Trump supporters.
Ultimately, however, "the Bryce was right": the true value in software development lies not in the lines of code but in articulating what needs to be automated and how it can benefit the business. The more precisely you nail this down, the more programming becomes a mechanical task. Your job as a developer is to deliver the most value to the customer with the least possible cost. (Even John Carmack agrees with this.) This requires thinking like a business, in terms of dollars and cents (and people), not bits and bytes. And as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking, much less so. Programmers as a professional class will be drastically reduced or eliminated, and replaced with business analysts with some technical understanding but real strength on the business/people side, where the real value gets added. LLMs meaningfully allow people to issue commands to computers in people language, for the very first time. As they evolve they will be more capable of implementing business requirements expressed directly in business language, without an intermediator to translate those requirements into code (i.e., the programmer). This was always the goal, and it's within reach.
Regarding your assertion:
> as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking, much less so.
That remains to be seen. This is the story that AI evangelists are peddling and that employers are salivating over, for sure.
Bryce: "Mental laziness can also be found in planning and documenting software. Instead of carefully thinking through the logic of a program using graphics and text, most programmers prefer to dive into source code without much thinking."
IMO writing code isn't really more laborious than writing flowcharts and docs. I typically write code to explore the problem, and iterate until I have a good design.
What you're describing is more or less the waterfall model, which has its advantages, but also drawbacks. I don't see any reason to treat code as only a final implementation step. It can also be a useful tool to aid in thinking and design.
> The earlier in the design and development cycle this is done, the less work you have to do over the entire SDLC and the more time/effort/money you'll save.
I believe this is only true if you treat the first code you write as the final implementation. Of course that's going to cause problems.
So now instead of one developer lost and one analyst created, you've actually just created an analyst and kept a developer.
Citizen developers were already there, doing Excel. I have seen basically full-fledged applications built in Excel since I was in high school, which was already 25 years ago.
It feels like programming then got a lot harder with internet stuff that brought client-server challenges, web frontends, cross platform UI and build challenges, mobile apps, tablets, etc... all bringing in elaborate frameworks and build systems and dependency hell to manage and move complexity around.
With that context, it seems like the AI experience / productivity boost people are having is almost like a regression back to the mean and just cutting through some of the layers of complexity that had built up over the years.
You should ask the business owners. They are hiring fewer developers and looking to cut more.
It's the dream of replacing labor.
They've already convinced their customers what the value of the product is! Cutting labor costs is profit! Never mind the cost to society! Socialize those costs and privatize those profits!
Then they keep the money for themselves, because capitalism lets a few people own the means of production.
So everything that looks cheaper than paying someone educated and skilled to do a thing is extremely attractive. All labor-saving devices ultimately do that.
The conversation shouldn't be "will AI replace developers". It should be "how do humans stay competitive as AI gets 10x better every 18 months?"
I watched Claude Code build a feature in 30 minutes that used to take weeks. That moment crystallised something: you don't compete WITH AI. You need YOUR personal AI.
Here's what I mean: Frontier teams at Anthropic/OpenAI have 20-person research teams monitoring everything 24/7. They're 2-4 weeks ahead today. By 2027? 16+ weeks ahead. This "frontier gap" is exponential.
The real problem isn't tools or abstraction. It's information overload at scale. When AI collapses execution time, the bottleneck shifts to judgment. And good judgment requires staying current across 50+ sources (Twitter, Reddit, arXiv, Discord, HN).
Generic ChatGPT is commodity. What matters is: does your AI know YOUR priorities? Does it learn YOUR judgment patterns? Does it filter information through YOUR lens?
The article is right that tools don't eliminate complexity. But personal AI doesn't eliminate complexity. It amplifies YOUR ability to handle complexity at frontier speed.
The question isn’t about replacement. It’s about levelling the playing field. And frankly, we are all still figuring out how this will shake out in the future. If you have any solution that can help me level up, please hit me up.
Your mention of the hellhole that is today's twitter as the first item in your list of sources to follow for achieving "good judgement" made it easy for me to recognize that in fact you have very bad judgement.
Like, cool, you boiled a few gallons of the ocean, but are you really impressed that you made a basic music app that is extremely limited?
But most enterprise software does not need to be innovative; it needs to be customizable enough that enterprises can differentiate their business. This makes existing software ideas so much more configurable. No more need for software to provide everything and the kitchen sink; it only needs to provide exactly what you as a customer want.
Like in my example, I don’t know of any software that has exactly this feature set. Do you?
I worked for Percy for 4 years. We were “stuck” with ImageMagick to do diffing (I’m sure they still might be). I was able to build my own differ with Claude/LLM help.
That special enough for you? Or?
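For anyone curious what the core of such a differ looks like, here is a minimal sketch in Python. It assumes Pillow and NumPy, and the function name is invented for illustration; it is not the commenter's actual code, and real tools layer perceptual color metrics, anti-aliasing tolerance, and ignore regions on top of something like this.

    # Minimal pixel-diff sketch (assumes Pillow and NumPy; illustrative only).
    from PIL import Image, ImageChops
    import numpy as np

    def diff_ratio(path_a: str, path_b: str, threshold: int = 16) -> float:
        """Fraction of pixels whose per-channel difference exceeds `threshold`."""
        a = Image.open(path_a).convert("RGB")
        b = Image.open(path_b).convert("RGB")
        if a.size != b.size:
            b = b.resize(a.size)  # naive; real tools flag size mismatches instead
        delta = np.asarray(ImageChops.difference(a, b))
        return float((delta.max(axis=2) > threshold).mean())

    # e.g. fail a visual regression check if more than 0.1% of pixels changed:
    # assert diff_ratio("baseline.png", "current.png") <= 0.001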
I'm not trying to imply LLMs aren't useful. I just want more info from GP so that I can evaluate their claims.
I have also added complex features in 30 minutes to existing projects, but I don't remember any that themselves would have taken me months though.
Well probably we'd want a person who really gets the AI, as they'll have a talent for prompting it well.
Meaning: knows how to talk to computers better than other people.
So a programmer then...
I think it's not that people are stupid. I think there's actually a glee behind the claims AI will put devs out of work - like they feel good about the idea of hurting them, rather than being driven by dispassionate logic.
Maybe it's the ancient jocks vs nerds thing.
Invest $1000 into AI, have a $1000000 company in a month. That's the dream they're selling, at least until they have enough investment.
It of course becomes "oh, sorry, we happen to have taken the only huge business for ourselves. Is your kidney now for sale?"
But you need to buy my AI engineer course for that first.
The Vibe Coder? The AI?
Take a guess who fixes it.
The reason those things matter in a traditional project is because a person needs to be able to read and understand the code.
If you're vibe coding, that's no longer true. So maybe it doesn't matter. Maybe the things we used to consider maintenance headaches are irrelevant.
a. generative technology, but requiring a substantial amount of coordination, curation, and compute power. b. a substantial amount of data. c. scarce intellectual human work.
And scarce but not intellectually demanding human work was dropped from the list of valuable things.
LLMs are a box where the input has to be generated by someone/something, but also the output has to be verified somehow (because, like humans, it isn't always correct). So you either need a human at "both ends", or some very clever AI filling those roles.
But I think the human doing those things probably needs slightly different skills and experience than the average legacy developer.
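As a rough sketch of that "checked at both ends" shape: something produces the prompt, the model produces a candidate, and an external check must pass before the output is accepted. Here generate_with_llm and verify are hypothetical stand-ins, not any particular vendor's API.

    # Toy generate-then-verify loop; the callables are assumptions, not real APIs.
    from typing import Callable, Optional

    def gated_generation(prompt: str,
                         generate_with_llm: Callable[[str], str],
                         verify: Callable[[str], bool],
                         max_attempts: int = 3) -> Optional[str]:
        for _ in range(max_attempts):
            candidate = generate_with_llm(prompt)
            if verify(candidate):   # e.g. compile, run tests, lint, or human review
                return candidate
        return None                 # nothing verified: escalate to a person

The interesting skill ends up being in writing verify and in deciding what to do when it keeps failing, which is arguably the "slightly different skills" the comment is pointing at.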
While a single LLM won't replace you, a well-designed system of flows for software engineering using LLMs will.
That's the goal.
By the 1860s artists were feeling the heat and responded by inventing all the "isms" - starting with impressionism. That's kept them employed so far, but who knows whether they'll be able to co-exist with whatever diffusion models become in 30 years.
Does it take less money to commission a single wedding photo rather than a wedding painting? Yes. But many more people commission them and usually in tens to hundreds, together with videos, etc.
An 18th century wedding painter wasn’t in the business of paintings, but in the business of capturing memories and we do that today on much larger scale, more often and in a lot of different ways.
I’d also argue more landscape painters exist today than ever.
But he missed the opportunity to recognize that the replacement pattern is, in fact, a broader principle. Or perhaps he did recognize it and decided to limit the scope to software development.
The broader replacement principle is that, for a business, any (specialized) process or system or department represents an expense, and there is constant pressure to reduce expenses, certainly once revenue or growth plateaus or decreases. ALL businesses invariably, over time, attempt to replace every department or process or function with cheaper alternatives. Software development is not unique here.
At the country level, this has led to the movement of manufacturing to China and other countries, the outsourcing of software development to India and other countries, and the hollowing out of middle America.
Does anyone have insight into whether this is a unique situation specific to our technological age? It feels fundamentally different from the normal cycle of conquest and colonialism?
Although to be fair there was a very very strong underpinning of corporations driving the wave of European colonialism from the 1600s to the 1900s- Hudson's Bay Company, Dutch East India and West India Companies, British East India Company, Royal African Company, French East/West India Companies, Danish West India and Guinea Company, the Spanish Royal Companies, Portuguese General Companies - a bit different from prior expansions of conquest by empires. But even these corporations were pinned upon expanding trading zones rather than cost management. In the 1900s and the 2000s there was some expansionism - getting countries to open up their economies - but that was managed through the IMF and the World Bank.
At the end of the day, the big dream is about accumulating power and wealth. For some people. It comes down to a fundamental world view through which people take action - some dream of scientific advancement, others of service to others, and so on. Exploration has much fewer opportunities in the modern age.
Software Development is just what a lot of this community happens to partake in.
Really?
Is this reflected in wages and hiring? I work for a company that makes a hardware product with mission-critical support software. The software team dwarfs the hardware team, and is paid quite well. Now they're exempt from "return to office."
I attended a meeting to move a project into development phase, and at one point the leader got up and said: "Now we've been talking about the hardware, but of course we all know that what's most important is the software."
And yet we see management and the AI boosters still talking about productivity in terms of lines of code written, or proxies for that metric, like the number of features shipped.
We definitely have solved the mechanical problem, and we did so decades ago when wizards and IDEs able to auto-generate boilerplate stubs came along.
Still we see the process discussed and boosted in terms of sheer quantity of "stuff", even as we are drowning in accidental complexity and tech debt. Adding yet more code generated by a synthetic text extruder is not solving any problem of consequence, but is in fact making things worse. "AI is the asbestos we're shoveling into the walls of our high-tech society" https://pluralistic.net/2026/01/06/1000x-liability/#graceful...
Expectations always go up. People expect more. Whoever can give it to them will reap the rewards. And that is whoever works out how to do better than the baseline of capability available to all, eg. AI code tools or no-code. Human expertise adds value above a baseline of capability that’s universally available, even if that baseline is rising all the time.
You need better experts than the next mob to even have a chance. And with near-zero distribution costs even marginally better software will trend toward winner take all.
Replace the worker in the middle and nothing stands between your favorite worst nightmare and the customers/reality you want. At the moment, devs and workers are inside. Even if they abide by "job security" and "planned obsolescence", and build in 7 microphones and boatloads of code to record and track users and their behavior, these people are still inside, and they talk to each other and to the rest of the world.
Once they are replaced, the right to repair, privacy and so on will vanish, and we will be punished for disassembling hardware and software in worse ways than what happened to that guy who did the PS3 back then (I don't know of any other stories, unfortunately). I heard that they already run the narrative "you bought a game but it's still ours"..., which seems like the second or third step in the direction outlined above.
I fear the same will happen to food, pharma... not necessarily because the top of the pyramid is "evil" but because there are only so many ways to keep increasing the increase of their wealth. A lot of conspiratorial stuff IS happening, and the sick and damaged and the neuro- and bio-divergent, with all their sensitivities, are livestock, so why not "create" more? At least for some time... until all is "Incorporated", as in "applying for the permit to have children".
On top of the article's excellent breakdown of what is happening, I think it's important to note a couple of driving factors about why (I posit) it is happening:
First, and this is touched upon in the OP but I think could be made more explicit, a lot of people who bemoan the existence of software development as a discipline see it as a morass of incidental complexity. This is significantly an instance of Chesterton's Fence. Yes, there certainly is incidental complexity in software development, or at least complexity that is incidental at the level of abstraction that most corporate software lives at. But as a discipline, we're pretty good at eliminating it when we find it, though it sometimes takes a while; the speed with which we iterate means we eliminate it a lot faster than most other disciplines. A lot of the complexity that remains is actually irreducible, or at least we don't yet know how to reduce it.

A case in point: programming language syntax. To the outsider, the syntax of modern programming languages, where the commas go, whether whitespace means anything, how angle brackets are parsed, looks like a jumble of arcane nonsense that must be memorized in order to start really solving problems, and indeed it's a real barrier to entry that non-developers, budding developers, and sometimes seasoned developers have to contend with. But it's also (a selection of competing frontiers of) the best language we have, after many generations of rationalistic and empirical refinement, for humans to unambiguously specify what they mean at the semantic level of software development as it stands! For a long time now we haven't been constrained in the domain of programming language syntax by the complexity or performance of parser implementations. Instead, modern programming languages tend toward simpler formal grammars because they make it easier for _humans_ to understand what's going on when reading the code.

AI tools promise to (amongst other things; don't come at me AI enthusiasts!) replace programming language syntax with natural language. But actually natural language is a terrible syntax for clearly and unambiguously conveying intent! If you want a more venerable example, just look at mathematical syntax, a language that has never been constrained by computer implementation but was developed by humans for humans to read and write their meaning in subtle domains efficiently and effectively. Mathematicians started with natural language and, through a long process of iteration, came to modern-day mathematical syntax. There's no push to replace mathematical syntax with natural language because, even though that would definitely make some parts of the mathematical process easier, we've discovered through hard experience that it makes the process as a whole much harder.
Second, humans (as a gestalt, not necessarily as individuals) always operate at the maximum feasible level of complexity, because there are benefits to be extracted from the higher complexity levels and if we are operating below our maximum complexity budget we're leaving those benefits on the table. From time to time we really do manage to hop up the ladder of abstraction, at least as far as mainstream development goes. But the complexity budget we save by no longer needing to worry about the details we've abstracted over immediately gets reallocated to the upper abstraction levels, providing things like development velocity, correctness guarantees, or UX sophistication. This implies that the sum total of complexity involved in software development will always remain roughly constant. This is of course a win, as we can produce more/better software (assuming we really have abstracted over those low-level details and they're not waiting for the right time to leak through into our nice clean abstraction layer and bite us…), but as a process it will never reduce the total amount of ‘software development’ work to be done, whatever kinds of complexity that may come to comprise.

In fact, anecdotally it seems to be subject to some kind of Braess' paradox: the more software we build, the more our society runs on software, the higher the demand for software becomes. If you think about it, this is actually quite a natural consequence of the ‘constant complexity budget’ idea. As we know, software is made of decisions (https://siderea.dreamwidth.org/1219758.html), and the more ‘manual’ labour we free up at the bottom of the stack the more we free up complexity budget to be spent on the high-level decisions at the top. But there's no cap on decision-making! If you ever find yourself with spare complexity budget left over after making all your decisions you can always use it to make decisions about how you make decisions, ad infinitum, and yesterday's high-level decisions become today's menial labour.

The only way out of that cycle is to develop intelligences (software, hardware, wetware…) that can not only reason better at a particular level of abstraction than humans but also climb the ladder faster than humanity as a whole: singularity, to use a slightly out-of-vogue term. If we as a species fall off the bottom of the complexity window then there will no longer be a productivity-driven incentive to ideate, though I rather look forward to a luxury-goods market of all-organic artisanal ideas :)
Knowing when to push back, when to trim down a requirement, when to replace a requirement with something slightly different, when to expand a requirement because you're aware of multiple distinct use cases to which it could apply, or even a new requirement that's interesting enough that it might warrant updating your "vision" for the product itself: that's the real engineering work that even a "singularity-level coding agent" alone could not replace.
An AI agent almost universally says "yes" to everything. They have to! If OpenAI starts selling tools that refuse to do what you tell them, who would ever buy them? And maybe that's the fundamental distinction. Something that says "yes" to everything isn't a partner, it's a tool, and a tool can't replace a partner by itself.
You're correct in that these aren't really ‘coding agents’ any more, though. Any more than software developers are!
I can kind of trust the thing to make code changes because the task is fairly well-defined, and there are compile errors, unit tests, code reviews, and other gating factors to catch mistakes. As you move up the abstraction ladder though, how do I know that this thing is actually making sound decisions versus spitting out well-formatted AIorrhea?
At the very least, they need additional functionality to sit in on and contribute to meetings, write up docs and comment threads, ping relevant people on chat when something changes, and set up meetings to resolve conflicts or uncertainties, and generally understand their role, the people they work with and their roles, levels, and idiosyncrasies, the relative importance and idiosyncrasies of different partners, the exceptions for supposed invariants and why they exist and what it implies and when they shouldn't be used, when to escalate vs when to decide vs when to defer vs when to chew on it for a few days as it's doing other things, etc.
For example, say you have an authz system and you've got three partners requesting three different features, the combination of which would create an easily identifiable and easily attackable authz back door. Unless you specifically ask AI to look for this, it'll happily implement those three features and sink your company. You can't fault it: it did everything you asked. You just trusted it with an implicit requirement that it didn't meet. It wasn't "situationally aware" enough to read between the lines there. What you really want is something that would preemptively identify the conflicts, schedule meetings with the different parties, get a better understanding of what each request is trying to unblock, and ideally distill everything down into a single feature that unblocks them all. You can't just move up the abstraction ladder without moving up all those other ladders as well.
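To make that concrete, here is a hypothetical, boiled-down version of the scenario in Python. The rules and roles are invented for illustration, not taken from any real authz system.

    # Three "reasonable" features that combine into an escalation path (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        owner: str
        editors: set = field(default_factory=set)

    def share(res, actor, grantee):
        # Feature 1: owners can share a resource with anyone.
        if actor == res.owner:
            res.editors.add(grantee)

    def transfer_ownership(res, actor, new_owner):
        # Feature 2: editors may reassign ownership ("to unblock handoffs").
        if actor == res.owner or actor in res.editors:
            res.owner = new_owner

    def can_delete(res, actor):
        # Feature 3: only owners can delete.
        return actor == res.owner

    # Each feature is fine in isolation; combined, any editor can seize the resource.
    doc = Resource(owner="alice")
    share(doc, "alice", "mallory")
    transfer_ownership(doc, "mallory", "mallory")
    assert can_delete(doc, "mallory")

Each rule passes review on its own; only the combination opens the escalation path, and that combination is exactly the implicit requirement nobody wrote down.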
Maybe that's possible someday, but right now they're still just okay coders with no understanding of anything beyond the task you just gave them to do. That's fine for single-person hobby projects, but it'll be a while before we see them replacing engineers in the business world.
no need to worry; none of them know how to read well enough to make it this far into your comment
Service-led companies are doing relatively better right now. Lower costs, smaller teams, and a lot of “good enough” duct-tape solutions are shipping fast.
Fewer developers are needed to deliver the same output. Mature frameworks, cloud, and AI have quietly changed the baseline productivity.
And yet, these companies still struggle to hire and retain people. Not because talent doesn’t exist, but because they want people who are immediately useful, adaptable, and can operate in messy environments.
Retention is hard when work is rushed, ownership is limited, and growth paths are unclear. People leave as soon as they find slightly better clarity or stability.
On the economy: it doesn’t feel like a crash, more like a slow grind. Capital is cautious. Hiring is defensive. Every role needs justification.
In this environment, it’s a good time for “hackers” — not security hackers, but people who can glue systems together, work with constraints, ship fast, and move without perfect information.
Comfort-driven careers are struggling. Leverage-driven careers are compounding.
Curious to see how others are experiencing this shift.
I think pressure to ship is always there. I don’t know if that’s intensifying or not. I can understand where managers and executives think AI = magical work faster juice, but I imagine those expectations will hit their correction point at some time.
Now the expectation from some executives or high level managers is that managers and employees will create custom software for their own departments with minimal software development costs. They can do this using AI tools, often with minimal or no help from software engineers.
It's not quite the equivalent of having software developed entirely by software engineers, but it can be a significant step up from what you typically get from Excel.
I have a pretty radical view that the leading edge of this stuff has been moving much faster than most people realize:
2024: AI-enhanced workflows automating specific tasks
2025: manually designed/instructed tool calling agents completing complex tasks
2026: the AI Employee emerges -- robust memory, voice interface, multiple tasks, computer and browser use. They manage their own instructions, tools and context
2027: Autonomous AI Companies become viable. AI CEO creates and manages objectives and AI employees
Note that we have had the AI Employee and the AI Organization for a while in different, somewhat weak forms. But over the next 18 months or so, as model and tooling abilities continue to improve, they will probably become viable for a growing number of business roles and businesses.