- AI will make developers irrelevant
- Why?
- Because LLMs can write code
- Do you know what I do for a living?
- Yes, write code?
- Yes, about 2-5% of the time. Less now.
- But you said you are a developer?
- I did
- So what do you do 95-98% of the time?
- I understand things and then apply my ability to formulate solutions
- But I can do that!
- So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.

Business owners who think they can do without developers because LLMs replace them are fine by me too. Natural selection will take care of them in due course.
OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but the reality didn't agree and they are on a tight schedule.
More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.
It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.
You folks speak like the only function of the agent is to spit code and features. Get a grip and treat your deliverables with care, otherwise you only have yourself to blame, not the AI.
No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
LLMs evaporated 90% of the "moments of despair" when you have an error and googling it isn't helping, or when googling it makes you realize you have to read 30 minutes of documentation.
Coding is a joy now. LLMs shaved off all the rough edges.
A year ago I would've told my boss “can't be done” about my work today. I'd tell him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.
Nowadays? I spent a couple of weeks reverse engineering their terrible ideas instead. Yeah, it worked. But it was a complete waste of my time, and of tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.
That will work, but will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.
I am terrified of allowing these things to complete tasks end-to-end with nothing intervening. Maybe that's why I don't run into many of these issues. I mostly delegate grunt work and manual tedium, not reasoning or design choices to the LLM. I may consult the LLM and ask for criticism, but there is no way I'm going to allow it to quietly make design decisions that I don't know about.
It's getting hard to keep up with trying to teach new devs what bad code looks like. And I swear sometimes they just copy my PR comments into their AI tool to fix the mistakes without any of the learning.
How have you set yours up that works well for you?
And then condensed an equal quantity of despair out of the ether via confident confabulations.
These LLMs can already incorporate our entire cultural corpus yet your "professional experience" is the threshold they won't cross?
Regarding such formal reasoning we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc years.
The present notions of harnesses, structured output, and looping the LLM in on some external state or sandbox (be it debugger output or embedding into a runtime) already show promising early results along these lines. I see no reason to believe these gains will not continue over the next five years.
If you have some theories to the contrary, I am all ears.
If you think the potential of LLMs is overblown feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.
Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.
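For instance, here's a hypothetical snippet of the kind of standing constraints people keep in a repo-level instructions file (the file name and the rules are purely illustrative; adapt them to your stack):

    # CONVENTIONS.md -- loaded into the agent's context before any change
    - Reuse the repository's existing error-handling helpers; don't invent new ones.
    - Public functions get type hints and docstrings.
    - Never add a dependency without asking first.
    - Match the existing test layout under tests/: one test file per module.

Without something like this, "use best practices" is exactly the kind of thing the model will guess at.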
If incorrect LLM output is a prompt issue then demand for experienced developers will remain, and demand may actually increase as time passes.
If LLMs get good enough, one might be tempted to ask so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.
Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.
Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.
I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed" but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different at some point calling it SWE is no longer useful.
I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise, speaking of farming: after software has spent decades disrupting blue-collar jobs, the mass layoffs to come will really be a chickens-coming-home-to-roost moment.
No need for specialized commercial software, if everyone can just explain to the computer what they want in English.
The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions of the technology itself. Tbh I found what you shared so boring I could barely finish it. Earlier in this thread I already made an exhortation to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.
> Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.
I don't think I've experienced this to a large degree. Maybe early in my career. Most of my time now is spent formulating a solution, and time spent coding is mostly spent trying to compose my changes with the existing code in a way that is performant, reliable and meets the specifications.
When working as a SWE, the longer I did it (~30 years), the more of my time was spent understanding the problem, the edge cases, how to handle the edge cases, and how to do all of it affordably, on time, and within budget.
That's engineering.
What you're describing is "writing code". That's lower value than "solving the problem".
I imagine a response, "But agile development, etc."
Yep. Part of solving the problem often involves creating prototypes to determine the essential viability of the solution. But that's only part of it. Which prototypes do you write? How much time do you allocate to them before accepting it's a dead end (at least for now) and punting on it?
That's engineering.
Me probably coming across as a dick today? Well, I was diagnosed autistic a year ago, and I'm on extended sabbatical/unemployment (3 years now) due to autistic burnout. And masking is part of how I got the burnout.**
* Why would someone be paying for that when there is likely someone else already doing it? (Unless you're the rare person who hopes to "disrupt" the competition.)
** has me asking why I write here at all. SMH. Why do I do what I do? No idea sometimes.
There's the saying "Any idiot can build a bridge; it takes an engineer to build a bridge that barely stands."
To put this another way, any idiotic LLM can write code. It takes a person with domain experience to understand what code to write, rewrite, or not write.
I've seen lots of organizations hollow out their internal competence in favor of outsourcing the skills. LLMs are the ultimate expression of that. There are people who say "you need to have people in your organization who understand how things work because they're the ones who solve problems!" and there are other people who say "focus on your core competencies! These problems you're worried about aren't your core competencies, so get rid of those experts, they're expensive and annoying; we can just sign a contract with an organization that'll know things for us."
At some point we'll all identify exactly how much "seed corn" is needed for the next season. We'll figure it out because we're starving, but at least we'll all know.
got an email address in my profile if you'd be interested in talking at some point about something, or even talking about nothing in particular. (i don't normally do this sort of HN networking stuff, i find it super cringe. but there we go).
Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.
It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.
The progress models have made in the last 5 years aren't convincing me they'll bridge that gap too soon, although I can see how some people are convinced by how decent agentic harnesses make things. I know it's really easy to get very hyped with the current state of the technology, but try to have a bit of skepticism.
Or are you saying that I'm lying? That I am secretly hammering away at my keyboard while pretending not to?
No, writing code hasn't been how I spend most of my time for many decades now.
Write proper API documentation laying out the assumptions and intent, generate good API docs from it, and write a design and architecture document (which people find they need for LLMs to work at all anyway). You’ll find that you spend a lot less time reading code.
Everything we have to do for AI to function well, would help humans to function better too.
If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.
This only works in high-trust teams and organizations. A lot of AI productivity gains come from SWEs putting in extra effort because the results will be attributed to them. Being a force-multiplier for others isn't always recognized; instead, your performance will likely be judged solely on the metrics directly attributed to you. I learned this lesson the hard way by being idealistic and overestimating the level of trust that had been built after joining a new team. Companies pay lip service to software quality; no one gives a shit if your code has the lowest SEV rates.
Writing code just isn't what takes time.
This is what big corporations look like, not some SV startups.
Those two formulations represent different developers' approaches to the same task. The former being developers who are much better at planning than the latter.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
One group is a very tiny slice of specialty/rare industries where code is critical but a small part of overall project costs. If code/software is 5% of the overall cost, even heavy use of AI for the coding part doesn't move the needle. So people in this group can feel confident in their indispensability.

The second group is much larger, peddling CRUD / JS frontends and other copy/paste junk. But per industry classification they are part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off, and whole lots of people with IT degrees and certifications not finding any jobs in this field.
So far it appears that LLMs still require constant hand-holding, even for a small educational CRUD app.
I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming, they all seem equally threatened it's just a matter of time. Front end is easy to see in the AI web front ends but everything else is still easy pickings.
That is not hard. It’s just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don’t see kernel devs REing devices is not that it’s impossible or that it requires expert knowledge. It’s because it’s like counting grains of sand on the beach.
I think they will be far too few to have any positive impact on IT engineers' overall job prospects.
The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. There are far more frontend developers than, say, C++ systems engineers or database designers, so in sheer numbers they will be affected more.
That may be true, I’m not gonna say one way or the other. But if AI comes for that, then almost all knowledge work is effectively dead, so all that’s left would be sales or physical labor.
Developer community: Wow, we truly have become obsolete now!
My bet is something _like_ assembly, but not assembly.
That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.
I expect that to continue.
(And in all of those transitions millions were left behind without work or with much worse prospects. The people that took the new jobs were often a different group, not people who knew the old jobs and were already in their 30s and 40s.)
And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?
Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.
The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).
If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
> - Understanding the problem
> - Waiting for the build system and tests to run
> - Manually testing the app to make sure it behaves as I'd like
> - Reviewing the diff to make sure it's clear
> - Uploading the PR and writing a description
> - Responding to reviewer feedback
Which of those do you think it doesn't help with?
Consider hash tables. Nobody implements a hash table by hand any more. I've written some, but not in this century. Optimal hash table design is a specialist subject. Do you know about Robin Hood algorithms? Changing the random number generator's seed to discourage collision attacks? A basic hash table starts to slow down around 70% full. Modern hash tables can get above 90% full before they have to expand.
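For anyone who hasn't seen it, the core Robin Hood idea fits in a minimal Python sketch (my own illustration, not any standard implementation; a real table would also need resizing and deletion handling, and RobinHoodMap is just a name for the example):

    class RobinHoodMap:
        """Open addressing where an inserting entry that has probed farther
        from its home slot displaces one that sits closer to home. Evening
        out probe lengths like this is what lets such tables stay usable at
        much higher load factors than naive linear probing."""

        def __init__(self, capacity=16, seed=0):
            self.capacity = capacity
            self.seed = seed  # varying this per process discourages collision attacks
            self.slots = [None] * capacity  # each slot: (key, value, probe_distance)

        def _home(self, key):
            return (hash(key) ^ self.seed) % self.capacity

        def put(self, key, value):
            entry, idx, dist = (key, value), self._home(key), 0
            while True:
                slot = self.slots[idx]
                if slot is None:
                    self.slots[idx] = (entry[0], entry[1], dist)
                    return
                if slot[0] == entry[0]:  # key already present: overwrite in place
                    self.slots[idx] = (entry[0], entry[1], slot[2])
                    return
                if slot[2] < dist:  # incumbent is "richer" (closer to home): rob it
                    self.slots[idx] = (entry[0], entry[1], dist)
                    entry, dist = (slot[0], slot[1]), slot[2]
                idx = (idx + 1) % self.capacity
                dist += 1

        def get(self, key):
            idx, dist = self._home(key), 0
            while True:
                slot = self.slots[idx]
                # an entry "poorer" than our current distance would have robbed
                # this slot during insert, so the key cannot be further along
                if slot is None or slot[2] < dist:
                    raise KeyError(key)
                if slot[0] == key:
                    return slot[1]
                idx = (idx + 1) % self.capacity
                dist += 1

Nothing magical, but you can see why getting the details right (resizing thresholds, deletion, the SIMD probing in modern Swiss-table variants) is a specialist subject.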
Who keeps Knuth's Fundamental Algorithms handy any more? I own both the original edition and the revised edition. They're boxed up in the garage. I once read that book cover to cover. That was a long time ago.
That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.
I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.
Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?
Current AI tech giants prove over and over and over again that this is not the case
I am very optimistic. I just wish I was younger, to take advantage at junior high or high school age with my current resources. Damn… the oldest lament in the books.
There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matters, ofc).
And how much revenue do they generate?
Long term it's bleak, but short/medium term, not so much: if I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc., which was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.
5+ years in the software world is like 30 years in others... So... given lacking use-cases and the humongous amounts of capital already wasted on chatbots... it's more like "we" are closer to closing curtains than to "just started"...
This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
> The AI is coming for that too.
If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.
Now you’re getting it
So if 10% of lawyers get AI'd away, let’s say, the remaining 90% are 1.1x+ efficient and also up against other lawyers enjoying the same… work might go up. And on the customer side there is sooooo much BS with lawyers, but if both lawyer and customer can communicate faster or better with the LLMs, we should see more and better cases with better dialog and case handling. Again, the total amount of lawyering could go up a lot. And then we have the cases that were prohibitive without the LLMs, now possible for big money. Better, LLM-empowered lawyers should be able to create new and more lawyer work.
As it stands I see people selling services that are subsidized by VC, template jobs we’d be doing faster with copy paste but it’s not copyright infringement when OpenAI does it, and a rush for valuations to soak up VC because the business model isn’t there. I’m seeing a huge uptick in visual bugs on large commercial platforms and customer facing apps, and don’t feel OpenAI is gonna kill Office anytime soon… or Chromium… or Steam… or emacs…
Call me an optimist, but I think those LLM pump and dumpers are creating a wave of fear that would be quite different if they weren’t lying and trying to boost an IPO. Chat GPT 2 was too dangerous to release, lul, and the class action suits are just getting started.
An actual lawyer replacing tech company should sell lawyering for infini-money, not pens that’ll totally 10x your lawyering (bro).
So.... they just starve in the streets?
Even if some other, arguably better job comes along, would they retrain for it? (You can say yes, but take a look at the long history of people choosing to join a cult and vote for an orange moron instead of learning a new skill).
Either you're convinced you won't be too badly affected and will gladly watch huge swaths of people suffer, or you're deluded enough to think that it will really, truly be different this time. In any case, I hope you get the worst results of what you preach.
I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.
That's what we fundamentally disagree about.
Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.

Until the day LLMs are not statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently not by adding another 10T to the LLM parameter count.
So are humans.
Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
And I argue that current LLMs have surpassed many of my capabilities already.
For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.
AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.
We are much more limited, but we fundamentally work differently. Hence adding more parameters, like certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.

I think it's doable; I just don't believe it's LLMs, and I don't think anyone knows yet what it is.
But we are? That's our education system.
The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.
That is not what the education system does. That's an obvious distortion of reality. LLMs are trained over billions of documents to statistically predict the next word and thereby gain an understanding of language. They do this statistical processing in order to mimic humans' natural language learning ability. And there has been continued evidence of the limitations of this approach at accurately mimicking the totality of human cognition.
And the human mind is not?
To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.
He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.
Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.
I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are part of them.
And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
You make it sound like it is a bad thing that certain tasks become easier.
I spent a lot of time writing CRUD stuff, because the things I really want to work on depend on it. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?
It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because it puts people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.
One person needs to do that. What about the other 100, who aren't doing that currently to begin with, but are doing the AI-automatable work?
We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.
We have robots walking just fine now, by the way.
Imagine 45% of higher than average paying jobs gone.
If that happens we’ll either figure out a new economic system, or society will collapse.
Also, saying robots are walking just fine is misleading for any definition of "just fine" that means anywhere near as good as a human.
"Automating half the jobs" is the same as "double productivity per worker".
When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!
If we double productivity per worker, we have twice as much wealth on average.
I know there are angry people convinced that this will all be consumed by billionaires and Jews, but historically that is not at all the track record of the last 250 years, and I expect that to continue.
As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors, other than offering sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.
It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.
Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.
More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.
Heck, even long before LLMs, about 10% to 30% of my code was already automatically generated. By tooling, by IDLs, and by my editor just being able to infer what my most likely input would be.
> We have robots walking just fine now, by the way.
I don't think you got the point I was trying to make.
Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.
Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.
> We have robots walking just fine now, by the way.
Walking and reasoning are unrelated abilities.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
The AI is coming for those too.
Anecdotal evidence to support this.
I work with both dev and design teams. Upper management has already gone through several layoffs and offshoring of the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management? "Start to figure out how to leverage AI; we're going to be in a hiring freeze for the next 10 months."
The double whammy of losing onshore team members and getting no help from management to fix the problem they just created, beyond essentially being told to figure out how to use AI to keep up, is pretty staggering.
I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."
I am sure the AI bros are the same people who were convinced consumer-grade fully automated driving was going to happen "by the end of the year" for the last 7 years.
Is it a handy tool? Yep! I use it every day. But it is laughable to think this is the path to AGI. The most common counterargument on HN is some variation of "but you can't prove that this isn't just like how a human thinks". A conspiracy theory at best, just reinforcing the fact that we know very little about how even simple non-human brains function.
> The AI is coming for that too.
In that case all [1] non-manual work is doomed, until robotics has an LLM moment.
[1] With the exception of all fields protected by politics or nepotism.
All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.
I interviewed a ton of people in my career, and when I asked "how much time did you spend writing code at your last job?", the more junior the person, the more they would overestimate the time spent writing code (some would say 90%!). Once they joined, I was able to see how much time they really wrote code, and it was almost never more than 30%.

Mostly because the code is only the final output. You spend most of your time doing research, talking to people, working on quarterly OKRs, going to meetings, etc.

If you just write code, you are either an extremely junior person who works on things trivial enough not to require research, or you are deluded and don't realize you spend most of your time doing other things.
So while AI will change the industry I don't see any reputable company firing the smartest ones in the room for junior level intelligence.
Even with it advancing, someone has to be responsible for when it screws up, which we know it will.
Seeing him type really reinforced this idea.
If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding so they could think then code).
To your point and to the GP's point, and one point I keep raising with LLMs: "typing is not where my time sinks are"
- Understanding code without writing it is as viable as understanding code that you've worked with directly or indirectly
- Businesses care that you understand code
I really doubt the first one. Traditionally, understanding a code base in large part came from working with it intimately and building that muscle memory. The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.
Whether businesses care that their engineers (whom they are increasingly viewing as monkeys at LLM typewriters) understand the code remains to be seen. I don't think they particularly care whether their code runs slow and is buggy so long as it works just enough to churn out features and continue to pull income.
As one of those developers who has written almost no significant code by hand since November 2025, but has produced a great deal of working software, I still understand the majority of the code I've produced just as well as if I'd typed it myself.
I may not be typing it myself, but I'm manipulating it constantly. It's not as simple as "reading" it - I'm reading it, executing it, figuring out refactorings for it, having tests built for it, having documentation built for it, sometimes writing that documentation myself, spinning up example scripts that use it, then building new code that depends on that previous code.
It's that act of exercising the code that gives me confidence that I understand it.
On the surface it sounds weird - why would this be?
Possibly because building a system is not a one-shot step, but a process of many iterations, each of which involves experiments in production, and gaining more learnings. So at the end of the process, you don't just have N lines of working code, but also N lessons learned along the way. So presumably with the AI process we miss out on half the value.
Now the going thesis is that this extra value is unnecessary if we take the plunge and don't look back. My gut says the answer is somewhere halfway, I guess we'll see.
There's another, different loop I keep seeing, which is:
- Company A lays off engineers citing AI efficiencies
- People say it's because of over-hiring during 2020
- Company B lays off engineers citing AI efficiencies
- People say it's because it was never a good business
- Company C lays off engineers citing AI efficiencies
- People say it's because there's a recession
I guess to cite a counter example, unemployment is still super low, software jobs are still holding up, but the bear case is that eventually 5% of people will be able to do what people do today, and the demand for software won't grow at the same pace.

May I ask if you could estimate how you spend the other 95% of the time?
- Meetings
- Reading papers
- Understanding legacy code
- Reading internal news
- Ad hoc chats with coworkers
- Writing docs
- Editing configs
- Thinking about solutions
- Slacking off
- Analyzing results
- Testing code
- Reviewing PRs
- Understanding others' ongoing projects

I just don't think you've utilized the most recent versions of Codex or Claude.
I never got that argument. Compilers are formally proven, deterministic algorithms. If you understand what a compiler does, you can have a pretty good idea of what it will produce. If it doesn't do that, it's a bug. The definition of correctness is well defined by semantic equivalence.

LLMs are none of that. They're fuzzy systems that approximate your intent and do their best. I can make my intent more and more specific to get closer to what I want, but given that it's all just regular spoken language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison here.
I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!
But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.
And that would be where we disagree. I don’t read code to look at code. When I’m reading code, I’m looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on code.
It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.
Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.
We all do the same stuff; the disagreement would just be what you feel coding is, and whether you think technical work is the same thing or a superset. If you as a software dev aren't hands-on with planning or working more than 5% of your time, you are basically a PO with a programming hobby.
I believe 99% of requests are not about what’s technically feasible. And the rare time I encountered one of those, my answer has mostly been “you don’t have enough resources to try solving that problem”.
If you know your fundamentals well, very often you will find the same common blocks everywhere. People much smarter than me have solved a lot of fundamental issues, and it's rare that I see a business request that doesn't reuse the same familiar stuff.
That’s why coding is mostly boring. You follow the same pattern again and again. But what dictates the flows are the business parameters. And that’s why most senior spend so much time gathering good requirements. Because the code is straightforward after that.
Now those juniors whose job is to implement those solutions, they will have a hard time.
In my 50s, I also don't write as much code as I used to, even less nowadays with serverless, managed services, low-code/no-code tools, and agent orchestration workflows, and with it I keep seeing development teams getting smaller.
I think, much sooner than that, you'll have AI pumping out practically complete implementations that meet the requirements of function, set by the people who desire that function. THOSE people will be the developers, and will be more akin to technical "creatives", more on the product side, than the developer side.
AI progress really runs on long winters and rare breakthroughs. Deep neural networks were the most recent breakthrough.

The iterations you currently see are just adding more storage, but the fundamental neural network structure doesn't change.

I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough comes is anyone's guess. But if you take history into account, it will take a while.
- Meetings
- Code reviews
- Manual testing
- Deployments and more testing
- Triaging issues
- WTF how did this bug happen?
- JIRA in general
- Whiteboarding sessions / Design docs
- Interviews
- 1:1s (mandatory ones)
- 1:1s (networking / problem solving / political alignment)
- Whatever your company's version of corporate extracurriculars is
I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.
If you are a licensed engineer of some kind, you’d state that outright.
The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.
(I know this is different elsewhere)
Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.
Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.
I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.
On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.
- Compilers will make developers irrelevant
...
- Compilers can write assembly language code
- Compilers have -O3 now
etc...

Maybe we should rejoice. I remember dreading writing documentation, and now I would happily hand that off to AI.
So those ex-developers are free to do the most interesting things in the world, with the small change of no longer relying on nice, steady paychecks every month.
Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make, or services they offer, either directly or indirectly. So eventually, you too, will suffer the consequences of the enshloppification.
> I understand things and then apply my ability to formulate solutions
AI is coming for that too. Don't be naive
Being able to produce code is a huge unlock for many non-programmers. So in a way, it doesn't matter how much time existing developers spend on coding. It's about helping anyone become a developer.
By removing all the junior engineers, you've fundamentally changed the market forces longer term and most people expect that to negatively impact you in the supply demand curve regardless of whether or not the statements you've made above are true, which they most likely are for senior engineers.
Saying otherwise is sort of like reducing the task of writing a novel to typing.
As someone 35 years into my career, I agree this is the most exciting part of it. I love programming and I do it all the time, but I do it by reading code, course-correcting, explaining how to think about the problems, and herding cats - just like working with a team of 100 engineers. But the engineers I'm working with now by and large listen, don't snipe me on perf reviews, and aren't hallucinating intent based on hallway conversations with someone else. This team of AI engineers I have can explain to me their work, mistakes, drift, etc. without ego, and if it's not always 100% correct, it's at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact it understands the domain better than I do, so instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X and explore whether it's a good idea or not and make a better-informed decision. I've learned more in these discussions than I've learned in decades of convincing overly egoistic juniors and managers to listen to me about something I'm an industry authority on.
However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of us will still be there in a decade, but maybe fewer than 1 in 10. This is going to be a more brutal time than the Dotcom bust for CS grads, and I don't think it will ever improve, mostly because we simply won't need the "my parents told me this makes money" people; just the passionate folks will remain. But even then, we face a situation where the value of any software developed is very low because so much software is being developed. It's going to turn into YouTube, where software that is paid for is a very small fraction of the quantity of software developed. We already see this in the last few months with the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low unless they're very rare talents.
The first is that AI is achieving human-level expertise and capability, but since models are now increasingly trained on their own output, they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything", and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts: the tooling has changed, but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")
So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.
Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices
LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.
We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to take promises/hope to cut labor costs whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can make it a lot more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI and everything to do with the perception being enough of an excuse, and companies were always looking for that excuse. (Just as ageism has always existed, AI is also just a fresh excuse for companies to carry on aging out experience from their staff, especially people with memories long enough, and well-schooled enough, to remember previous AI booms and busts.)
But also, yeah if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong about lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN, you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot, I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.
It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
So they surely know a lot, but you are never sure if the info is correct or not.
I'm sure we'll reach AGI at some point, but looking at AI history, I don't see that coming any time soon.
I spent the 2nd half of my 30-year career fixing organizations and processes where this was the case. So many things are wrong in places where it is (or alternatively, you need a different job title :) )
Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.
Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)
> The developers who still think their job is about writing code will perhaps not have a job in the future.
So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.
(How will Sr Devs in the future be created? That's the question, isn't it.)
As an extreme example, maybe we’ll see long-running internships and trainings like doctors experience. Doctors don’t start their career until ~12+ years of prep and training.
Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for training, low-knowledge workers could be continuously vibe coding replacements for most commercial software products they'd be hired to build. So I doubt we'll treat software development as a rarified high-skill job.
Dude - look what happened in the last 2 years on software.
Now project out another 10.
I totally agree with you 'as of now, in the current paradigm'.
But that could very well change.
Really? I mean, good on you if it's true and you like the attention, but that sounds like an implausible amount of interest in someone and their relatively mundane profession.
- Well, and AI can do part of that too, maybe more of it soon.
- ...
- Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
- ...

Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.
Then you won’t have this just world of the deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).
The general progression of a Hollywood writing career is from PA (production assistant), which often starts off as a volunteer "intern" position, to writer's assistant. Assistant here usually means doing any menial task anyone wants, from fetching drycleaning to taking a dog to a grooming appointment. When you're a writer's assistant, you will often spend time in a writer's room. You will see how the process works. You probably won't contribute anything, but you may get feedback on things you've written from whomever you're working for.
The next step is as a staff writer. You will be paid to produce scripts and stories for a TV show, for example. That writer's room will have a head writer. On a TV show the head writer is almost always the showrunner. The showrunner is effectively the leader of the entire project and is responsible for breaking up a season into storylines and making sure those scripts make sense as a collective. They might write one or more of those scripts themselves, or maybe not. The showrunner will hire directors for each episode.
The path from staff writer to showrunner often goes through being a producer. Producers are responsible for a lot of the logistics of filming a show. Hiring extras, finding locations, coordinating stunts and costumes and making sure the director has everything they need.
As part of all this, in the 22 episode TV era, writers would often end up spending time on set while the show is being filmed. They'd learn from the process.
Every part of this was necessary. Those writers on set are your future producers and showrunners.
So what's happened in the streaming era is that writer's rooms got smaller (so-called "mini writer's rooms"), maybe only the showrunner is ever on set, the writers have stopped working by the time filming even begins, and you might only be doing 8-12 episodes. On a 22-episode season, that one job could support you. 8-12 episodes can't.
But you see how this all breaks down when writers can no longer support themselves, they're no longer being trained to be future producers and showrunners, there's no feedback from set back to the writer's room and you end up with 3 year gaps between seasons. The only reason for all of this is because it's cheaper.
So, you may be a staff engineer who tech leads dozens of other engineers. You're not formally a manager or director but you have a lot of influence about the entire project. But how did you get there? You started as a junior engineer being told what to do. You got to see how other leaders operated. You became responsible for more and more things. You might start fixing bugs under supervision to managing a feature then an entire project and so on.
So what's going to happen here is (IMHO) we will have years of the software engineer space shrinking. There'll be very little entry-level hiring. Layoffs will reduce the entire workforce and there'll be a few tech leaders who hang on because they still produce value. Some of them will probably discover they don't produce enough value and they'll go too.
But where do the future tech leaders come from in this scenario? AI is being used as an excuse to kill the entry-level pipeline and if you go around and say "git gud" [sic] then I'm sorry but you just don't understand the impact of what's happening or you don't care because, at least for now, you're simply not affected.
You see the same thing with people who espouse the myth of meritocracy. Well, if a given workforce shrinks by 50%, half those people are, by definition, not going to survive. An individual may be able to reskill or skill up to survive, but not everyone can. And that's how people end up in Amazon warehouses. At least until they're no longer needed there either.
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles who dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the economic disaster being brewed. I suspect things will get very ugly pretty soon.
Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess. The old chess player knows chess much better than a 19-year-old phenom, but they cannot calculate for as many hours at the same speed as before, so their experience eventually loses to the raw calculation. Maybe at 35, or at 45, but you are just not as good. Claude Code and Codex save you the computation, while every single instinct and 2-second "intuition", which is what you build with experience, is still online.
It's not just that it's a fairer competition: it's now unfair in the opposite direction. The senior who before could lead a team of 6 is now leading a team of agents, and reviewing their code just as before. Hell, it's easier to get the agent to change direction than most juniors around me, who can't be corrected with just plain, low-judgement feedback.
In farming, those who were replaced by tractors did not keep their jobs. What is different now?
Taiwan creates most of the world's semiconductors. China makes the majority of everything else. Silicon Valley creates the majority of the tech market's value.
But there's a cap where the world has enough stuff at least in the short term, and growth slows.
Humans only need a certain amount to survive. With populations leveling out, industry will shift from servicing human needs, to the needs of corporations and other industries. Consumers will become a minority in the future economy.
What will corporations value in the future, that they're willing to spend on recurring human capital expenses? I think the answer will always be: the tasks that will help companies grow.
No, I've watched/listened to a lot of entrepreneurial stuff since 2016 and I still haven't launched my own product. There's a YT channel, "Starter Story"; it's all "this person makes $100K/mo, here is the template".
It really is simple though, put a paypal button on a squarespace page and ask someone to pay it.
That's my point. You couldn't tell an unemployed farm worker to go start their own farm. They probably don't have the land or substantial capital it takes. But an unemployed software engineer just doesn't need anything like that to go into a business built on AI.
Since your farming land is limited, after the job is done, there is no more work.
For software projects, there is always more work to do. It's an arms race between competitors. Imagine you fire developers to maintain your speed, and your competitor keeps their people to go faster. Good luck to you!
They need to go into business for themselves, and become capital owners, who benefit from AI, not workers who are replaced by it. AI won't be able to compete at entrepreneurship unless robots are given autonomy and property rights like humans, which is quite unlikely to happen any time soon.
You live in a world of ever-changing metaphors. Get used to it.
So AI saves me immense amounts of time figuring out how to write proper syntax, remembering the ins and outs of unit testing frameworks, etc. If I stick around for a year or three I'm sure I'll get much much faster and learn these tools better.
Based on my experience, I think this will prove more true than not in the long run, unfortunately.
Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.
My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic, there’s a notable decline in the critical thinking skills of children coming through her classes.
We have a whole generation (or two) of kids that have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear to be ready to constructively engage with a system that is designed to make them believe they are getting what they want with dubious quality.
I notice many of the adults in my life are doing this now as well.
It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM-written applications and cover letters that all look and feel the same.
The pro-athlete comparison in this article is a bit silly IMO: there are obvious physical issues that come with aging if you rely on your muscles etc. to make money. If you compare to other fields of knowledge work, such as law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.
Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.
But bosses are happy to say it's AI because that makes you sound in control.
Saying AI for anything, good news or bad news, is a get out of jail free card for execs who want to appease shareholders.
There is also much more productivity. But I'm not sure it's really a driving force yet: with the new productivity, people are still just trying to do more with it, which doesn't translate to efficiency. Yet. It might once AI loses its wow factor and is just the status quo. I feel like this is fast approaching but still may be a few years away.
My guess is companies overhired during COVID, and between that experience and an uncertain market, they don't want to make the same mistake twice.
https://en.wikipedia.org/wiki/Learn_to_Code#Policy_impact
I think the hype peaked around 2016 where Democrats were portrayed as out of touch for saying laid off coal miners could just "learn to code". By 2019 it was a cliché used to mock laid off journalists on Twitter.
2015 had ~50k CS graduates.
2021 had ~100k CS graduates.
You can extrapolate the rest.
We might be able to make a flow-comparison for "entering the field" versus "exiting the field forever", but layoffs don't really measure the latter.
I certainly don't have the money or time to go back to college and start a new career at the bottom.
Which is true, but it’s true as long as it’s not true.
The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
At a mechanistic level, the "we have always found other jobs" argument misses that the reason we've always found other jobs is that humans have always had an intelligence advantage over automation. Even something as mechanical as human inputs on an assembly line was ultimately dependent on the human ability to make tiny, often imperceptible adjustments that a robot couldn't.
But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.
It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.
In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.
The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister-in-law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.
If it makes you feel better, I'm pretty sure it isn't sustainable. (But I'm not an economist so take that with a block of salt.)
I don't think anyone has the answers. It's just some of us are honest enough to concede we have no answers, while others promote an answer that aligns best with their belief system.
"It'll all work out."
"It's the immigrants/blacks/jews/whatever dragging us down."
"Nothing's going to happen and we can all continue doing the work we always have."
"Burn the rich."
Etc etc.
Not a lot of serious attempts out there at even getting a handle on the issues, let alone fixing them.
If AI does take a lot of white-collar work, is it much comfort that maybe jobs in a very different sector will be better in 20 years?
It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
Why? There will never be a shortage of sick/dying people. So medical staff, and also undertakers, aren't going anywhere.
But yes, the argument has been wrong often enough that the people still repeating it as a rule should be mocked and ashamed.
Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”
The Malthusian observation can still come true... It only has to be true once, and the only reason people say it isn't right now is industrial fertilizers and short memories.
There aren't any such careers, and if there were, you would have to pay for the retraining yourself. Corporations certainly won't, except in the extremely rare situations where they have to in order to compete.
Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.
Exactly. I have yet to read a single logically sound argument that even gives a hint of what those professions/jobs might be (remember, they have to be plentiful enough to employ large numbers of people, so "I quit my corporate job and I'm making more as a TikTok influencer" doesn't count). Remember that a new profession has to open up new, hitherto unknown revenue streams, otherwise there are no companies who will pay you.
Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.
Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.
Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.
It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well-paid cleaners or something? It makes no sense.
A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years to get the machinery, supply chain and infrastructure in place to support millions of people entering construction.
The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.
The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.
Please take this seriously. Even if I'm wrong, it's better to prepare for the worst than to assume everything will be fine and you'll be able to retrain into a new well-paid career.
I think that most advice like this is individual - not systemic. We all won't fit into the remaining fields when white collar work gets less demand, but someone who's just pivoting now still could. There's no systemic solution that will actually be implemented. The only advice left to give to people is to not be too late. There's only so many people that can be trained to do this range of work (has a physical component that is difficult to automate + can only be done here + has an education/certification moat) just based on spots in educational programs, and they'd probably be better off getting on that sooner than later if they think that their current job is going to be in the crosshairs soon.
50% of the workforce was in farming near the end of the 1800s; today it's 2%. 40% of the workforce was in manufacturing in the early-to-mid 1900s; today it's 8%. 60+% of the current workforce is white collar. What will it be in 20 years?
LLMs are only a couple of years old; we have no idea where this will go. Maybe it will be a big hallucination, maybe we are looking at the very early version of farm and manufacturing machines.
The ENIAC was larger than a person; we now have watches that are significantly more powerful. Maybe in the future, your Apple Watch will have more compute than several racks of H100s.
When they came for the farmers, no one else cared - everyone got cheap and bountiful food. When they came for the manufacturers, no one else cared - everyone got cheap and bountiful products. Now they are coming for the white collar workers, and their highly paid laptop lifestyles.
Who is left to care? The billionaires?
I work for a corporation that includes cleaning brands and I've got bad news...
But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.
Like a politician who's asked about this in a town hall, but thinks that "our plan is to do absolutely nothing" doesn't sound very appealing.
The last work-house closed in the 1930s.
That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.
Since the capitalists ultimately "won" that conflict, it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced, the benefit to society will be enough that those displaced will find productive work elsewhere.
But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.
I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.
That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.
We saw this pre-AI with Uber and DoorDash. I think as AI automation dies down and most companies are competing at a near-optimal level with the new tools, we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.
Github project work on the weekends? That's not possible for most people in their mature/family years (or shouldn't be necessary - what about living life??)
Almost half of U.S. employment is from small businesses (250 or fewer employees). That means there's a lot of entrepreneurship happening already. I have lots of family running their own small businesses (trades), and it's a lot of work, and doesn't necessarily pay as well as a cushy corporate job, but what I'm trying to say is lots of people can and do start their own enterprise.
Yes, lots of them will fail at running their own business, but it's not like corporate jobs are getting any safer either.
Oh, you simply decide to use grit and willpower to pull yourself up by your bootstraps, placing some calls to people you met at certain parties aided by a small 6 digit loan from your family. /s
I see "employees should be more entrepreneurial" as a kind of victim blaming, and I'm especially cynical if the concept arrives via groups that spent the last several decades putting up barriers to entry, drafting non-compete contracts, capturing regulators, and basically shutting out entrepreneurship.
Are they ideal working conditions? No, but it's better than nothing, you can set your own hours, and you can leave when the next opportunity comes.
Oh, yeah? Did the Uber drivers and door dashers accrue the surplus value?
My parents were both construction workers. There is an understanding that you cannot lift heavy objects forever. You stop lifting objects and move to being a foreman, a supervisor... and if you are uncomfortable learning to get others to do the work that you used to do yourself, you burn out your body entirely and the consequences are horrible.
This is factual reality, but it is also a parable that has been important for me to internalize about delegation in my own career. It is not irrelevant to AI use, but I don't think it slots onto it totally as neatly.
I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.
To me, it's really like an engineer who knows the docs and has a good memory rather than an infallible code generator.
I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.
Large software projects (I'm thinking google3) often have large amounts of both of those things, as they're always getting new developers joining.
Writing code results in a much better understanding of the code than reviewing it.
In fact, I would say that in large, complex codebases, developing the same understanding of what the code is doing might actually take longer than writing it from scratch would have.
But in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality, which means abandoning the whole idea of deterministic software in favor of just praying that all the prompts you put around those API requests will lead to the right result.
But that's not the case here
E.g.: How did Mark Zuckerberg make software five years ago?
He's as capable of opening up an editor as I am, but circumstance had offered him a different interface in terms of human resources. Instead of the editor, he interacts with those humans, who produced the software. This layer between him and the built systems is an abstraction, deterministic or not.
Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.
The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.
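To make the contrast concrete, here is a minimal Python sketch; median and summarize are illustrative names I'm making up, and call_llm is a hypothetical stand-in for any provider API. A conventional function carries an invariant you can state and test, while the LLM-backed one has no contract beyond "some string comes back":

    # A conventional function: its contract fits in your head and is testable.
    def median(xs: list[float]) -> float:
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    # Invariant: min(xs) <= median(xs) <= max(xs) for any non-empty xs.

    def call_llm(prompt: str) -> str:
        return "..."  # hypothetical stand-in for a real provider API call

    # An LLM-backed "function": no invariant you can state, let alone check.
    def summarize(text: str) -> str:
        return call_llm("Summarize in one sentence: " + text)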
[1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.
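For what it's worth, some providers already expose knobs for this. A minimal sketch, assuming the OpenAI Python client, whose seed parameter is documented as best-effort only (model or backend changes can still alter output, so this is not a hard guarantee):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # temperature=0 plus a pinned seed makes repeated calls with the
        # same prompt return the same text in most cases (best-effort,
        # not a guarantee from the provider)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            seed=1234,
        )
        return resp.choices[0].message.content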
Everyone added /dev/random to their offerings, so all the LLM tools for coding are non-deterministic.
At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, whether they understand requirements, and whether they're a pleasure to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not that many people are willing to push boundaries and deliver great work. That is what makes the difference.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
The interesting thing to me is that software engineering will have to evolve. Processes and tools will have to evolve, as they have evolved through the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous language, but also formal methods such as the Z language for formal verification, and ABC or cyclomatic complexity measurements of software complexity.
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would still be writing assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more engineering (real engineering) rigor in the software engineering profession, even if there will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico at least).
I think the right analogy to calculators and CAD tools is the IDE with IntelliSense for SWE: instead of typing code one char at a time, we can tab to automate some part of it.
But I agree with your consensus -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit to retirement.
It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic output so far because it tends to work slightly better, even though it is completely possible to ask for a temperature=0 deterministic answer.
With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.
Imagine two competing companies. In 2026, both decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends", because it always is, but I would say the winning space for Company 1 is substantially smaller than for Company 2. They need a very precise combination of market circumstances. Not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
While the acceleration is occurring and we haven't settled into the new reality yet, the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market going with Company 2's approach to force the rest of the industry to follow suit to compete properly.
The value generated by one programmer that can possibly be captured by that programmer's salary is probably not going down in the medium or long term either.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
You might need to relocate to a place with much lower costs of living.
This was the idea behind remote working discussed during COVID-19 times:
- the company can pay less money because the employee is living at a much cheaper place than the expensive city where the company is located
- on the other hand, even with a smaller salary, the employee has more money at the end of the month because of the smaller costs of living
So both sides win.
This will change for the better if more and more educated people relocate there.
We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.
s/creating software/typing correspondence/
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.
If we truly need to sacrifice our skills to be productive by using LLMs that atrophy us, then the only devs who have a limited lifespan are us. The next ones won't have a skillset to atrophy, since they won't have built it through manual work.
Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy, it's getting old to reargue the same idea time after time.
For(){} is normally either undefined or has a specific meaning. "Then iterate and do x" might mean many subtly different things.
Most programmers never deal with a compiler bug in their whole career, and can dismiss the possibility. For LLMs it would be hard to even define what a "compiler bug" would be since there is no specification for English.
Then there's the fact that models generally don't guarantee anything at all. Sonnet can change under your feet.
Models also degrade as the context window gets larger. Compilers handle one line just the same as 20.
I could keep going; there are so many fundamental differences in the process that the analogy only serves to provide a false sense of security.
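To illustrate the "then iterate and do x" point: here is a sketch, with hypothetical charge/ChargeError/orders names, of three programs a reasonable reader could take that sentence to mean. A for loop admits exactly one of these readings; the English admits all three:

    from concurrent.futures import ThreadPoolExecutor

    class ChargeError(Exception):
        pass

    def charge(order):  # hypothetical payment call
        print("charging", order)

    orders = ["ord-1", "ord-2", "ord-3"]

    # Reading 1: sequential, abort on the first failure.
    for order in orders:
        charge(order)

    # Reading 2: sequential, but skip orders that fail.
    for order in orders:
        try:
            charge(order)
        except ChargeError:
            continue

    # Reading 3: concurrent, completion order unspecified.
    with ThreadPoolExecutor() as pool:
        list(pool.map(charge, orders))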
But the GP stands.
LLM code generation: "Here is an intent/specification. Invent code that hopefully satisfies it."
Does the compiler analogy provide value under those terms? I don't think it does. In fact, I think it provides negative value.
We don't need to use tortured analogies to express excitement over these tools.
This sounds ageist - I'm around 40 and feel I am at my mental peak, compared to even my mid 20's. This isn't a good analogy at all, the brain doesn't "wear out" like a professional athletes' body does, it just changes its structure. The brain is a remarkable organ.
In other words, if you want to continue stubbornly typing out code by hand, the person right over there has already mastered agentic tooling and is doing vastly more than you, more quickly and with greater precision, and will simply be a more fit candidate to hire. Roles for this type of stubborn legacy personality will become fewer and fewer, and you will age out as part of the old school.
As I interview a lot of people for typical enterprise IT jobs, even at 20 years of experience they often don't seem to know much beyond what they learned in their first few years.
> After being laid off, a programmer becomes a welder. One day while working, he suddenly muttered to himself, "It's been so long, I've even forgotten how to solve three sum". A coworker next to him quietly replied, "Two pointers".
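For anyone whose own instincts have rusted, a minimal sketch of the trick being whispered: sort, fix one element as an anchor, then walk two pointers inward:

    def three_sum_zero(nums):
        # Sort, fix an anchor, then move two pointers toward each other.
        nums = sorted(nums)
        out = []
        for i in range(len(nums) - 2):
            if i > 0 and nums[i] == nums[i - 1]:
                continue  # skip duplicate anchors
            lo, hi = i + 1, len(nums) - 1
            while lo < hi:
                s = nums[i] + nums[lo] + nums[hi]
                if s < 0:
                    lo += 1   # total too small: move left pointer right
                elif s > 0:
                    hi -= 1   # total too large: move right pointer left
                else:
                    out.append((nums[i], nums[lo], nums[hi]))
                    lo += 1
                    while lo < hi and nums[lo] == nums[lo - 1]:
                        lo += 1  # skip duplicate second elements
        return out

    print(three_sum_zero([-1, 0, 1, 2, -1, -4]))  # [(-1, -1, 2), (-1, 0, 1)]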
Hand-coding -> llms/agents
Sometimes the only thing that can fit into a tricky spot is a screwdriver. The power drill didn't make screwdrivers obsolete, it just made them less necessary day-to-day.
Same thing here. LLMs are power tools, but sometimes, the only thing that can fit into a "tricky spot" with code/systems is knowing how to do it by hand.
I think that is significantly overlooked when people ask "where are the 50+ engineers?".
New grads are being slammed, "because LLMs can do that work."
No new folks, no managers, and no olds. What a delightful career we've chosen for ourselves.
It's a lot easier to be early than to be smart or quick.
I'm not discounting ageism in the industry, but how popular of a career was it 30+ years ago compared to now?
I've had to change course several times in my career (graduated in 2004). UNIX admin and later network admin, DevOps, and now I'm doing a mixture of DevOps and development (despite not being a full time developer in my entire career, being able to use AI to plug into code and fix/enhance things like monitoring, leveraging cloud APIs, etc has been a game changer for me).
Right now, as somebody in their mid 40s, I'm seeing AI as a productivity amplifier. I am able to take my experience and steer and/or fight opus into doing what's needed and am able to recognize if it looks right.
I'm so glad I'm not fresh out of school in this environment, though people said the same thing when I graduated in the Dotcom bust...but being ready and eager to do groundwork was a door opener. Finding that first door to open was tough, though.
Year after year it was just more and more new people joining as things got easier and more accessible.
Now you see 40- or 50-year-olds few and far between, where most guys I see are in their 30s. The ones that are 60 are diluted in the sea of new entrants.
Ageism didn't come from the top; it just happened with the flood of young employees. There's a social dynamic where you might get a 40-year-old who isn't a manager getting along with a bunch of 25-year-olds, but that's going to be the exception, not the rule.
People need to learn the difference between fluid intelligence and crystallized intelligence.
People need to hear that startup success is maximal when the founders are older, not younger. VCs chasing youth are statistics deniers.
> If AI does turn out to make you dumber, why can’t we just keep writing code by hand? You can! You just might not be able to earn a salary doing so, for the same reason that there aren’t many jobs out there for carpenters who refuse to use power tools.
The argument the piece makes is that being a software engineer who insists on writing code by hand may no longer be a lifetime career.
I think the definition of "software engineer" is changing, and it's not even changing that much. We construct software to help solve human problems. We can keep on doing that, just now we get to do it more.
Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.
Institutional and personal knowledge seem similar, but the implications of each are radically different.
But if it's a diversity of things (that use or leverage software development) then you probably have a lifetime career ahead of you.
I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.
This requires having an understanding of a business domain, economics, human psychology and technology.
The competitive aspect of it means that you need to understand these things better than most people and machines. If you don't, then your skills have no value on the market. Will generalist AI trained on public data ever understand these things better than software engineers across every possible niche?
I don't think so, because that knowledge is usually gate-kept. Nowadays, new engineers almost have to beg to be given access to knowledge of company systems. It takes at least 6 months for a skilled engineer to ramp up on large systems... and it's mostly because of institutional resistance.
The thing is, it doesn't even require people to be withholding information... Some engineers will happily share everything they know about internal systems... But in a big company, you first have to identify this person. That can take a while... Then you need to identify other people who will give you other information relevant to your specific tasks/integrations.
That's a part of it, but only a small part. They don't get good at the thing mainly by doing the thing. They get good at it by training to do the thing.
An NFL football player does a ton of things other than playing in games. They have practice scrimmages. They do drills like throwing, catching, running patterns, tackling, reading quarterbacks, stripping balls, picking up fumbles, etc. They work with coaches on their technique. They watch film. They spend many hours in the gym and on the track building their strength, speed, cardio, and stamina.
Yes, it's true that your software skills will atrophy if you don't use them. But that doesn't mean your skills have to get worse and worse causing you to eventually quit the job. It means you need to set aside time to maintain your skills. It may no longer happen automatically as a side effect of your work, but it can happen intentionally instead.
Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.
I've long regarded myself as more a master craftsman than an engineer, and I've had the pleasure of working on one-of-a-kind or first-of-a-kind things. Perhaps fortunately I'm near retirement. But I genuinely enjoy the coding: it's how I engage with the problem and learn to understand it. It's also how I ensure that I'll be able to read the code and find things in the code base when I come back to it years later. Last thing I want to do is spend my days overseeing someone (or something) else's code. If I wanted to be a manager of programmers I could have done that years ago.
If you do that then... you're likely very replaceable.
I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the opposite. It's already happening: why do companies ask their employees to write complete documentation for AI to consume? This is what I call "adaptation".
I can also imagine that in the near future, when employment plummets, when basic income becomes general, when governments build massive condos for social housing, everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a general AI; that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.
Everyone knows that AI-written slop isn't worth actually reading. So when reading mass media content we skim over each paragraph's opening phrases rather than read it deliberately, sentence by sentence. We also do this while writing notes, dropping determiners, acronymming common phrases, and making references to characters/scenes in popular media. Now with the rise of vocal interfaces and ever shorter rounds of engagement, all this abbreviating will only exponentiate.
Over the past two decades, there have been a lot of solved problems, like building boring scalable web apps, UX design, etc., and AI is fairly good at these, enough so that good prompting can get you very far. This shouldn't be a surprise; there's a lot of publicly available data for this (GitHub repos etc.).
On the other hand, there are rarer computer science problems like designing efficient datacenters, GPUs, and DL models. Think about problems that someone of Jeff Dean's or James Hamilton's (AWS SVP) ability, or a skilled computer architecture researcher like David Patterson, would solve. These are incredibly hard and rare problems, and AI hasn't been able to make much progress in these areas. That's true for other sciences as well.
If you’re a regular Joe like me who builds boring CRUD apps, AI is coming for you.
What I mean is if you are working on incredibly hard and rare problems that require rare skills and also those problems don’t have publicly available data that LLMs can be trained on, you’re safe from being “automated” away. If not, you must plan accordingly. Also if you’re a skilled manager (in any field) AI cannot replace you, highly skilled managers that can get the best out of their teams have rare skills that aren’t easily replicable even amongst humans much less AI. Although, if going forward we need fewer developers we will need fewer managers too.
"We may be in the first generation of software engineers in the same position. If so, it’s probably a good idea to plan accordingly."
He compares software engineers to pro athletes. What does it mean to plan accordingly? Start working with the mob to fix poker games? I don't know what "plan accordingly" means at all, but it is a thought-provoking statement.
'If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don't say that being a good construction worker means not lifting heavy objects. They say "too bad, that's the job".'
On another note, for sure software developers are saying things like "this is the part of the job that I like" or "if you aren't doing the work, you won't be good at the work." But other people are saying this, too. I just saw an episode of "Hacks" ("QuickScribbl") where the writers say pretty much this exact thing when confronted with AI tooling designed to "make their job easier". Is writing comedy also like lifting heavy objects at a construction site?
And read Programming as Theory Building already, it's not that long
Maybe you want a React app and using Redux for state would be best for the specific case, but the AI doesn't recommend it and you don't know; then you are missing out and can end up with something suboptimal. This was just an example.
Not AI, offshoring combined with downsizing of US based engineering orgs.
Corp America has finally figured it out after 2 decades of entitled developers turning 2-day tasks into 2-week tasks in the name of "best practices", "architecture", "Doing It Right!" etc., all while commanding high salaries.
It turns out that Good Enough is in fact good enough, and the people who write the checks are onto it. Even if it's not quite good enough, cheap offshore resources can just be sent back to make it work. A US-based staff of 5 people who can be held responsible for guiding a much larger offshore group seems to be the common pattern.
All of this was imparted to me by a CIO in a recent interview with a financially strong mid-sized company in the eastern US. The developers I interviewed with were EXCEPTIONALLY COMFORTABLE and displayed zero signs of any kind of stress from maintaining their literally 20-years-out-of-date infra. It was insinuated that the team I interviewed with "probably won't look the same in 6 months" too.
> professional athletes & construction workers work in physical fields, which means there are physical limits to what they experience, both in terms of what they do & what their bodies can do.
> software engineering is an art & an engineering discipline, which means as long as you're of sound mind you can do it till you die of old age, or even if, say, you go blind, because your ability to refine / taste is not dependent on your physical capabilities.
> LLMs one-shotting things is not engineering, because engineering is about compromising within constraints & using rules of thumb. So if you have no constraints you are not engineering.
> (2) AI-users thus become less effective engineers over time, as their technical skills atrophy
Wouldn't (2) imply that if everyone just used AI there eventually would come a time when there aren't engineers who will outcompete you (because their skills are so atrophied)?
That's worked out pretty great so far!
Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.
The days of 'lifetime career' had already gone for most people, way before AI arrived.
That statement is evidence enough.
Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.
Can we stop upvoting vibe-published articles? The arguments are flawed and don't even make sense to anyone who does software.
Yes, the blog is mostly about AI, and yes, he publishes very frequently. But his articles don't read like AI and he claims not to have used it in his writing (https://www.seangoedecke.com/avoid-ai-writing/). And regardless of how you feel about the content, the community has clearly decided it's worthwhile as a discussion point.
Because people want to discuss the topic of the headline.
I love thinking about what kind of assembly the compiler may generate (though honestly, I rarely get the chance). I love thinking about how languages should be more dynamic. Who's got actually-first-class functions? Like, ones that you can build, compose, combine and manipulate to the same degree you can a string or a JSON object (no, LISP, you're cheating; close, but no points).
And yet... I don't care that much. Not because I'm late in my career (I'm 40, there's still some years left in me), but because I want to make computers do things, and what I enjoy is thinking up ways the things can happen, and sometimes the particulars that matter when making a lot of different things happen in a coherent system. And yeah, LLMs are trained on people's output, and from what I'm seeing everywhere, people are overall fairly terrible at that, and most of the plumbing-type glue being written is not worth anyone's time.
And I'm not saying I don't care because LLMs can't do my job (heck, even after hours of back-and-forth spec building and refining every little nook and cranny, even after it's beautifully explained, proven even, by reasoning and example alone, the stupid coding agent still cheats or gets it wrong; as soon as the plan is put into motion, it'll mess it up on some scale so fundamental I should just have done it myself). And I hope that changes. I hope that I won't have to go into such detail. I hope to become a steward of taste rather than a code reviewer. I hope that I will eventually not be needed for that anymore. I want it to replace me, so I can move to telling it what I want, and have it made that way.
I hope I won't need to steward good taste, and that nobody will. I hope the applications I use in 5 years will be a collection of one-offs and gradually improving tools that were written _just_ for me, for my way of working and my way of thinking. I want to prompt the damn program to change itself as I discover new ways to do things, until it can eventually figure out how to automate the last bit of my task away. And then I'll go do something else exciting.
I'm greatly anticipating the next Great Leap Forward™ with a publicly available Mythos or other new paradigm I can't currently imagine
But at the moment, agentic coding has made me busier than ever before, while it's the Product Managers, UX, QA, Data Scientists and DevOps who have disappeared from the teams I'm on, across multiple organizations, and I have to do all their work, plus make dashboards that I didn't have to make before.
All the projects that would have been cancelled by Q3 are being attempted in Q1, which means more work.
I'm looking at proper engineering in building local LLM networks, with proper firewalls, capability access, and guards around the LLM systems, to allow and enable advanced use without "lol, delete everything" just happening.
When there's a land grab, move to selling tools and know-how: work in maintaining the tools and in their proper operation and maintenance.
I also look at upsells like local LLMs as a reason to do this in-house, so that companies aren't liable for rug pulls, violation (consumption) of trade secrets, or breaches of confidential discussions.
And LLMs aren't good at recommending tech stacks for running them. The field is moving faster than most training data sets.
If your crowning achievement is: "I can 100% all leetcode hards" I have bad news for you.
While most developers were busy grinding, the corporations did their utmost to ensure that the only sensible pathway to wealth, running your own business, is closed. In many countries, due to regulatory capture enacted by corrupt governments, making a profit is next to impossible, and that's if you manage to jump bureaucratic hurdles that don't exist for larger corporations.
AI is just a tool. Asking whether AI will replace the software engineer is like asking whether the hammer will replace the carpenter.
I don't know, maybe in your part of the world, but where I'm from we have a series of robust worker protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without that, we'd prefer it.
In this specific case we do have techniques to build software without causing damage, so why change that?
This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.
Like many people, I've been sad about the loss of a career I spent years developing skills in, and I'm 55 now and won't be quickly retraining for another high-paying career. Fortunately I do have other skills I developed earlier in life, and low needs, so I will probably limp by fine, but it's still a painful adjustment.
Point being, you could always write code as an older person. Well, back in the old days when we wrote code anyway.
More AI Soothsaying. Not so hard on the Inevitabilism this time.
Less "pure" programming, but lots more programming in general.
It's a tool for knowledge work.
No carpenter is a specialist in drills.
It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.
I’m kind of confused how you might think it wasn’t. Going through a career as a software dev until retirement was very common.
Software engineers didn’t just disappear after age 40.
At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you didn't have a very successful company by the time you were 30 or 40 (and were thus basically set for life), you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).
So,
> Software engineers didn’t just disappear after age 40.
is rather a very recent phenomenon.
That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.
> Software engineers didn’t just disappear after age 40.
>> is rather a very recent phenomenon.
Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.
This wasn't common anywhere except for maybe the Silicon Valley bubble.
The rest of the US, and even the world, could see that not having a very successful company of your own is not equal to being a failure.
> This wasn't common anywhere except for maybe the Silicon Valley bubble.
This was a very common sentiment even in Germany at this time.
That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. There has long been enough work for generalists that it has been a very reasonable career.
IMO AI is the first thing that has ever actually challenged that.
I think there are trades where tool specialists (or process specialists, if I may extend the analogy) exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers, etc.) but also tool-specialist plumbers and companies. For example, drain-clearing equipment, or certain kinds of pipe for handling chemicals other than water, are very specialised, and there are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.
IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.
Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.
Imagine a McGuffin tool for attaching any two materials together, but which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega-disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.
If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.
And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence less dedicated software engineers).
There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.
Going down the path of trying to define what counts as an expert function and what is "merely" a tool, using anything but descriptive technique, is nonsense.
Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.
If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or a worse position, talking to dumb humans on chat all day, answering tickets about the same topics, and linking the same docs over and over. This makes no sense.
Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.
If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?
Other professions do too, whether it's healthcare, etc.
Software, being a new field, didn't really become a standardized profession in the way engineering did.
The goalposts are moving because the standards are moving, because the capabilities are moving.
Remaining a self-directed learner will remain critical.
Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.
If it doesn't find anything it says I didn't find anything.
No it can't.
AI knows nothing about software engineering, all it can do is generate code.
We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative.
This is sarcasm, but it's probably also going to get sold as a feature at some point.