I built significant pieces of the Copilot onboarding, purchasing, billing and settings flow. For eight months I headed up the Copilot anti-abuse effort. I then led the launch of GitHub Models, and am now working on other Copilot projects.
Tay.ai, Zoe, and Copilot: bots being deployed and wrecking the platform, unable to fix infrastructure issues, while the humans are just tweaking all the tiniest things.
They should instead focus on GitHub Actions and on improving the uptime of the whole platform before doing anything AI-related.
If so, why didn’t you personally fix them so that nobody could associate you as an individual with a broken CX?
If not, please let me know where to apply, because that sounds like a unicorn organization.
Humans in large groups do amazing and crappy things at the same time. Playing gotcha with someone’s resume is a shitty thing to do.
I’ve thought about psychology. I know LLMs can work as pseudo-therapists but I feel like that’s a field where the human connection / human element will remain important.
On one hand, some jobs with a human element are safe, at first. Think of artists being made obsolete by the camera. Portrait artists became mostly obsolete, but we still pay for art; it's the story behind the art that became important. Or: I still go to cafes with a nice atmosphere and friendly staff. There are restaurants with robot staff here in Japan, much cheaper. After the meal you pay at the table without ever talking to a person. But it does not feel nice to sit in there, so I gladly pay a premium for the nice coffee.
On the other hand, it is not only software jobs in danger, but all office jobs. So a lot of people may suddenly be out of money. Let's say you open a cafe, but no one has money to come and pay. Society has to change a lot from the current model to be able to handle this.
I think applying AI to other white-collar roles that also require problem solving but don't have as much training data will prove much more difficult. Even coding in domains dominated by proprietary code is a much, much worse experience than people have with more accessible code. Using it for electronics has been hit or miss, embedded software is a bit shaky, game development is also challenging to use it for, etc.
I honestly figured that’s why everyone is coming out with MS Office plugins for all the models, and MS itself is putting it in the tools.
So if most any company only needs one person to solve limited IT issues, prompt code production and deployment, generate the usual truckloads of Excel spreadsheets, and do most of the finance and accounting… it starts to look pretty scary.
Then, what about the people making and maintaining all the facilities for these people we don't need anymore? The world flipped its lid about commercial real estate when WFH became a thing, and that was relatively small and temporary.
And all the small businesses like local restaurants and coffee shops that they frequent, etc, etc.
There are so many second-order contagion effects if the knowledge-work economy implodes that very few people won't be negatively impacted to some significant degree.
And some people seem to think that outcome means the government will step in and engineer some sort of soft landing. And outside of the US this may very well be true, but here in the US? Seems unlikely.
LLMs suit some jobs more than others. It's quite possible SWEs are the only profession massively affected - whether that means an evolution of the role or its decline/death is another question.
I could have said the same thing about software engineers I know as recently as the middle of last year; things can change very quickly.
Up until about December 2025, the fact that LLMs would replace us all (SWEs) was the punchline to a joke for most working developers I know. But most of the ones I know aren't laughing anymore, unless it's a nervous laugh.
LLMs may (likely will) disrupt software developers first, but I don't think we are particularly unique and I don't see any reason why the same risks won't spread to virtually all knowledge work, especially if executives in those fields see a significant amount of SWEs being replaced by LLMs as an initial test case.
We'll see.
They may never reach that point.
But even if they never get good enough to replace all software developers, they can still cause massive job losses by allowing companies to do the same work with far fewer developers.
Unfortunately, the workflow of a software engineer has been to ask questions on Stack Overflow, to use digital resources scattered across the web, and to share examples of code freely across the web.
The workflow of an accountant, portfolio manager, etc. has nothing to do with accessing and using the web in the same manner. If you did their jobs you'd know this, but you don't. Right?
Is it really a surprise? Nope. Thankfully writing code isn't enough. So your job is still somewhat safe for now.
Professional accreditation and responsibility are its only real moat. And those are "yeah, but!" issues we hand-wave away in discussions around SWE too.
Otherwise those are more vulnerable.
Most importantly there's often a period of general uncertainty and adoption, during which the new law is already in force, but LLMs will rely on whatever there was previously.
Most people find this job stressful and boring, but the same can be said about software engineering. Regular people pay money to have it dealt with.
Overall I think there will always be demand for handling the messiness of the real world, and humans have the upper hand here because they learn as they go, not via release cycles costing a sizeable sum and taking months.
Seriously, if that future manifests, all of these standard effort-based jobs would become redundant...
The issue with outdated information is way overstated; you'd just add the current rules to the context when evaluating and be done with it. We're already at 1-million-token context sizes... That's enough for a lot of rules, and the number will likely go higher as time progresses.
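To make that concrete, here's a minimal Python sketch of the approach; the file name, prompt wording, and helper are all invented for illustration, not any particular vendor's API:

    # Hypothetical sketch of "put the current rules in the context".
    # The rules file and prompt format are made up for this example.
    def build_prompt(question: str, rules_path: str = "current_rules.txt") -> str:
        with open(rules_path) as f:
            current_rules = f.read()  # today's rules, not training-era ones
        return (
            "Answer using ONLY the rules provided below, ignoring any "
            "older versions of them you may have seen during training.\n\n"
            f"--- CURRENT RULES ---\n{current_rules}\n\n"
            f"--- QUESTION ---\n{question}"
        )

Whether the model reliably prefers the in-context rules over its training data is the real question, but mechanically this is all "adding the current rules to the context" amounts to.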
A condo costs $2500/month so I will either be homeless and freeze to death or be euthanized.
Maybe I'm a contrarian but I don't think there's hope for anyone that doesn't control resources.
Best choice would be moving up north and slaving away in a mineral mine along with everyone else who lost their jobs. Like the 1920s.
I don't see myself being qualified for such a role since I am too short and don't have the physical leverage.
It was manual labour first. Then there were tractors. Now robots join in - does that mean that personnel cutting grass are obsolete? No, you need all of them. It means the city becomes nicer.
With software and AI I somehow feel the same will happen. How many features have you skipped just because they would only help some niche set of users and the PM or management would not approve the spending? They were low priority. Or bugs that were annoying but financially not worth much?
I hope that, with some work switching to AI, some companies will capture the opportunity to make software better while others will make the same software cheaper.
I think many of us who have been in software for a while will fantasize about low-tech jobs, I imagine there will be a bunch of hobby farms...
One thing I will add: while AI is getting really good at _doing_ the software building bits, I haven't yet seen it well integrated into the decision-making and political structure of organizations. Right now, it seems best in the hands of a high-agency individual empowered and able to make changes or 'ship' something, with them acting as the bridge.
This, of course, is not a technical challenge, but I would expect the restructuring of organizations needed to make this more efficient to be slower than the pace of improvement we've seen over the last few years.
It conflates purpose with outcome. I don't accept the premise that software's purpose is to "automate away other jobs". That may be an outcome, but software's purpose is to enable completely new possibilities. Think bicycles of the mind.
The Apollo guidance computer didn't replace astronauts. It made the missions possible in the first place because no human can continually correct the spaceship trajectory every 250ms for 10 days on end.
We're not receiving some cosmic karma. We're more like cotton pickers after the cotton gin. Someone still has to plow and plant.
Due to a text predictor?
I'm a daily user of the most recent Claude, and while it's amazing at presenting other people's knowledge and reducing cognitive load by filling in the gaps, it's still just a machine that predicts text. That is a limitation which won't be overcome in this generation of such tools, which, counting research demonstrations, are close to a decade old already.
To me the main issue is that investors are not aware of these limitations and will keep pouring money into this way beyond everyone's breaking point. But really that's a failing of the world's economic system, which relies too much on their whims.
> I'm a daily user of the most recent Claude and while it's amazing at presenting other people's knowledge and reducing cognitive load.
'Presenting other people's knowledge' is enough to get the job done when that knowledge encompasses the entire internet.
My experience is that it's really darn good at producing text, but it's not a logic engine - it's not designed to be one and even the most recent versions make mistakes which indicate it's not actually thinking.
It is, by definition, design, and architecture, a system that produces believable text.
Here's a task to give it which pulls the veil right off:
Ask it to add tests to a piece of code where code coverage is already 100% but the tests don't actually exercise 100% of the functionality. You'll start seeing nonsense sooner or later.
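For the skeptical, here's a tiny Python illustration of the kind of gap that task exposes; the function and test are invented for the example:

    # One test executes every line of `clamp`, so line (and branch)
    # coverage reads 100%...
    def clamp(value: int, low: int, high: int) -> int:
        if value < low:
            return low
        if value > high:
            return high
        return value

    def test_clamp():
        assert clamp(-1, 0, 10) == 0   # hits the `value < low` branch
        assert clamp(11, 0, 10) == 10  # hits the `value > high` branch
        assert clamp(5, 0, 10) == 5    # hits the fall-through
        # ...yet nothing pins down what clamp(5, 10, 0) should do when
        # low > high, so 100% coverage != 100% of behavior tested.

The interesting tests cover behavior that no coverage metric flags as missing, and that's exactly where the veil tends to come off.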
The “just a text predictor” framing was fair a couple years ago but hasn’t kept up. Current models can genuinely identify untested edge cases even when coverage is 100%. You're definitely using the latest and greatest models?
The architecture started as next-token prediction, sure, and yes, human judgment is still required, but that judgment is being captured and integrated too. Every time millions of people use these models, their feedback feeds the next round of improvements.
Also, these models don’t need to replace your best engineers to be disruptive. They just need to outcompete the bottom of the bell curve. For a lot of junior-level work, we’re already getting close.
Claude 4.6 Opus high, specifically.
As for human brains: every self-respecting Neural Networks 101 course is prefaced with "don't draw analogies to the human brain." And for good reason: natural neural networks are fundamentally way more complex at every scale.
Also, the brain indeed predicts, but it also verifies and learns from its predictions. LLMs don't do that - not in real time, at least.
The way to survive it was to 1) move to a village/small town where you could have a garden for fruits, vegetables, corn, chickens, maybe a pig or two for winter; and 2) have young people live with their parents while the parents saved up to build or buy their children their own flat or house. Children whose parents saved up enough would often start a family after getting their own place. Those who didn't would co-live with their parents.
Secure middle-class corporate employment in the US is getting severely downgraded by AI. While there is talk of universal basic income, the reality is that many, many companies depend on the surplus that middle-class families enjoy spending, and without it, vast swaths of industries will get starved as well. The solution is to show people quickly how to hunt and gather and farm as makers instead of just employees: figuring out what is needed, taking on a small corner of needs somewhere in meatspace or online, and planting there. AI has been fantastic at helping even solo founders with that. They need to encourage a cornucopia of ideas and experimenters as early as K-12. They need to set up more favorable conditions for handling the admin side as well.
If the US government does not encourage a cornucopia of AI-powered small-business entrepreneurship and lets monopolies squash it early, they will end up with far, FAR worse conditions. Any monopoly that keeps pitching "universal basic income" while actively avoiding paying taxes will end up forced into more taxes.
Big tech needs to make room for people to build and grow businesses (looking at you, Apple, for copying every successful app with a native one, and you, Meta, for eating every social competitor), or they will end up paying for everyone's universal basic income and then some.
If this country wants to survive the AI era, it needs to take off the rose-tinted glasses of the "secure corporate job" and teach people how to plant, hunt, and gather as independent players in the market, really fast.
I like the architecture analogy. An architect is not really focused on doing the actual building of a design, but understanding what's possible, what tools and techniques and materials are available, and figuring out how to put the pieces together to make a thing.
Right now, you're the architect who designs a house, but you're also the cement mixer, framer, drywall installer, plumber, electrician, and so on, all at once.
In this analogy, it's hard to design anything big like a skyscraper if you're bogged down by all of the minutiae, like picking out what type of nails to use for the framing and then installing them.
I think going forward, AI is going to do a lot of the non-architecture part of software engineering. We will all become architects.
The difference between us will be the scope of what we're qualified to design as we go through our careers - new grads will cut their teeth on the likes of designing shacks, principals design skyscrapers.
I think this also unfortunately means there are gonna be a lot fewer people in software. The industry will still exist, but it's going to look way different.
I look forward to this settling out; the uncertainty sucks.
If I were going to college tomorrow, I wouldn't touch a CS program with a ten-foot pole until all of this settles out, though.
Maybe companies haven't seen it yet, but most office jobs can and will be eliminated in the next decade or two.
The wheel of industry rolls forward and crushes everyone underneath it.
AI-based job displacement will do wonders for raising class consciousness when it's too late.
the best time to unionize is when you don't need a union.
Unions and indeed any bargaining organization only have leverage when their people are needed, but what happens when the people themselves are needed no longer?
Replacing workers will be used to increase profit margins rather than lower prices, because there's no competition to force prices down, thanks to monopolistic consolidation.
You can't raise class awareness in other professions that have been undergoing job displacement for decades. Good luck trying to do it among software engineers, where self-worth rides high and empathy is nonexistent. They will still be arguing on HN that unionization is a bad idea.
EDIT: also, the so-called "breaking your back" has the same effect as going to the gym. Sure, I am aware that there are really back-breaking jobs and they should be helped by machines. But there's no rule that says the helping machines need to do all of the work. A moderate amount of physical work is just beneficial to everyone.
> also so-called "breaking your back" has the same effect as going to the gym
I can tell you've never worked a manual labor job in your life. The workload is definitely not the same as just "going to the gym."
Most IC6/7+ would not code anyway - in fact, a friend of mine said "we had our own agents, we just called them IC4/5" - which was ironic but funny.
I am curious whether we would ever get a new programming language like Rust or Go without this creativity.
In a way, we have different products that do more or less the same things (Postgres vs. MySQL, for example). The reason is that there's a difference of thought in the process. I doubt this will go away.
This is what bugs me the most. Those who are now at IC6/7 rose through the daily grind of coding and debugging from L1. But now that those jobs are getting automated, how will anyone rise to IC6? It's as if the first 10 rungs of a ladder are missing and only someone with exceptionally good athleticism can jump up and start from the 11th rung.
I think in the coming decades we will see the IKEA effect in woodworking. It's extremely easy to build cheap furniture whose individual parts are really just compressed paper. There are hardly any good carpenters left to do real wood carpentry. Those who are left will cost a bomb (rightly so) and can only be afforded by rich people.
However, I am not quite so defeated. I think developers will continue to find employment in tech even as AI augments their roles. Experienced developers are the obvious hire to run agentic AI development tools, and even the obvious pick for managing a no-code endgame scenario, as they are just smart technologists with strong problem-solving skills.
I think the devs who were only here for the paycheck, and would not reasonably have picked software if it didn't pay so much, will probably be happy to retrain into something else but be disappointed by the pay cut.
I am also excited by the prospect of being able to take on bigger scale side projects solo as that's really where my passion lies.
I think general purpose technologists will really excel in this new ecosystem as the industry will be back to moving fast and breaking stuff for a while, for better or worse. A lot of them call themselves programmers right now but will evolve pretty quickly.
Pragmatism, small teams and fast pace will best deliver software based projects, and the bottleneck in big orgs will become (or already was) the bureaucracy and communication layers. Small team, greenfield projects have a huge advantage in getting an MVP to market, which is pretty exciting for someone excited mostly by solving problems with technology.
Time will tell though, this is not career advice and times are chaotic. At the end of the day, there are other careers, and you were smart enough to get into software. You will be smart enough to find a new career.
But I don't think the demand will ever be zero, or that laypersons will ever write (useful) software using AI, because most people do not understand what software is, what it does, what it can do, where to start, what to ask, what is data, what is input vs. output, etc. They are incredibly clueless, and it's not a problem of intelligence. Some of the most clever people I know have no idea about this. (Maybe they don't care enough to understand, or maybe it's a mindset that you either have or don't have, IDK.)
I just don't see how we could do without people to think things through.
this could be one of the silver linings to AI disrupting the industry. tech was better for the world when it was run by nerds that were in it for the love of the game.
Software validation was always the interesting part vs software verification. Validation asks the question, did you build what was actually necessary?
Maybe I've just had bad luck, but over the last two decades I've only worked in places where at least 80% (probably closer to 95%) of coworkers in development-related areas had negative productivity - making software more complex, brittle, and abstract than necessary. With AI assistants the same people can be even more "productive", and gatekeeping is mostly a lost cause.
I hope to find a decent alternative in those few years and never go back to software :)
The job is changing, and I don't like it in many ways, but there we go. It's not the first time new tech has nuked my dev job and I had to change.
I have personal projects that I hand-code, and personal projects I hand to Claude. Depends on how boring the project is. If it's stuff I've already solved a bunch of times, I hand it off. If I have room for good learning, I code it myself.
Progress is slower than people seem to think. Of course, AI as a field is more than half a century old.
But on the other hand, AI had a period of rapid acceleration 40-ish years ago and was then hit by an AI winter. We might hit that winter again in a year and all predictions made today are off the table.
I don't really feel like it's a "bad" thing; I've said for a long time if a job can be automated, then it should be automated. I still do believe that, even if I am probably on the losing end of that in the not-too-distant future.
I think I am reasonably good at software, and I think I write code that's still a bit better than what Claude does. In fact, I suspect that will remain true for quite a while, but the problem is that "writing code 20% better" isn't exactly a selling point when my competition is $100/month and takes about 1/20th the time. Most software, even before AI, wasn't optimal and was kind of shitty, and good engineers were always replaceable with shittier, cheaper ones if it was economically viable.
I tend to land on my feet for this stuff, so I still think I'll be ok; I know how to use the tools and there will still need to be some humans who understand how this shit works, so I'm not worried about becoming homeless or anything. What I'm mostly worried about is that I won't ever have fun at work anymore. I liked solving problems, I liked thinking of clever solutions to avoid a mutex or increase concurrency, I liked figuring out how to squeeze a few percent more performance out of my given limitations. It's something I'm good at, and it's basically the only way to get decent money while doing math.
Since the ceiling for writing software has been significantly lowered, I think eventually the cushy yuppie status of software is going to shrink.
Maybe I should learn to weld or something.
Most of the startups that get the attention are attempting to be the next big thing but a startup can just be a startup. It doesn't have to be big or glorious.
Someone who sells hot dogs (on a small scale) can't really hire a programmer, but if he could (or could write it himself) there would be plenty of software to write. You can make a nice interface with all of the sales statistics, inventory management, maps with competition and demographic data, work schedules, etc., etc. There is infinite complexity to even the simplest job.

You could hire help and have an app talk them through every step in great detail, with pictures, videos, and animations. You can encode all of the little tricks that would normally take decades to learn. Say, in a busy spot you might not have time to spend 8 minutes properly cleaning the grills every 47 minutes, but you could wipe down the glass every 4 minutes, clean part of the grills with alcohol every 11 minutes, then clean them properly every 3 hours. The app might instruct you to google location-related news or other topics to talk about with the customers. If people are walking their dog, they expect you to guess what breed it is and where it comes from, then ask how old it is.
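As a toy illustration, those busy-spot timers fit in a few lines of Python with the standard library; the intervals come from this comment, everything else is invented:

    import sched
    import time

    # A reminder loop for the grill-cart routine described above.
    scheduler = sched.scheduler(time.time, time.sleep)

    def remind(message, interval_s):
        print(f"[task] {message}")
        # Re-arm so the reminder repeats on its interval.
        scheduler.enter(interval_s, 1, remind, (message, interval_s))

    for message, minutes in [
        ("Wipe down the glass", 4),
        ("Clean part of the grills with alcohol", 11),
        ("Clean the grills properly", 180),
    ]:
        scheduler.enter(minutes * 60, 1, remind, (message, minutes * 60))

    scheduler.run()  # blocks, printing each reminder on schedule

The real app would of course add the pictures, videos, and animations, but the point stands: even a hot dog stand's software is bottomless.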
You might build a tech stack to help recognize their face, remember their name, what they ordered last time, and how long ago. You're not supposed to, but you know you want to.
You won't be coding for a glorious salary, but will earn depending on the sector you chose. The software will be pure dog food of the finest quality in the world.
Grilling hot dogs is also very relaxing; you can let the mind float a bit and have software ideas the way you should. Lots of bad ideas will come: I can show people pictures of themselves eating my hot dogs!
You can basically look at programming as the new literacy. You might want a fancy job writing letters for a nobleman but it is hardly the only application.
I wouldn't be so sure. They'll keep the people who can do what needs to be done with the new tools. Current title is irrelevant.
In addition, losing a 400k TC vs. 2x 200k TC makes more sense if they are all prooompters and AI handlers anyway.
On the other hand: unless we have a breakthrough hardware/physical innovation, GenAI on its current trajectory is neither energy-efficient nor cost-efficient compared to human/deterministic methods. It has shown no capacity so far to "create" in the sense we animals do.
And all that while still being highly subsidized (your subscription does NOT cover the costs of the service as of now; we are still in the market creation/capture phase), and without measurable economic benefit.
Things are still far from being over.
Don't ruminate on the future too much, folks; you won't die of hunger.
The "just" here is minimizing what has been the crux of the problem for the past ~5 years.
This technology has been capable of producing code all this time. The end result has been improving due to massive scaling efforts, and some relatively trivial engineering ("reasoning", "agents", etc.).
And yet reliability is still a massive problem. The tools still hallucinate, still lead the user in dead-end directions, and still do so confidently and randomly, without any discernible reason. Expert users are able to guide them to a certain extent, but whether the prompting incantation is done manually or via the trendy Markdown file of the week, it's all guesswork based on feelings and anecdata.
I'm personally not too worried about being replaced by these tools, even though my skillset is nothing remarkable. My opportunities might shrink, but this is a two-way street. Companies that use "AI" indiscriminately don't interest me either. The demand for quality human work and ingenuity will always exist, even within a sea of mediocrity.
I'm much more concerned with the societal impact of the mountains of shoddy software being produced, deployed into increasingly more critical infrastructure, and put into hands of incompetent and malicious people. There is very little thought and discussion on this topic, let alone any guardrails. "AI" companies are now attracting governments and advertisers, both full of malicious and incompetent people. The next decade is going to be interesting, that's for sure.
For example: "the software engineer role is about automating people away" -> often not true.
That just indicates a lack of rigor. Also, if so, who will make the AI automate people? God? People think poorly understood theory and gradient descent will produce God.