Employment in the 2020-2022 range was highly unusual due to COVID stimulus and the resulting unprecedented hiring. Tech companies were hiring anyone they could, and after a while juniors were the only way to feed the insatiable demand for more headcount.
Comparing to this time without taking that into account is going to be misleading.
This period was also a strange time for remote work. I’ve been remote since before then, but COVID era WFH felt like a turning point when bad behavior during remote work became normalized. That’s when we started having remote hires trying to work two jobs (and giving us half an effort / not getting their work done), and there was a rise of “quiet quitting” as a news media meme because everyone thought they could always just walk out and get a new job if they got fired for not working. We also weren’t doing juniors any favors by hiring them in high numbers without a sufficient ratio of seniors to mentor and lead them.
That also coincided with the rise of GitHub Copilot and ChatGPT. These tools were not great at the time, but if you were a junior who was over-hired into a company that didn’t have capacity to mentor you and you were working remote in the age when Reddit was promoting quiet quitting and overemployment on your feed every day, banging out PRs with GitHub Copilot for a couple hours a day and then going about your life for a $135K salary right out of college felt like you just hit the jackpot of historical confluences for work-life balance.
I saw this exact story play out at multiple companies that got burned on the idea of hiring juniors due to the risk. Combine that with the rapid improvement of the LLM tools, and the idea quickly became that you just hire seniors and treat the LLMs as the juniors, rather than paying another salary for someone to pilot Claude Code around. The seniors had to review the Claude Code output anyway, so why not cut out the middleman?
Then add the economic downturn and the chaos of whatever this administration is doing this month and now there are so many qualified seniors on the market that hiring juniors is hard to justify. This is the part that would have happened with or without AI.
All things considered, being down only 20% from the 2022 peak seems not that bad.
There is no reason people have to tolerate a technology that is destructive to society, any more than they have to tolerate companies selling fentanyl at 7/11.
The psychos who run the show don’t think like that. Many of them enjoy abusing other people.
They will wall themselves off with their robots with instructions to kill to control the masses.
Unless power is given to the people, urgently, through widespread (direct-)democratic reform.
The big question that never gets answered is: What regulation, specifically?
Any one country could come out and declare that AI can’t be used, or tax it at an exorbitant rate, or something along those lines, but what would happen? The AI usage would just move to another country.
If the US heavily regulated AI, China just runs away with it all. None of the calls for regulation I’ve seen have an answer for this, aside from the completely crazy calls to bomb data centers in other countries.
I agree. It will be an interesting debate to watch play out, because a) lots of end-users love using AI and will be loath to give it up, and b) advances in compute will almost certainly allow us to run current frontier models (or better) locally on our laptops and phones, which means that profits would no longer accrue to a few massive AI labs. It would also make regulating it a lot trickier, since kneecapping the AI labs would no longer effectively regulate the technology.
Yes but we’ve been hearing this for two years now and it’s not happening.
Even the silly AI2027 project was predicting society-destroying levels of AI arriving next year, and that has aged poorly.
If a large percentage of jobs have not been eliminated in two years' time, it will be because AI has largely failed to deliver on its boosters' predictions. In this eventuality, the venture capitalists who funded the rise of AI will lose their money.
What's the end game for these people?
All evidence to the contrary. Aside from the French occasionally burning down some cars, Western populations (me unfortunately included) have become remarkably relaxed about such things.
Even very extreme examples, like the government’s blatant refusal to investigate absolutely horrific stuff like Epstein, get at most some mildly upset TikTok reels.
Add some aggressive lobbying by big tech and perhaps a sprinkle of Palantir population monitoring, and I don’t think we’ll see a refusal to tolerate it at scale.
One of those is an annoyance, the other is full blown revolution territory.
Times change, the ladders you and I climbed to success may not be around in the same forms for our children. That's not new. But will there be any ladders to climb if the bottom rungs are all gone?
they're both a scam, just the "AI" influencer isn't a pretty woman
it's a 45 year old balding guy, with 25 accounts
When I was doing mentoring there were dozens of young people pursuing influencer goals.
Zero of them made it anywhere.
It’s not a safe career path unless you ignore the 99.99% of influencers who don’t get traction and only look at the couple who become famous.
This is quite false. It is trivial to generate UGC (user-generated content) using AI now, and the resulting short-form videos are virtually indistinguishable from the real thing.
Yes, electricians are definitely safer than those of us who work in front of a computer all day, but I don’t think AI is good for them either. First, more young people might try to become one, potentially crowding the sector. Second, if the rest of us are poorer we’ll also spend less on housing and other things that require an electrician.
> So what’s the mechanism at play? AI replaces codified knowledge
Many job postings peaked in 2022 due to the pandemic. The original paper tries to account for this but falls short in my opinion.
Original paper said[1]:
> One possibility is that our results are explained by a general slowdown in technology hiring from 2022 to 2023 as firms recovered from the COVID-19 Pandemic...
> Figure A12 shows employment changes by age and exposure quintile after excluding computer occupations...
> Figure A13 shows results when excluding firms in information technology or computer systems design...
> ... These results indicate that our findings are not specific to technology roles.
Excluding computer and IT jobs is not enough in my opinion. Look at all these other occupations which had peak hiring in 2022.
Nursing jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPNURS
Sales jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPSALE
Scientific research & development jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPSCREDE
Banking & finance jobs in the US: https://fred.stlouisfed.org/series/IHLIDXUSTPBAFI
[1] https://digitaleconomy.stanford.edu/app/uploads/2025/12/Cana...
I agree with this sentiment, but history shows that humans are absolutely terrible at planning for revolutionary systemic changes like this. Our current inability to address climate change in any systematic way is just the latest example. It seems to me that if and when human labor becomes superfluous it will most likely result in a lot of chaos before a new system emerges.
https://finance.yahoo.com/news/top-10-earners-drive-nearly-1...
If this is to be believed, regular consumer goods won't matter anymore, and instead you just cater to the wealthy.
https://www.robpanico.com/articles/display/the-answer-isnt-m...
it doesn't follow that all software engineers are excellent at other work, please don't take that from my quip. but i could see the pattern, over time, being large enough to identify.
since software engineering jobs historically are very well paid, it does give some plausibility that former engineers working for less money would have this displacing effect.
its all icky no matter what i think, maybe someone else can tell me why i'm wrong and cheer me up
The reality is that higher interest rates hit software particularly hard because less venture capital is being thrown at traditional software development. When money is tight, new hiring gets cut to push off layoffs, and when layoffs happen experienced potential hires become cheap, displacing inexperienced entry-level hires. No one is telling their boss "reduce my budget, the AI is so good I don't need these people anymore"; they are getting told by their boss "find a way to make do with 3 fewer people." We should expect overtaxed workers to try to find ways to use AI to take up some of this slack, and higher-ups may spin a tale of increased efficiency, but the fact is AI adoption is a symptom, not a cause. The hiring decrease and layoffs happened at plenty of places that have failed to adopt AI as well.
Given that the current situation is unique to the circumstances, it does not hold that software portends the fate of all white collar work. That being said, we can certainly expect AI to improve, and attempts to be made to replicate and improve upon any genuine efficiency gains made in the present experiment. But the fact is that while AI may make certain tasks easier, that will lead to reorganization of the labor force more than disappearance. When mechanization of agriculture reduced the labor required to produce enough food to sustain people, people stopped being farmers. It was a major societal shift, and there were certainly issues, but we don't have 90% of our population made up of unemployed farmers who can't afford to buy food, nor even a large percentage of the population who wants to farm but is forced to work a much less desirable job.
Comparative advantage will guarantee people are still doing something. There will always be tasks which would benefit from human input, and there will always be more such tasks. We may not currently place much value in these tasks, but by virtue of AI doing the other tasks, the relative value of fully automated tasks will decrease and the tasks which require human labor will become more highly valued. In a world where the best paid people are ditch diggers, and ditch diggers can afford yachts because yacht production is fully automated, who cares what the wage of the ditch digger actually is?
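To make that comparative-advantage point concrete, here is a tiny Python sketch with made-up numbers (the productivity rates and the 16-ditch demand are purely hypothetical, not from the comment): even if an AI is strictly better at both coding and ditch digging, total output is higher when the human covers the ditch demand, because the human forgoes far less code per ditch dug.

```python
# Toy comparative-advantage calculation; all numbers are hypothetical.
HOURS = 8.0            # hours available to each worker per day
DITCHES_NEEDED = 16.0  # units of ditch digging society wants per day

# Output per hour for each task, by worker type. The AI is better at both,
# but only 5x better at ditches versus 100x better at code.
RATES = {
    "ai":    {"code": 100.0, "ditch": 10.0},
    "human": {"code":   1.0, "ditch":  2.0},
}

def total_code(ditch_worker: str) -> float:
    """Code produced per day when `ditch_worker` covers the ditch demand
    and all remaining hours (for both workers) go to writing code."""
    code = 0.0
    for worker, rate in RATES.items():
        ditch_hours = DITCHES_NEEDED / rate["ditch"] if worker == ditch_worker else 0.0
        code += (HOURS - ditch_hours) * rate["code"]
    return code

print(total_code("ai"))     # 648.0 -- AI digs the ditches, less code overall
print(total_code("human"))  # 800.0 -- human digs the ditches, more code overall
```

The numbers are arbitrary, but the structure is the point: as long as the human's relative disadvantage is smaller in one task, putting them on that task raises total output, which is why the comment argues people will still be doing something.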
Wealth concentration is a concern, but not because it will make it impossible for the vast majority to live a decent life. Instead the economic lives (and likely socio-political lives as well) of these two groups will simply diverge. This is extremely concerning from a standpoint of justice, but it's really orthogonal to AI. We've had such aristocracies many times before - they arise because of a failure of social institutions, not technology. We've been on the path towards them long before AI came along, and there is no compelling evidence that AI has accelerated the process. As far as economics is concerned though, your quality of life will continue to improve, even if some billionaire's improves faster.
To the extent that I’ve heard people propose solutions, many of them have pretty big flaws:
- Retraining - AI will likely swoop in quickly and automate many of the brand new jobs it creates. Also retraining has a bit of a messy history, it was pretty ineffective at stopping the bleeding when large numbers of manufacturing jobs were offshored/automated in the past.
- “Make work” programs - I think these are pretty silly on the face of it, although something like this might be necessary in the really short term if there’s very sudden massive job loss and we haven’t figured out a solution.
- Universal Basic Income - Probably the best system I’ve heard anyone propose. However there are 3 huge issues: 1 - politically this is a huge no-go at the moment (after watching the massive Covid stimulus happen in 2020 I have a sliver of hope, but not much). 2 - Even a pretty good UBI probably wouldn’t be enough to cushion the landing for people who make a lot right now and have made financial decisions (number of kids, purchasing a house, etc) on the basis of their current salary. 3 - Even if this happens in America (presumably redistributing the wealth accruing to American AI companies) it would leave non-Americans out in the cold, and we currently have no globally powerful institution with the trust and capability to manage a worldwide UBI.
It's clear there are some things out there that aren't economically very profitable to do but would be nice to have done. So public works programs could soak up a lot of that and turn labor power onto various stuff pretty easily, I think.
I think those are the same people that ignored the history of the https://en.wikipedia.org/wiki/New_Deal and the massive amount of infrastructure it built in the US that we still use to this day.
I can't understand how that would work. If you put an income floor under everyone, their rents and other basic bills will simply increase to eat the free money. None of the experiments on how people will use UBI have taken that into account since the experiments were on relatively few people in an area. The other issue is how to pay for it - it has to come from taxes somewhere.
You’d counterbalance that - and solve the other problem - by offering massive tax relief for companies who hire junior employees. In the same way that we use tax relief to encourage real estate and infrastructure investment in underserved areas, we can use it to tip the scales of economic rationality toward continuing to employ young people with no experience or specialized expertise.
Notice that neither of these proposals requires redistribution as such (seizing wealth).
This just incentivizes them to find a different official reason for firing, like missed deadlines (that suddenly became shorter) or, in computing jobs, code quality (due to the reduced deadlines).
> This incentivizes companies to try and figure out creative ways to continue using their existing workforce to maximize the value they get out of AI systems.
This does nothing for the current issue of entry-level positions, which is where the pressure from AI is greatest. It only helps people who already have a position.
I don’t understand why taxation is so off limits to this crowd. We seem to live in a death cult where avoiding a slight inconvenience to 100 people is more important than providing a decent standard of living for the other 345 million people. You can invent whatever clever little solution you want in the meantime but eventually the chickens will come home to roost.
HN is filled with lots of temporarily depressed millionaires and many actual millionaires too. These are the ones that have bought into the zero-tax, government-is-all-bad, free-market-capitalism-for-me Randian ideas without any systematic thought on how those ideas would work out in practice.
Add to this that a lot of media, and pretty much everything on TV, is owned by billionaires these days that use the news as their platform to propagandize on why they should own more of everything and become richer, so it's not exactly surprising we're at this place.
The UBI should take number of underage children into account.
If the house turned out to be too much they’d have to sell.
That said, in the face of a particularly disastrous (and yet predictable) outcome, it is not enough to call for solving the underlying issues; it is vital to solve them before we introduce the technology all over the place, and if that is not possible, to adjust how the technology is rolled out accordingly.
As for your idea, I see no sign that they are striving to redistribute their wealth.
This is confounding AI-exposed white collar occupations with occupations that were overrepresented with extended remote work.
I am on multiple boards and that was a major factor that disincentivized new grad hiring in the US, because a new grad salary in a white collar profession in the US is a mid-career salary in the rest of the world.
AI is used as an excuse, but when polled, even most executives agree that they expect the number of employees hired, at least in software-adjacent roles, to increase.
I cannot justify hiring a mediocre new grad in Seattle for $120k who will end up using Claude Code anyhow when I can hire an early-career employee doing something similar in Romania or India for around $20k.
The reality is a large portion of new grads and mid-career types who started their careers after 2020 are too mediocre for the TC paid.
---
Edit: pulling a comment of mine from downthread
> Why are then so many US developers still employed
Because unlike the HN hivemind, a large portion of experienced developers in the US have found ways to realistically adopt new technologies where they are relevant.
Reflexively being an AI fanatic or Luddite is stupid, but being a SWE who is able to recognize and explain the value of these tools and their limits is extremely valuable.
I can justify paying $300-400k TCs and 100% remote work if you are not a code monkey. This means being able to architect, manage upwards, do basic design and program management, hop onto customer calls, and keep upskilling on top of writing, testing, and reviewing code.
We are not hiring SWEs to only push code. We hire SWEs in order to translate and implement business requirements into software.
A developer who has a mindset like that is worth their weight in gold, and there are still plenty of these kinds of experienced developers in the US.
Of course it's not justified, but I don't think it has anything to do with Claude Code or AI. It has always been true that you can hire competent programmers from eastern European at a discounted price, since forever.
If you believe (whether or not this belief is based on reality) that American programmers have a "better work ethic," are "easier to communicate with," or have "skin in the game," then they still have these traits in the AI era. If you don't, then you should outsource anyway.
Yes you can. Life and business is not about profit. It’s about bettering the lives of people. Make it a priority to hire American because you’re an American company.
You’re making a choice to prioritize profit (or foreign countries) over the country that you benefit from. This is an immoral and short sighted business decision, as you will eventually see a backlash from the host countries you’re effectively operating as a parasite in.
Not trying to persuade you, just laying out there are alternatives that’ll be a reality eventually. Take a look at the current political swings in Japan, Restore Britain, etc.
This mentality results in the grass at the Taj Mahal being cut with hand tools [0], or Japan having a whole category of "useless jobs" like elevator operators [1, 2] that simply exist to provide employment. Taken to an extreme, this is the broken windows makework fallacy. If I smash a lot of windows, the local glazier gets paid handsomely, at the expense of everyone who had to pay for window replacements.
[0] https://www.youtube.com/shorts/wAH8jj9cm_o
[1] https://www.taipeitimes.com/News/editorials/archives/2015/06...
We know that turning everyone and everything into a product has its own set of negative outcomes. Trying to play this off as a binary situation is a form of extremism in itself.
There is already the term Bullshit Jobs [1] for service economies like the US where huge numbers of people are employed as part of company bureaucracy rather than representing the most efficient outcome.
Simply put, trying to run a society like a business is going to ensure that you get such a large number of unhappy people that you start a revolution that tries to burn everything down and leads to a lot of death.
There are numerous studies that show menial labor leads to poor mental health. Perhaps these people employed as makework automatons are happier than they would be if they had no employment whatsoever and were destitute on the street, but these are not the only two alternatives.
>I'll ignore the glazier example as it seems quite extreme, and also comes with more obvious/specific "victims"
The "victims" at the Taj Mahal/department store are the visitors/customers who have to pay slightly higher prices as a result. While not as extreme as the glazier in the broken window fallacy, the grass cutters/elevator operators exist on the exact same spectrum.
You could frame those visitors to the Taj Mahal as victims, but that takes quite a narrow and short-term view of value to them. Would the Taj Mahal be as pleasant a place to visit if it were in an even more unequal and precarious society than it is? We all pay for things that don't directly benefit us through taxation (usually). The childless pay for schools, the car-less pay for roads, but we benefit from the society that having them creates. It seems hard to say that those visitors to the Taj Mahal would not benefit from being in a more prosperous and sustainable society.
I have the vague sense we're far enough into e.g. offshoring that it's not purely about "profits" but about being competitive because all your competitors are doing the same thing.
But, then again, wealth inequality increase doesn't seem to be slowing (so profits /are/ being achieved), and I mostly think about businesses in robotics (and I don't spend that much time pondering it) where there's a lot of complexity in the stack, needing more "manpower", and being smart with money spent is maybe /more/ important. Robotics is a smallll sliver of software dev companies... (thus, "vague sense")
Have you ever run a business? Literally all anyone cares about is profit. When I talk to potential investors, banks for loans, even the government for grants, all they're interested in is cash-on-hand, revenue, projections, and expenses. I have never once had a bank ask me if I was bettering the lives of my employees when applying for a loan.
I'm not saying it should be this way, or defending capitalism here, but until there's massive changes to the Western economic system... yes, businesses are about profit.
yes
> Literally all anyone cares about is profit.
I would agree there's a lot of people that this is the case for
but it is not everyone
if my back was up against the wall I would rather shut down than e.g. dump PFAS into watercourses (3M style)
or fly-tip
or use AI
>This is confounding AI-exposed white collar occupations with occupations that were overrepresented with extended remote work.
Yup. If you look at Brynjolfsson's actual publication [0], you'll see that the precipitous decline in hiring juniors in "AI-exposed occupations" starts in late 2022. This is when ChatGPT first came out, and far too early to see any effects of AI on the job market.
You know what else happened in late 2022? The end of ZIRP and Section 174, which immediately put a stop to the frantic post-COVID overhiring of bootcamp juniors just to pad headcount and signal growth. The problem with Brynjolfsson's paper is that it doesn't effectively deconvolve "AI-exposed occupations" from "ZIRP/Section 174-exposed occupations," which overlap significantly.
[0] https://digitaleconomy.stanford.edu/app/uploads/2025/11/Cana...
Edit: cannot reply
> Why are then so many US developers still employed
Because unlike the HN hivemind, a large portion of experienced developers in the US have found ways to realistically adopt new technologies where they are relevant.
Reflexively being an AI fanatic or Luddite is stupid, but being a SWE who is able to recognize and explain the value of these tools and their limits is extremely valuable.
I can justify paying $300-400k TCs if you are not a code monkey. This means being able to architect, manage upwards, do basic design and program management, hop onto customer calls, and keep upskilling on top of writing and reviewing code.
We are not hiring SWEs to only push code. We hire SWEs in order to translate and implement business requirements into software.
A developer who has a mindset like that is worth their weight in gold, and there are still plenty of these kinds of experienced developers in the US.
More people need to understand that.
The other thing is that regulations and tax related employment agreements between corporations and local governments are designed to prevent some offshoring of workers.
It's not a binary situation.
Writing code by hand is not going to be the default mode going forward. You either do the majority of your work controlling autonomous agents and reviewing their work or you get surpassed by all of your colleagues.
Are you going to be the farmer who refuses to buy a plow?
I also do not have sympathy for those who refuse to adapt. These people hold back organizations by appealing to tradition and resisting any form of change.
We're already seeing a dramatic slowdown in relative improvements.
The gap between Sonnet 3.5 (June 2024) and Opus 4.6 (Feb 2026) is large, but it's not 1,000x. Not even 100x.
LLMs struggle with novel technical issues.
They still require review and agents work best when they're hand-in-hand with an experienced human.
Let me ask you something: think of the average VP or Director you've interacted with. Do you think this person is capable of directing 100,000 autonomous coding agents? Do you think they have the prompting skill, and the expertise to know whether one is making a subtle error?
The problem begins when people see this as a know-everything magic orb and trust it blindly for everything. It's still a pattern-matching model; it's not sentient. It should remain a tool, not something you ask to make decisions for you.
Also, I've seen people waste a massive amount of tokens to add two tabs to a line of code to fix the indentation. They said they didn't want to click the damn line. Bro, you just typed a prompt longer than the fix and sent the complete file, instead of two keystrokes. And guess what, it took multiple attempts. It's like watching someone type google into google 3 times before typing what they actually want.
Not all software is simple CRUD from your standard consulting business, which makes more money the quicker something is finished. Some software runs critical, life-or-death infrastructure everywhere. We need people who have the skills to build these systems, and they're being discouraged from acquiring them, starting at the school level, thanks to AI bros.
Like when Trump tells Canada (paraphrasing) "You think we're going to let China eat you first and just have the scraps?"
Everything is a zero-sum game to them. They are philosophically void.
Then the models will put you out of work. Nobody will need you.
We'll have a world full of largely useless humans.