just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude the effect of this is just enormous and still being felt; software job postings never recovered to pre-2020 levels, and may never. (With pre-pandemic job postings indexed to 100, software is at 61.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, with pre-pandemic indexed to 100; banking jobs still hover around 100).
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.
You're right that we should see comparisons in other developed countries, but with SV being the epicenter of it all, you'd expect the fallout to at least appear more dramatic in the U.S.
And an overwhelming number of (focusing exclusively on the U.S.) tech "businesses" weren't businesses (i.e., little to no profitability). At best they were failed experiments, and at worst, tax write-offs for VCs.
So, what looked like a booming industry (in the literal, "we have a working, profitable, cash-flowing business here" sense) was actually just companies being flooded with investment cash that they were eager to spend in pursuit of rapid growth. Some found profitability, many did not.
Again, IMO, AI isn't so much the cause as it is the bandage over the wound of unprofitability.
And of course ZIRP was pioneered in Japan, not the US.
At least in my professional circles the number of late 2020-mid 2022 job switchers was immense. Like 10 years of switches condensed into 18-24 months.
Further, lots of experiences and anecdotes from talking to people who saw their company/org/team double or triple in size compared to 2019.
Despite some waves of Mag 7 layoffs, I think we are still digesting what was essentially an overhiring bubble.
I would add one more: me too-ism from CEOs following Musk after the twitter reductions. I think many tech CEOs (e.g., Zuck) hate their workforce with a passion and used the layoff culture to unwind things and bring their workforce to heel (you might be less vocal in this sort of environment... think of the activists that used to work at Google).
I see evidence of collusion. My friends at several tech companies (software and hardware) received very similar-sounding emails in a similar time frame. I think the goal was "salary compression". Management was terrified of the turnover and salary growth, so they decided to act: they threw a bunch of people onto the labor market at once to cool it down. It would normalize eventually, but you don't need long. Fired H-1B holders have to find a new job within 2 months or self-deport.
Now that someone's said to Trump's face that Wall Street thinks he always chickens out, he may or may not stop doing it.
The point is he’s powerless not to. The alternative is allowing a bond rout to trigger a bank collapse, probably in rural America. He didn’t do the prep that produces actual leverage. (Xi did.)
"You will not find it difficult to prove that battles, campaigns, and even wars have been won or lost primarily because of logistics" (D. D. Eisenhower).
Trump did zero preparation for this trade war. It's still unclear what the ends are, with opposing and contradictory aims being messaged. We launched the war simultaneously against everyone. The formula used to calculate tariffs doesn't make sense. And Trump decided to blow out the deficit and kneecap U.S. state capacity at the same time he's negotiating against himself on trade.
The U.S. President can take on the bond market. Most simply by taking the budget into surplus, thereby threatening its existence. But Trump didn't do that. He didn't even pretend he was going to do that. Instead, he's strategically put himself in a position where he has to chicken out, and it honestly seems like he's surrounded himself with people who are too high, drunk and/or stupid to see that. He's the poker player who shows up at the table, goes all in, looks at his cards and folds in one move.
Same behaviour that bankrupted every institution he's ever been in charge of before. The definition of insanity is doing the same thing again and expecting different results.
It's possible he'll stop chickening out to win his internal argument against that reporter who said he always chickens out. Feeling like he's winning seems to be important to him and he holds grudges for a long time. In that case the American economy goes bye bye.
We already know he wants to end the dollar reserve currency status, because he said so - trade deficit and reserve currency status are different words for the same thing.
So many dumpster fires but only a few official bankruptcies; well, that's always what's on the table, and anything goes.
Back in the 20th century almost everybody knew that Trump was not trustworthy, especially not with money (give me a break); that's what made him such a tragic/comic character.
It's almost like people forget: in any org where he is the ultimate decision-maker, if there is challenging debt with no quick way out, he is more likely than most to declare bankruptcy. Anything else would require acumen he has never had to right a faltering ship. Plus, he would be bogged down when he wanted to shift his focus to schemes more promising to him personally, like other pie-in-the-sky deals back then, or something like his memecoin today. So many times, in different orgs with different leading personalities, it's only a declaration away anyhow. Not normally on the menu for the best of the really decent businessmen, but what do you do when you get one that's far from the best and not even decent?
If there were some deep insight into his personal financial situation over the years, especially recently, there might be a more accurate picture whether he would be inclined to "one day" just decide to declare the whole USA bankrupt and move on to greener pastures himself. Or if the decision has already been made, who knew? Or would believe it yet anyway?
Any President could always have made more money doing something else, the whole time it's only been a matter of integrity, or lack thereof.
edit: grammar
Current issue is community banks have 3x the commercial real estate exposure of other banks [1]. They're also less liquid and have a lower ROA. So in cases where the shock comes from outside the financial sector, they tend to be the first we worry about.
[1] https://www.fdic.gov/quarterly-banking-profile (33% vs 11% of total assets)
Mission accomplished.
In 2000 I moved cities with a job lined up at a company run by my friends. I had about 15 good friends working there, including the CEO, and I was guaranteed a job in software development; the interview was supposed to be just a formality. So I moved, went in to see the CEO, and he told me he could not hire me: the funding was cut and there was a hiring freeze. I was devastated. Now what? I had to freelance and live on whatever I could scrape together, which was a few hundred bucks a month if I was lucky. Fortunately, the place I moved into was a big house shared with my friends who worked at said company, and since my rent was so low at the time, they covered me for a couple of years. I did eventually get some freelance work from the company, but things did not really recover until about 2004, when I finally got a full-time programming job, after 4 very difficult years.
So many tech companies over-hired during covid, there was a gigantic bubble happening with FAANG and every other tech company at the time. The crash in tech jobs was inevitable.
I feel bad for people who got left out in the cold this time, I know what they are going through.
AI is somewhat creating a similar bubble now, because investors still have money and the current AI efforts are way over-hyped. The $6.5 billion paid to acqui-hire Jony Ive is a symptom of that.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
We live in a time that the working class is unbelievably brainwashed and manipulated.
Keynes lived in a time when the working class could not buy cheap from China... and complain that everybody else was doing the same!
> We live in a time that the working class is unbelievably brainwashed and manipulated.
I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.
I think progress (in the sense of economic growth) was roughly in line with what Keynes expected. What he didn't expect is that people, instead of getting 10x the living standard with 1/3 the working hours, rather wanted to have 30x the living standard with the same working hours.
Throughout human history, starting with the spread of agriculture, increased labor efficiency has always led to people consuming more, not to them working less.
Moreover, throughout the 20th century, we saw several periods in different countries when wages rose very rapidly - and this always led to a temporary average increase in hours worked. Because when a worker is told "I'll pay you 50% more" - the answer is usually not "Cool, I can work 30% less", but "Now I'm willing to work 50% more to get 2x of the pay".
Can you give a single example where that happened?
During the industrial revolution it was definitely not what happened. In the late 1700s laborers typically averaged around 80 hours per week. In the 1880s this had decreased to around 60 hours per week. In the 1920s the average was closer to 48 hours per week. By the time Keynes was writing, the 40 hour work week had become standard. Average workweek bottomed out in the mid 1980s in the US and UK at about 37 hours before starting to increase again.
That never was the case (except for short periods after salary increases).
And this is not a question where there could be any speculation: in those days there were already people collecting such statistics, and we have a bunch of diaries describing the actual state of affairs, both from the workers themselves and from those who organized their labor - and everything shows that few people worked more than 50 hours a week on average.
Most likely, the myth about 80 hours a week stems from the fact that such weeks really were common, but it was work in the format of "work for a week or two or a month at 80 hours, then for a week or two or a month you don't work: you spend money and arrange your life".
There is also agriculture, which employed a significant part of the population in the past. There, on average, there was usually even less than 40 hours of productive work per week; it's just that timing is of great importance, there are bottlenecks, and when necessary you have to work 20 hours a day, which is compensated by periods when the workload is much less than 6 hours a day.
Take the Philadelphia carpenters' strike in 1791, where they were on strike demanding a reduction in hours to a 60 hour work week. The strike was unsuccessful. In the 1820s there was a so called "10 Hour Day" labor movement in New York City (note that at this time people worked 6 days a week). In the 1840s mill workers in Massachusetts attempted to get the state legislature to intervene and reduce their 74 hour workweeks. This was also unsuccessful. Martin Van Buren signed an executive order limiting workdays for federal employees to 10 hours per day. The first enforceable labor law in the US came in 1874, which set a limit of 60 hours in a workweek for women in Massachusetts.
The words 'have to' are doing a lot of work in that statement. Some people 'have to' work to literally put food on the table; other people 'have to' work to be able to make payments on their new yacht. The world is full of people who could probably live out the rest of their lives without working any more, but doing so would require drastic lifestyle changes they're not willing to make.
I personally think the metric should be something along the lines of how long would it take from losing all your income until you're homeless.
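Concretely, something like this minimal sketch (a hypothetical "financial runway" metric; all figures are made up):

    # Months you could cover expenses with zero income before hitting bottom.
    def months_of_runway(liquid_savings: float, monthly_expenses: float) -> float:
        return liquid_savings / monthly_expenses

    print(months_of_runway(liquid_savings=12_000, monthly_expenses=3_000))  # 4.0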
What income? Income from a job, or from capital? A huge difference. It's also a lot harder to lose the latter (it takes gross incompetence or a revolution), while the former is much easier to lose.
Now what?
What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists? Same class?
And on the flip side, are pensioners, the unemployed, and people on permanent disability part of the same class as the dilettante children of billionaires?
We are talking about class, and whether we should be making distinctions between groups of people who work for a living based on their wealth, income, and economic stability. I believe there is a fundamental class difference between people who work but are rich enough to stop working whenever they want, those who can't quite stop working but are comfortable enough to easily go 6 months without a paycheck, and people who are only a couple of missed paychecks away from literal homelessness.
Coddling poor people is so severely out of touch with their reality, they most likely resent the hell out of you for it, I know I did.
The original claim was a proposal to increase the resolution of class analysis one degree "higher" than Marx and no longer differentiate between the modern proletariat (working class), bourgeois (middle class), and aristocracy (upper class), in this case proposing to lump together the bourgeois and proletariat because they both have to work or they'll starve to death.
In this world, being born from the orifice of an aristocrat means you never have to work ('have to' meaning 'or you'll die of exposure'). That's a frank reality. If your reaction to being born from a non-aristocratic orifice is to shrug your shoulders and accept reality, great, nobody's trying to take that from you.
However, you seem to be taking it a step further and suggesting that the people pointing out that this nature of society is unfair are somehow wrong to do so. I disagree. I think it is perfectly valid to be born from whatever orifice, declare the obvious unfairness of the situation, and work to balance things out for people. That's not coddling; it's just ensuring that we all benefit in a just way from the work of your grandfather. Because right now, someone has stolen the value of his work from you, and that's why you (and I) had to work so hard to get where we are today.
If you love that you had to work so hard, fine. I could take it or leave it. Instead of working a double through school I would have preferred to focus more on my studies, get higher grades, and find better internships instead of slinging sandwiches. Personally I look at the extraordinary wealth of the aristocrat class and I think, "is it more important that they're allowed to own 3 yachts, or that all the children of our society can go to college?" I strongly believe any given country will be much stronger if it has fewer yachts and more college-educated people. Or people with better access to healthcare. Or people with better transit options to work. Etc.
And even if you strongly disagree with that statement, it is important to have a framework within which your opinion of that statement can be analysed.
Satya Nadella doesn't read his emails, and doesn't write responses. He subscribes to podcasts and then gets them summarised by AI.
He turns up to the office and takes home obscene amounts of money for doing nothing except play with toys and pretend he's working.
They are "working", but they are actually just playing. And I think thats the problem with some of these comments, they aren't distinguishing between work and what is basically a hobby.
> What about the trust fund kid working part time at an art gallery just because they like the scene and hanging out with artists?
It's a hobby. They don't have to do it, and if they got fired for gross misconduct they could find alternative ways to pass the time.
I'm willing to bet you haven't lived long enough to know that's more or less a proxy for old age. :) That aside, even homeless people acquire possessions over time. If you have a lot of homeless in your neighborhood, try to observe that. In my area, many homeless have semi-functional motor homes. Are they legit homeless, or are they "homeless oligarchs"? I can watch any of the hundreds of YouTube channels devoted to "van life." Is a 20-year-old who skipped college (which their family could have afforded) and is instead living in an $80k van and getting money from streaming "legit homeless"? The world is not so black and white, it will turn out in the long run.
https://sanjosespotlight.com/san-jose-to-crack-down-on-rv-re...
Does one have savings? Can they afford to spend time with their children outside of working day to day? Do they have the ability to take reasonable risks without chancing financial ruin in pursuit of better opportunities?
These are things we typically attribute to someone in the middle class. I worry that boiling these discussions down to "you work and they don't" misses a lot of opportunity for tangible improvements to quality of life for a large number of people.
If you have an actual job and an income constrained by your work output, you could be middle class, but you could also recognize that you are getting absolutely ruined by the billionaire class (no matter what your level of working wealth)
I agree with your point. Now doctors are working class as well.
No need for AI. Troll farms are well documented and were in action before transformers could string two sentences together.
right?
pact of steel?
anyone?
All the free money dried up and the happy clapping Barney the Dinosaur Internet was no more!
I will not go into specifics, because the authoritarians still disagree, think everything is fine with degenerative debauchery, and try to abuse anyone even just pointing to failing systems. But it all does seem like civilization-ending developments, regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.
Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.
When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages: the true differences that make them true diversity, which makes humanity so wonderful. In America you have none of that. You have a decimated culture, jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost, chasing only dollars and shiny objects to further enrich the ever-smaller group of con-artist psychopathic narcissists at the top: a kind of worst form of aristocracy that humanity has yet produced, lacking any sense of noblesse oblige, which does not even extend to simply not betraying your own people.
That there's any cultural "degenerative debauchery" is an extraordinary claim. Can you back up this claim with evidence?
"Decimated," "muddled," and "polluted" imply you have an objective analysis framework for culture. Typically people who study culture avoid moralizing like this because one very quickly ends up looking very foolish. What do you know that the anthropologists and sociologists don't, to where you use these terms so freely?
If I seem aggressive, it's because I'm quite tired of vague handwaving around "degeneracy" and identity politics. Too often these conversations are completely presumptive.
What's the sense in asking for examples? If one person sees ubiquitous cultural decay and the other says "this is fine," I think the difference is down to worldview. And for a pessimist and an optimist to cite examples at one another is unlikely to change the other's worldview.
If a pessimist said, "the opioid crisis is deadlier than the crack epidemic and nobody cares," would that change the optimist's mind?
If a pessimist said, "the rate of suicide has increased by 30% since the year 2000," would that change the optimist's mind?
If a pessimist said, "corporate profits, wealth inequality, household debt, and homelessness are all at record highs," ...?
And coming from the other side, all these things can be Steven Pinker'd if you want to feel like "yes there are real problems but actually things are better than ever."
There was a book that said something about "you will recognize them by their fruit." If these problems are the fruit born of our culture, it's worth asking how we got here instead of dismissing it with "What do you know that the anthropologists and sociologists don't?"
I also wholeheartedly disagree that, vaguely, diversity has something to do with the reduction of material conditions, or gay people, or whatever tf, so I wanted to allow the op the opportunity to be demonstrably wrong. They wouldn't take it of course, because there's no evidence for what they claim, because it's a ridiculous assertion.
The reasons things are the way they are today are identifiable and measurable. Rent is high mostly because housing is an investment vehicle and supply is locked by a functional cartel. Homelessness is high mostly because of a lack of universal healthcare. Crime is continually dropping despite what the media says, and immigrants commit less crime per capita than any other demographic group - but the jails remain full because the USA engages in a demonstrably ineffective retributive justice system.
I'm so tired of conservatives walking around flinging their feelings every which way as facts. Zizek has demonstrated the potential value of a well-considered conservative ideology, and unfortunately today all we get from that side is vague (or explicit) bigotry.
The OP didn't just claim that there's cultural degeneracy happening (which, again, they didn't define very well); they blamed real-world outcomes on it. That's a challengeable premise.
Capitalism arrives for everyone, Asia is just late for the party. Once it eventually financializes everything, the same will happen to it. Capitalism eventually eats itself, doesn't matter the language or how many centuries your people might have.
This creates supply-demand pressure for goods and services. Anything with limited supply such as living in the nice part of town will price out anyone working 15 hours/week.
And so society finds an equilibrium…
If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.
The 40-hour work week is something of a cultural equilibrium. But we've all heard of doctors, lawyers, and bankers working 100h weeks which affords them some of the most desirable real estate in the world...
Require anyone working over 15 hours to be paid time and a half overtime. If you want to hire one person to work 40 hours per week, that is 30% more expensive than hiring 3 people to work the same number of hours. In some select instances sure, having a single person do the job is worth the markup, and some people will be willing to work those hours, just like today you have some people working over 40, but in general the market will demand reduction in working hours.
Similarly, there is a strong incentive to work enough hours to be counted as a full-time employee, so the marginal utility of that 35th hour is pretty high currently, whereas if full-time benefits and labor protections started at 15 hours, then the marginal utility of that 35th hour would be substantially less.
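For what it's worth, here is the rough arithmetic behind that ~30% figure as a minimal sketch (the function and numbers are mine, following the proposal above):

    # Weekly labor cost under a 15-hour overtime threshold with time-and-a-half.
    def weekly_cost(hours, rate=1.0, threshold=15, ot=1.5):
        regular = min(hours, threshold)
        overtime = max(hours - threshold, 0)
        return rate * (regular + ot * overtime)

    one_worker = weekly_cost(40)             # 15 + 1.5 * 25 = 52.5 wage-units
    three_workers = 3 * weekly_cost(40 / 3)  # ~13.3h each, no overtime = 40.0
    print(f"{one_worker / three_workers - 1:.0%} more expensive")  # 31% more expensive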
> If minimum wage goes up 40/15 = 267%, then the price of your coffee will go up 267% because the coffeeshop owner needs to pay 267% more to keep the cafe staffed.
That would be true if 100% of the coffee shop's revenue went to wages. Obviously that's not the case. In reality, the shop is buying ingredients, paying rent for the space, paying off capex for the coffee making equipment, utilizing multiple business services like accounting and marketing, and hopefully at the end of the day making some profit. Realistically, wages for a coffee shop are probably 20-30% of revenue. So to cover the increased cost of labor, prices would have to rise 53%. Note that in this scenario you also have 267% more money to spend on coffee.
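As a minimal sketch of that arithmetic (the 30% labor share is an assumption; the 53% above corresponds to a share of roughly 32%):

    # Price increase needed to cover the wage hike, assuming wages are a fixed
    # share of revenue and all other costs stay flat.
    wage_multiplier = 40 / 15   # same weekly pay for 15h instead of 40h => ~2.67x hourly cost
    labor_share = 0.30          # assumption: wages as a fraction of revenue

    price_increase = labor_share * (wage_multiplier - 1)  # 0.30 * 1.67 ~= 0.50
    print(f"Prices rise ~{price_increase:.0%}")           # Prices rise ~50%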
Of course there are some more nuances as prices in general inflate. Ultimately though, the equilibrium you reach is that people working minimum wage for a full workweek wind up able to afford 1 minimum-wage workweek worth of goods and services. This holds true in the long term regardless of what level minimum wage is or how long a workweek is. Indeed you could just as easily have everyone's wages stay exactly the same but we are all working less, then we all have less money and there is a deflationary effect but in the long term we wind up at the same situation. Ideally, you'd strike a balance between these two which reaches the same end state with a reasonably steady money supply.
> The 40-hour work week is something of a cultural equilibrium.
No, it isn't. It is an arbitrary convention, one in a long series which had substantially different values in the past. It has remained constant because it is encoded in law in such a way that it is no longer subject to simple pressures of labor supply and demand.
> But we've all heard of doctors, lawyers, and bankers working 100h weeks which affords them some of the most desirable real estate in the world...
There are a lot more than just doctors and lawyers and bankers working long hours. 37% of Americans work 2 full-time jobs, and most of them aren't exactly in a position to afford extremely desirable real estate. If the workweek were in an equilibrium due to supply and demand, wouldn't these people just be working more hours at their regular jobs?
There can be a certain snobbishness with academics, where they're like: of course I enjoy working away on my theories of employment, but the unwashed masses do crap jobs and would rather sit on their arses watching reality TV. But it isn't really like that. Usually.
I don't know that I've ever heard this rationally articulated. I think it's a "gut feel" that at least some people have.
If taxes take 10% of what you make, you aren't happy about it, but most of us are OK with it. If taxes take 90% of what you make, that feels different. It feels like the government thinks it all belongs to them, whereas at 10%, it feels like "the principle is that it all belongs to you, but we have to take some tax to keep everything running".
So I think the way this plays out in practice is, the amount of taxes needed to supply everyones' basic needs is across the threshold in many peoples' minds. (The threshold of "fairness" or "reasonable" or some such, though it's more of a gut feel than a rational position.)
I'll take capitalism with all its warts over that workers paradise any day.
Even me: I work a job that I enjoy, building things that I'm good at, that is almost stress-free, and after 10-15 years I find that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming, and the meaninglessness is stronger than ever.
That said, I’m not what you’d call a high-earning person (I earn < 100k) I simply live within my means and do my best to curb lifestyle creep. In this way, Keynes’ vision is a reality, but it’s a mindset and we also have to know when enough wealth is enough.
The arrangement was arrived at because the irregular income schedule makes an hourly wage or a salary a poor option for everyone involved. I’m grateful to work for a company where the owners value not only my time and worth but also value a similar work routine themselves.
I've come across people like you and they don't produce as much value as they think.
It came about late last year when my current employer started getting gently waved off in early funding pitches. That resulted in some thrash, forced marches to show we could ship, and the attendant burnout for me and a good chunk of the team I managed. I took a hard look at where the company was and where I was, and decided I didn't have another big grind in me right now.
Rather than just quit like I probably would have previously, I laid it out to our CEO in terms of what I needed: more time taking care of my family and myself, less pressure to deliver impossible things, and some broad idea of what I could say "no" to. Instead of laughing in my face, he dug in, and we had a frank conversation about what I _was_ willing to sign up for. That in turn resulted in a (slow, still work-in-progress) transition where we hired a new engineering leader and I moved into a customer-facing role with no direct reports.
Now I work a part-time schedule, so I can do random "unproductive" things like repair the dishwasher, chaperone the kid's field trip, or spend the afternoon helping my retired dad make a Costco run. I can reasonably stop and say, "I _could_ pay someone to do that for me, but I actually have time this week and I can just get it done", and sometimes I...actually do, which is kind of amazing?
...and it's still fucking hard to watch the big, interesting decisions and projects flow by with other people tackling them and not jump in and offer to help. B/c no matter what a dopamine ride that path can be, it also leads to late nights and weekends working and traveling and feeling shitty about being an absentee parent and partner.
I suspect he didn't factor in how many people would be retired and on entitlements.
We're not SUPER far from that now, when you factor in how much more time off the average person has now, how much larger a percentage of the population is retired, and how much of the population is on entitlements.
The distribution is just very unequal.
I.e., if you're the median worker, you've probably seen almost no benefit, but if you're old or on entitlements, you've seen a lot of benefits.
Most people with a modest retirement account could retire in their forties to working 15-hour workweeks somewhere in rural America.
And then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”, possibly before your kids head off to college.
Most cannot meet all those conditions and end up on the hedonic treadmill.
Yes to the latter, no to the former. The states with the highest savings rates are Connecticut, New Jersey, Minnesota, Massachusetts and Maryland [1]. Only Massachusetts is a top-five COL state [2].
> then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”
This is the real hurdle. Ultimately, however, it's a choice. One chooses to work harder to access a scarce resource out of preference, not necessity.
[1] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...
[2] https://en.wikipedia.org/wiki/List_of_U.S._states_by_savings...
CA is probably nowhere on the list because it's such a small state that any Silicon Valley premium gets diluted in the state-level average.
I am not finding a clear definition of this index, but it appears to be $saved/$income (or $saved/$living expenses), right? So 114% in CT dollars is probably way more than 102% in Kansas dollars.
It's also worth noting the point I was making: if you take a "one year's NYC income in savings" amount of money and relocate to, say, New Mexico, the money goes a lot further than trying to do the opposite!
https://www.theguardian.com/commentisfree/2024/nov/21/icelan...
Policy matters
Yeah, I'd say I get up to 15 hours of work done in a 40 hour workweek.
AI isn't going to generate those jobs, it's going to automate them.
ALL our bullshit jobs are going away, and those people will be unemployed.
When kids stop learning to code for real, who writes GCC v38?
This whole LLM thing is just the next bitcoin/NFT. People had a lot of video cards and wanted to find a new use for them. In my small brain it's so obvious.
to compare that to NFTs is pretty disingenuous. i don't know anyone who has ever accomplished anything with an NFT. (i'm happy to be wrong about that, and i have yet to find a single example).
Trying to make them more than they are is the issue I have. Let them be great at crunching words, I’m all about that.
Pretending that OpenAI is worth billions of dollars is a joke, when I can get 90% of the value they provide for free, on my own mediocre hardware.
Maybe consider it's not all on the AI tools if they work for others but not for you.
Human-written code also needs reviews, and is also frequently broken until subjected to testing, iteration, and review, so our processes are built around proper QA and proper reviews, and then the original source does not matter much.
It's however a lot easier to force an LLM into a straitjacket of enforced linters, enforced test-suite runs, enforced sanity checks, and enforced processes, at a level that human developers would quit over. So as we build out the harness around the AI code generation, we're seeing the quality of that code increase a lot faster than the quality delivered by human developers. It still doesn't beat a good senior developer, but it does often deliver code that handles tasks I could never hand to my juniors.
(In fact, the harness I'm forcing my AI generated code through was written about 95%+ by an LLM, iteratively, with its own code being forced through the verification steps with every new iteration after the first 100 lines of code or so)
You can feel free not to believe it, as I have no plans to open up my tooling anytime soon - though partly because I'm considering turning it into a service. In the meantime these tools are significantly improving the margins for my consulting, and the velocity increases steadily as every time we run into a problem we make the tooling revise its own system prompt or add additional checks to the harness it runs to avoid it next time.
A lot of it is very simple. E.g a lot of these tools can produce broken edits. They'll usually realise and fix them, but adding an edit tool that forces the code through syntax checks / linters for example saved a lot of pain. As does forcing regular test and coverage runs, not just on builds.
For one of my projects I now let this tooling edit without asking permission, and just answer yes/no to whether it can commit once it's ready. If no, I'll tell it why and review again when it thinks it's fixed things, but a majority of commit requests are now accepted on the first try.
For the same project I'm now also experimenting with asking the assistant to come up with a todo list of enhancements for it based on a high level goal, then work through it, with me just giving minor comments on the proposed list.
I'm vaguely tempted to let this assistant reload its own modified code when tests pass and leave it to work on itself for a while and see what comes of it. But I'd need to sandbox it first. It's already tried (and was stopped by a permissions check) to figure out how to restart itself to enable new functionality it had written, so it "understands" when it is working on itself.
But, by all means, you can choose to just treat this as fiction if it makes you feel better.
E.g., a real example: the tooling I mentioned at one point early on made the correct functional change, but it's written in Ruby, and Ruby allows defining methods multiple times in the same class; the later definition just silently overrides the earlier one. This would of course be a compilation error in most other languages. It's a weakness of using Ruby with a careless (or mindless) developer...
But Rubocop, a linter, will catch it. So forcing all changes through Rubocop and just returning the errors to the LLM made it recognise the mistake and delete the old method.
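The shape of that loop, as a minimal sketch (the llm object and its edit method are hypothetical stand-ins; only the Rubocop invocation is real):

    # Force LLM edits through a linter before accepting them.
    import subprocess

    def lint(path):
        """Run Rubocop on one file; return diagnostics, or '' if clean."""
        result = subprocess.run(["rubocop", "--format", "simple", path],
                                capture_output=True, text=True)
        return "" if result.returncode == 0 else result.stdout

    def edit_with_harness(llm, path, instruction, max_rounds=3):
        prompt = instruction
        for _ in range(max_rounds):
            new_source = llm.edit(path, prompt)  # hypothetical API
            with open(path, "w") as f:
                f.write(new_source)
            errors = lint(path)
            if not errors:
                return True
            prompt = f"{instruction}\n\nYour last edit failed lint:\n{errors}\nPlease fix."
        return False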
It lowers the cognitive load of the review. Instead of having to wade through and resolve a lot of cruft and make sense of unusually structured code, you can focus on the actual specific changes and subject those to more scrutiny.
And then my plan is to experiment with more semantic checks in the same style as what Rubocop uses, but less prescriptive, of the type "maybe you should pay extra attention here, and explain why this is correct/safe", etc. An example might be to trigger this for any change that involves reading a key, password field, or card number, whether or not there is a problem with it, both to make the LLM "look twice" and to flag it as an area deserving extra attention in a human review.
It doesn't need to be perfect, it just need to provide enough of a harness to make it easier for humans in the loop to spot the remaining issues.
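A minimal sketch of what such a trigger could look like (the identifier list and diff format are illustrative):

    # Flag added lines in a unified diff that touch sensitive-looking identifiers,
    # so the LLM (and the human reviewer) can be told to look twice.
    import re

    SENSITIVE = re.compile(r"\b(password|secret|api_key|card_number|token)\b", re.IGNORECASE)

    def lines_needing_extra_attention(diff_text):
        return [line for line in diff_text.splitlines()
                if line.startswith("+") and SENSITIVE.search(line)]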
So, no, I'll keep doing this because doing this is already saving me effort for my other projects.
Writing code is often easier than reading it. I suspect that coders soon will face what translators face now: fixing machine output at 2x to 3x less pay.
It's also the jobs that involve keeping people happy somehow, which may not be "productive" in the most direct sense.
One class of people that needs to be kept happy are managers. What makes managers happy is not always what is actually most productive. What makes managers happy is their perception of what's most productive, or having their ideas about how to solve some problem addressed.
This does, in fact, result in companies paying people to do nothing useful. People get paid to do things that satisfy a need that managers have perceived.
NONE of the bullshit jobs are going away, there will simply be bigger, more numerous bullshit.
I don't know if it's induced demand, revealed preference, or the Jevons paradox; maybe all 3.
OK, but I doubt we're washing 10 times as many clothes, unless people are wearing them for one hour between washes...
I've done some 3rd world travel without washing machines for a while and my laundry was once a week dunk stuff in the sink for 5 minutes with shampoo + rinse water, wring and hang up. I don't buy the whole day being necessary thing.
(Quotes because I personally have a significantly harder time doing bloody housework...)
Before teaching your children to do chores: x hours per week for chores
After teaching your children to do chores: y hours per week of annoying discussions with the child, plus X hours per week cautioning the children to do the chores and ensuring that they do them properly. Here X > x.
Additional time for you: -((X - x) + y), where X > x and y > 0.
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
https://www.aspeninstitute.org/wp-content/uploads/files/cont...
10 years into "we'll have self driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
> it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet. I don't know about the rest, but I spoke up because I didn't want to hit a brick wall; I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the conmen come in, and we let people get so excited by success that they go blind to pitfalls.
And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...
You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.
> mostly don't have to do with what actually is going to happen
Yet I'm talking about what did happen. I'm saying we should have memory: look at the predictions people make, reward accurate ones, and don't reward failures. Right now we reward whoever makes the craziest predictions. It hasn't always been this way, so we should go back to rewarding less crazy ones.
It's all a big hype bubble, and not only is no one in the industry willing to pop it, they actively defend against popping a bubble that is clearly rupturing on its own. It's emblematic of how modern businesses no longer care about a proper 10-year portfolio, only about how to make the next quarter look good.
There's just no skin in the game, and everyone's ransacking before the inevitable fire instead of figuring out how to prevent the fire to begin with.
Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.
I suspect they're the same people, basically get rich quick schemers.
But if you had been wrong and we would now have had superintelligence, the upside for its owners would presumably be great.
... Or at least that's the hypothesis. As a matter of fact intelligence is only somewhat useful in the real world :-)
This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.
This is a genuine attempt to inform myself: can you point to those sorts of claims from experts at the top?
There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation - most of these people have strong personalities to put it mildly, and they often have vested interests in pushing their preferred approach and view of how AI does/should work.
After their success, I definitely saw a ton of blog posts and general "AI chatter" claiming that to get to AGI all you really needed to do (obviously I'm simplifying a bit) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT-5 essentially became 4.5) when they found that they weren't getting the performance/functionality advances they expected after massively scaling up their model.
A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.
That said, my view of the future is probably now wrong, I am just saying what I expected.
Realistically, we're 2.5 years into it at most.
I admit they don't operate everywhere, only on certain routes. Still, they are undoubtedly cars that drive themselves.
I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.
The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated simulator conditions in which they were trained (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment in reliable fashion), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.
So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), i.e., not part of the class that drags the human safety stats down. I'd also not trust a Tesla, where penny-pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.
> I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), i.e., not part of the class that drags the human safety stats down.
The challenge is that most people think they're better than average drivers.

My point was that if you are part of one of these accident-prone groups, you are certainly worse than average, and are probably safer (both for yourself and for everyone around you) in a Waymo. However, if you are an intelligent, non-impaired, experienced driver, then maybe not, and almost certainly not if we're talking about emergency and dangerous situations, which is where it really matters.
A recent example - a few weeks ago I was following another car in making a turn down a side road, when suddenly that car stops dead (for no externally apparent reason), and starts backing up fast about to hit me. I immediately hit my horn and prepare to back up myself to get out of the way, since it was obvious to me - as a human - that they didn't realize I was there, and without intervention would hit me.
Driving away, I watch the car in my rear-view mirror and see it pull a U-turn to get back out of the side road, making it apparent why it had stopped before. I learned something, but of course the driverless car is incapable of learning, and certainly has no theory of mind, and would behave the same as last time, good or bad, if something similar happened again.
That's the main difference from a human driver. If I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.
I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.
And it took, what, like 2 decades to get there. So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.
They have failed in SF, Phoenix, and other cities that rolled out the red carpet for them.
There's a big gap between seeing something work in the lab and being ready for real world use. I know we do this in software, but that's a very abnormal thing (and honestly, maybe not the best)
And more specifically, I'm referencing Elon, where the context is that it's going to be a software push into Teslas that people already own.
When someone talks about "having" self-driving cars next year, they're not talking about what are essentially pilot programs.
Not to mention that HN gets really tetchy about achieving specifically SAE Level 5, when in practice some pretty basic driver-assist tools are probably closer to what people meant. It reminds me of a gentleman I ran into who was convinced that the OpenAI Dota bot with a >99% win rate couldn't really be said to be playing the game. If someone can take their hands off the wheel for 10 minutes, we're there in a common-language sense; the human in the car isn't actively in control.
It's pretty damning that it failed there.
Why do I think this?
1) They smelled slightly funny. 2) They got the diagnosis wrong.
OK maybe #2 is a red herring. But I stand by the other reason.
So there's some room for interpretation, the weaker interpretation is less radical (that AI could beat humans in radiology tasks in 5 years).
> Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)
[0] https://observer.com/2025/01/sam-altman-nuclear-fusion-start...
I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.
It really hasn't.
The problem is that a GenAI system needs to understand not only the large codebase but also the latest stable version of every transitive dependency it depends on, which is typically on the order of hundreds or thousands.
Having it build a component with 10-year-old, deprecated, CVE-riddled libraries is of limited use, especially since libraries tend to be upgraded in interconnected waves, so that component will likely not even work anyway.
I was assured that MCP was going to solve all of this but nope.
MCP would allow it to instead get this information at run-time from language servers, dependency repositories etc. But it hasn't proven to be effective.
I can't. GPT-4 was useless for me for software development. Claude 4 is not.
But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.
Not necessarily looking forward to that, but that's where the growth will come.
And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.
So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.
The explosion of funding, awareness, etc. only happened after the GPT-3 launch.
Around 2010 when I was at university, a friend did their undergraduate thesis on neural networks. Among our cohort it was seen as a weird choice and a bit of a dead-end from the last AI winter.
Nonetheless it took OpenAI until Nov 2022 to reach 1 million users.
The overall awareness and breakthrough was probably not there in 2020.
Basically, what if GenAI is the Minitel and what we want is the internet.
Human brains seem like an existence proof for what’s possible, but it would be surprising if humans also represent the farthest physical limits of what’s technologically possible without the constraints of biology (hip size, energy budget etc).
We’ve been building actuators for 100s of years and we still haven’t got anything comparable to a muscle. And even if you build a better hydraulic ram or brushless motor driven linear actuator you will still never achieve the same kind of behaviour, because the technologies are fundamentally different.
I don’t know where the ceiling of LLM performance will be, but as the building blocks are fundamentally different to those of biological computers, it seems unlikely that the limits will be in any way linked to those of the human brain. In much the same way the best hydraulic ram has completely different qualities to a human arm. In some dimensions it’s many orders of magnitudes better, but in others it’s much much worse.
It's not just that 'we don't know how to build them'; it's that the actuators aren't a standalone part, and we don't know how to build (or maintain/run in industrial environments!) the 'other stuff' economically either.
For text generation, it seems like the fast progress was mainly due to feeding the models exponentially more data and exponentially more compute. But we know that the growth in data is over, and the growth in compute has shifted from a steep curve (just buy more chips) to a slow curve (we have to build exponentially more factories if we want exponentially more chips).
I'm sure we will have big improvements in efficiency. I'm sure nearly everyone will use good LLMs to support them in their work, and they may even be able to do all they need on-device. But that doesn't make the models significantly smarter.
The thing about the latter 1/3rd of a sigmoid curve is, you're still making good progress, it's just not easy any more. The returns have begun to diminish, and I do think you could argue that's already happening for LLMs.
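To make "still progressing, but diminishing" concrete, a quick sketch (the time units are arbitrary):

    # Per-step gains on a logistic (sigmoid) curve keep shrinking in the latter
    # third, even though progress never fully stops.
    import math

    def sigmoid(t):
        return 1 / (1 + math.exp(-t))

    for t in [0, 2, 4, 6]:
        gain = sigmoid(t + 1) - sigmoid(t)  # progress made in one unit of time
        print(f"t={t}: gain = {gain:.3f}")
    # t=0: 0.231, t=2: 0.072, t=4: 0.011, t=6: 0.002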
There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.
If you took a Tesla or a Waymo and dropped it into a tier-2 city in India, it would stop moving.
Driving data is cultural data, not data about pure physics.
You will never get to full self driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing, will not achieve the stated goal of full self driving.
You would need to have something like networked driving, or government supported networks of driving information, to deal with the cultural factor.
Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.
Or actual intelligence. That observes its surroundings and learns what's going on. That can solve generic problems. Which is the definition of intelligence. One of the obvious proofs that what everybody is calling "AI" is fundamentally not intelligent, so it's a blatant misnomer.
Absolutely driving is cultural (all things people do are cultural), but given the tens of millions of miles driven by Waymo, clearly it has managed the cultural factor in the places it has been deployed. Modern autonomous driving is about how people drive far more than about the rules of the road, even on the highly regulated streets of Western countries. Absolutely the constraints of driving in Chennai are different, but what is fundamentally different? What leads to an impossible leap in processing power to operate there?
I definitely recall reading some thinkpieces along the lines of "In the year 203X, there will be no more human drivers in America!" which was and still is clearly absurd. Just about any stupidly high goalpost you can think of has been uttered by someone in the world early on.
Anyway, I'd be interested in a breakdown on reliability figures in urban vs. suburban vs. rural environments, if there is such a thing, and not just the shallow take of "everything outside cities is trivial!" I sometimes see. Waymo is very heavily skewed toward (a short list of) cities, so I'd question whether that's just a matter of policy, or whether there are distinct challenges outside of them. Self-driving cars that only work in cities would be useful to people living there, but they wouldn't displace the majority of human driving-miles like some want them to.
As others will attest, when adherence to driving rules is spotty, behavior is highly variable and unpredictable. You need a degree of straight-up aggression if you want to be able to handle an auto driver who is cheating the laws of physics.
Another example, obvious from crimes in India: people can and will come up to your car during a traffic jam, tap your chassis to make it sound like there was an impact, and then snatch your phone from the dashboard when you roll your window down to find out what happened.
This is simply to illustrate and contrast how pared down technical intuitions of "driving" are, when it comes to self driving discussions.
This is why I think level 5 is simply not happening, unless we redefine what self driving is, or the approach to achieving it. I feel there's more to be had from a centralized traffic orchestration network that supplements autonomous traffic, rather than trying to solve it onboard the vehicle.
Do you really think Waymos in SF operate solely on physics? There are volumes of data on driver behavior, when to pass, change lanes, react to aggressive drivers, etc.
And the point that I am making is that this view was never baked into the original vision of self driving, resulting in predictions of a velocity that was simply impossible.
Physical reality does not have vibes, and is more amenable to prediction than human behavior. Or cow behavior, or wildlife, if I were to include some other places.
This is a semantic discussion, because it is about what people mean when they talk about self driving.
Just ditching the meaning is unfair, because goddamit, the self driving dream was awesome. I am hoping to be proved wrong, but not because we moved our definition.
Carve a separate category out, which articulates the updated assumptions. Redefining it is a cop out and dare I say it, unbecoming of the original ambition.
Networked Autonomous vehicles?
Lol. If you dropped the average westerner into Chennai, they would either: a) stop moving b) kill someone
Decades of machine learning research would like to have a word.
3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.
The same thing with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.
The tech is rapidly advancing and doing good. But it just can't keep up with the bubble of hype. That's the problem. The hype, not the tech.
Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.
So don't confuse the two. Make sure you understand what you're arguing against. Because it sounds like we should be on the same team, not arguing against one another. That just helps the people selling vaporware.
This was never the case, and it's obvious to anyone who has ever been to a factory that does mass-produced plastics.
>Or self-driving, that is "just around the corner" for a decade now.
But it really is around the corner; all that remains is to accept it. That is, to start building and modifying road infrastructure and changing traffic rules to enable effective integration of self-driving cars into road traffic.
Programmers that don't use AI will get replaced by those that do (not just by mandate, but by performance).
> 10 years into "we'll have self driving cars next year"
They're here now. Waymo does 250K paid rides/week.
why don't you bring it up then.
> There will be a turning point but it’s not happened yet.
do you know something that the rest of us don't?
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory power.
Doubling an interest rate from 0.1% to 0.2% already does a lot to your DCF models, and in this case we went from zero (or in some cases negative) to several percentage points. Of course stock prices tanked. That's what any schoolbook will tell you, and that's what any investor will expect.
Companies thus have to start turning dials and adjusting parameters to make the number go up again.
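To spell out the mechanics behind that, a minimal DCF sketch (the cash flow and rates are hypothetical, chosen to mirror the near-zero-to-several-percent move described above):

    # Present value of a hypothetical $100 cash flow arriving in 10 years.
    # Near-zero rates barely discount it; a few percentage points crush it,
    # which is why far-future (growth) cash flows tanked hardest.
    def present_value(cash_flow, rate, years):
        return cash_flow / (1 + rate) ** years

    for rate in (0.001, 0.002, 0.05):
        print(f"rate {rate:.1%}: PV = ${present_value(100, rate, 10):.2f}")

Moving from 0.1% to 0.2% shaves off about a dollar; moving to 5% takes the present value down to roughly $61.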
That said, the vibe has definitely shifted. I started working in software in uni ~2009 and every job I've had, I'd applied for <10 positions and got a couple offers. Now, I barely get responses despite 10x the skills and experience I had back then.
Though I don't think AI has anything to do with it, probably more the explosion of cheap software labor on the global market, and you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is up.
But for software, it's a lot smaller.
https://www.trueup.io/job-trend
I have never gone to Indeed to apply for a job.
That part is so overblown. Twitter was still trying to hit moonshots. X is basically in "keep the lights on" mode, as Musk doesn't need more. Yeah, if Google decides it doesn't want to grow anymore, it can probably cut its workforce by 90%. And it will be as irrelevant as IBM within 10 years at most.
"When interest rates return to normal levels, the ZIRP jobs will disappear." -- Wall Street analyst
We have agency, whether we are brainwashed or not. If we cared about ourselves, we wouldn't need another class, or race, or whatever other grouping to do this for us.
Regarding class struggle, I think class division has always existed, but we the masses have all the tools to improve our situation.
Almost no one has seen a world where the price of money wasn't centrally planned, with a committee of experts deciding it based on gut feel, like they did in command economies like the Soviet Union.
And then thousands of people's lives are disrupted as interest rates swing wildly due purely to government action (corona lockdowns and the Fed's ZIRP response), and somehow it all just ends up with people talking about AI instead.
The true wrongdoers get absolutely no consequences, and we all just carry on like there's no problem. Often because our taxes go to paying hordes of academics and economists to produce layers and layers of sophisticated propaganda that of course this system is the best one.
Absurd and shitty world.
p.s.: I'm a big fan of yours on Twitter.
If we were to unionize, we could force this machine to a halt and shift the balance of power back in our favor.
But we don't, because many of us have been brainwashed to believe we're on the same side as the ones trying to squeeze us.
Last time it was tried the union coerced everyone to root for their exploiters. People that unionize aren't magically different.
> the tune is "be leaner".
Seems like they're happy to start cutting limbs to lose weight. It's hard to keep cutting fat if you've been aggressively cutting fat for so long. If the last CEO did their job there shouldn't be much fat left.
Funny how that fat analogy works... because the head (brain) has a lot more fat content than muscles/limbs.
It's amazing and cringy the level of parroting performed by executives. Independent thought is very rare amongst business "leaders".
At this point I'm not sure it's lack of independent thought so much as lack of thought. I'm even beginning to question if people even use the products they work on. Shouldn't there be more pressure from engineers at this point? Is it yes men from top to bottom? Even CEOs seem to be yes men in response to shareholders, but that's like being a yes man to the wind.
When I bring this stuff up I'm called negative, a perfectionist, or told I'm out of touch with customers and/or don't understand "value". Idk, maybe they're right. But I'm an engineer. My job is to find problems and fix them. I'm not negative, I'm trying to make the product better. And they're right, I don't understand value. I'm an engineer; it's not my job to make up a number about how valuable some bug fix is or isn't. What is this, "Whose Line Is It Anyway?" If you want made-up dollar values go ask the business monkeys, I'm a code monkey.
So you think all bugs are equally important to fix?
Do you think every bug's monetary value is perfectly aligned with user impact? Certainly that isn't true. If it were we'd be much better at security and would be more concerned with data privacy. There's no perfect metric for anything, and it would similarly be naïve to think you could place a dollar value on everything, let alone accurately. That's what I'm talking about.
My main concern as an engineer is making the best product I can.
The main concern of the manager is to make the best business.
Don't get confused and think those are the same things. Hopefully they align, but they don't always.
It's not like companies laid off whole functions. These jobs will continue to be performed by humans - ZIRP just changes the number of humans and how much they get paid.
> These workers need to retrain and move on.
They only need to "retrain" insofar as they keep up with the current standards and practices. Software engineers are not going anywhere.
https://en.m.wikipedia.org/wiki/Second_inauguration_of_Donal...
It was the war with Russia that drove the Fed to raise interest rates in 2022 - a measure intended to curb inflation triggered by spikes in the prices of economic inputs (gas, oil, fertilizer, etc.).
The tech layoffs started later that year.
Widespread job cuts are an intended effect of raising interest rates - more unemployed = less spending = keeps a lid on inflation.
AI is just cashing in on the trend.
Of course, nothing is further from the truth. "Russian invasion of Ukraine" is what should be written there.
Perhaps "Russia's war" would have been a better phrasing that captures both spirits (but it's not a phrase you hear said much).
For example, Ukraine was a very important food supplier -- one of the top grain suppliers in the World -- and the invasion caused shortages of some foods. Another example is that Ukraine provided a good source of iron ore for EU-based manufacture. If nothing else that would be important to USAmericans as indicating a market opportunity.
Without that invasion and Putin's inspiration, would Trump have threatened invasion of USA's neighbours? That's got to be vital to USA finances too.
Here's the BBC using it[1], CNN[2], The AP [3], The Conversation [4]
[1]https://www.bbc.com/news/articles/c0l0k4389g2o
[2]https://www.cnn.com/2024/07/21/europe/europe-conscription-wa...
[3]https://apnews.com/article/russia-ukraine-war-zelenskyy-star...
[4]https://theconversation.com/why-russias-armed-forces-have-pr...
Also even some of Trump's team.
The establishment doesn't suddenly get swapped out because a new President gets in (even if the tides are shifting).
Heroyam Slava.
That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along.
E.g. my views didn't change one iota since 2003, but these views at some point magically stopped conferring a "Saddam sympathizer" moniker from people who demanded unthinking ideological commitment.
It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too. The more passionate ones make routine demands for ideological purity similar to the one above.
> That is, until 10 years later when they have a new narrative about a different military rival. They quietly stop pushing the old narrative and everyone quietly admits the old one was kinda bullshit all along. [...] It works the same way with people who live under and unthinkingly consume Russian imperialist propaganda too.
It certainly does. The Russian war against Ukraine began with unmarked soldiers, nicknamed "little green men," and Russia denying any involvement, claiming instead that Ukraine was in the midst of a civil war. When the latest Russian weapons appeared in Ukraine, Russia claimed that tourists must've bought them from military surplus stores. Then we went through a lot of bullshit - that Ukrainian nationalists were committing genocide in Donbas, or that Ukraine was secretly developing nuclear and biological weapons.
Now, 10 years later, the narrative has shifted to how this has always been a major confrontation with the USA and NATO, a "proxy war". No doubt, it will shift many more times. Looking forward to when the current "supreme commander" Putin will be regarded as a failure, much like Gorbachev, and blamed for causing the difficult 2030s.
Nobody has ever been jailed for this terrorist attack and all the evidence points to Ukrainian fascists being culpable, including:
* The Berkut officers who were there being tried, with the trial falling through because all of them were quite obviously very far away from the protestor-controlled hotel where the sniper's nest was set up.
* A Ukrainian war hero who had no reason to lie who was there telling people who was responsible (before being thrown in jail).
* A group of the snipers (mercenaries who were there who never got paid) went public.
It was as much a proxy war back then, it was just fought under the surface with NGO agitators instead of weapons deliveries.
The fact that you have to make something like this up within the first ten words of your narrative really shows just how detached from reality it is.
I wonder what narratives will dominate after the war, when reality sets in: hundreds of thousands dead and never returning home; several times as many disabled, many of them severely; the might and pride of the Russian military sunk or blown up; returned soldiers running massive criminal rings like in the 1990s; state budget empty from massive military spending, leaving people to survive on their own as safety nets crumble. Some conspiracy story about snipers 10+ years ago in another country doesn't really cut it, and getting beaten in an imagined confrontation with the "collective West" sounds really pathetic too, especially when the other side didn't even step into the boxing ring. The USAF hasn't flown a single sortie against Russia, yet strategic bombers are already burning on airfields like in the opening hours of Operation Barbarossa.
Reichstag fire. All the nutter conspiracy theorists think Hitler did it. Obviously you know better.
>The fact that you have to make something like this up
Evidence doesn't mean much to some people. They will follow the narrative of their leaders, whether it is dictated by Moscow blabbing about biolabs, or by Washington claiming it allied with freedom-loving democrats in Ukraine rather than Nazi goons.
The near zero interest rates, pause on student loan payments, pause on rent payments, doubling of unemployment pay, and then the dustings of stimulus checks and bonus childcare checks, all while most white collar workers just continued working like nothing happened, created an incredibly cash rich environment that most people have never seen before.
And the PPP loans handouts to business owners just to throw more gas on the fire.
Software in the US has (aside from maybe finance) been an almost uniquely well-compensated field. That will probably adjust over time especially given the inflow of grads primarily in it for the money.
The software industry was overhiring probably ever since the dot-com bubble, because after the bubble burst revenue and profits grew rapidly and it never really stopped. I would rather blame the managers who constantly pushed for more workers instead of increasing the productivity of the existing workforce.
Prices and unemployment really started to rise after that. The EU buys overpriced LNG from the US, so the US is somewhat isolated from that. But the US is not isolated against the general economic downturn worldwide.
Politicians do not care. Merz, with barely 25% approval of the German population, continues the policies outlined by Hegseth during his visit to the EU. Trump still plays theater to appease his MAGA base, but Senators Rubio and Graham increasingly start holding the reins.
Russia's invasion of Ukraine however caused a whole bunch of economic inputs like energy and fertilizer to spike, and central banks world wide didn't want economies to "get used to" constant high inflation rates, causing a perpetual problem.
But instead all the productivity workers just switched to their home office and things just kept working. The stimulus should have been shut off in early-mid 2021 when this was abundantly clear. But the government let it run because people were so jubilant in the money shower.
That was not entirely true.
Trump’s pandemic spending (lockdowns, vaccines…), and subsequently Biden’s, but most importantly the curiously named Inflation Reduction Act were obvious drivers. You can’t stimulate an already overheated economy to the tune of 2 trillion without getting Larry Summers a bit worked up.
And remember when they first said inflation was "transitory" and caused by supply chain issues from the economy reopening after covid? They didn't raise interest rates then because, like I mentioned above, interest rates don't help with supply shocks. If they did, the Fed would have raised rates then.
I was actively looking at this time, and for months prior, and it went from a few recruiters a day reaching out to a few a week.
And that's only the indirect effect on equity funding; debt funding just directly becomes more expensive.
I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible for the reduction in tech than AI when it comes to US jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or a different continent, as long as there is at least some daily time overlap (which is also why I've seen a ton of offshoring to Latin America and Europe over places like India).
Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.
In big dollar markets, the program is used more for special skills. But when a big bank or government contractor needs marginally skilled people onshore, they open an office in Nowhere, Arizona, and have a hard time finding J2EE developers. So some company from New Jersey will appear and provide a steady stream of workers making $25/hr.
The calculus is that more H1-B = less offshore.
The smart move would be to just let in skilled workers from India, China, etc. with a visa that doesn't tie them to an employer. That would end the abusive labor practices and probably reduce the number of lower-end workers, or the incentive to deny entry-level employment to US nationals.
The problem is that the left, which was historically pro-labor, abdicated this position for racial reasons, and the right was always about maximizing the economic zone.
Basically, progressives in Denmark have argued for very strict immigration rules, the essential argument being that Denmark has an expensive social welfare state, and to get the populace to support the high taxes needed to pay for this, you can't just let anyone in who shows up on your doorstep.
The American left could learn a ton of lessons from this. I may loathe Greg Abbott for lots of reasons, but I largely support what he did bussing migrants to NYC and other liberal cities. Many people in these cities wanted to bask in the feeling of moral superiority of being "sanctuary cities", but public sentiment changed drastically when they actually had to start bearing a large portion of the cost of a flood of migrants.
The real reason is that they are totally beholden to powerful business interests that benefit from mass immigration, and the ensuing suppression of American labor movements. The racial equity bit is just the line that they feed to their voters.
I think the real problem is that the median voter is either unable to, has no time to or no interest to understand basic economics and second-order consequences. We see this on both sides of the aisle. Policies like caps on credit card interest rates, rent control or no tax on tips are very, very popular while also being obviously bad after thinking about it for just 1 minute.
This is compounded by there being relatively little discussion of policies like that. They get reported on but not discussed and analyzed. This takes us back to your point about the perception of the Democratic party. The media (probably because the median voter prefers it) will instead discuss issues that are more emotionally relatable, like the border being "overwhelmed", trans athletes, etc. which makes it less likely to get people to think about economic policy.
This causes a preference for simple policies that seem to aim straight for the goal. Rent too high? Prohibit higher rent! Credit card fees too high? Prohibit high fees! Immigrants lower wages? Have fewer immigrants!
Telling the median voter that H1-B visa holders lower wages due to the high friction of changing sponsors, and that the solution is to loosen the visa restrictions, is hardly going to go over well with much of the electorate. Likely only part of the initial problem statement will even reach most voters, in the form of "H1-B visas lower wages". Someone who simply takes that simplified issue and runs with cutting down further on immigration will be much more likely to succeed, given how public opinion is currently formed.
All this stuff is why I love learning about policy and absolutely loathe politics.
What do you think of that?
Further, I'm very disappointed that the median voter doesn't seem to understand or care about the policies they vote for. Tariffs and deportations are recipes to cause more inflation, yet here we are.
Don’t understand why other countries make it harder.
The EU would flourish economically, and there would be no room for the ultra-conservative right to gain any real foothold (which is 95% just the failed immigration topic, just like Brexit was).
Alas, we are where we are. They slowly backpedal, but it's too little, too late, as usual. I blame Merkel for half of the EU's woes; she really was a horrible leader of an otherwise very powerful nation, made much weaker and less resilient by her flawed policies and her failure to grok where the world was heading.
Btw she still acknowledges nothing and keeps thinking how great she was. Also a nuclear physicist who turned off all existing nuclear plants too early, so Germany has to import massive amounts of electricity from coal-burning plants. You can't make it up.
How does Switzerland keep local companies from hiring workers on low wages to compete against locals? How do they police it?
I already know that the right-wing supports h1bs, Trump himself said so.
People in tech are so quick to shoot themselves in the foot.
Tech has its barriers too. Most people I've met in tech come from relatively rich families. (Families where spending $70k+/yr on college is not a major concern for multiple kids - that's not normal middle class at all even for the US)
Even literal Nazis were exempted from immigration controls on the basis of extreme merit.
TACO Trump himself said he'd reveal his health care plan in two weeks, many many years ago, many many times. But then he chickened out again and again and again and again and again. So what the buk buk buk are you talking about?
> nothing ever happens here that helps the workers and whatever rights we have now are slowly dwindling
it's almost as if we need a 'workers party' or something... though i'd imagine first-past-the-post in the U.S. makes that difficult.
I felt enormous sympathy for my coworkers here with that visa. Their lives sucked because there was little downside for sociopathic managers to make them suck.
Most frustrating was when they were doing the same kind of work I was doing, like writing Python web services and whatnot. We absolutely could hire local employees to do those things. They weren't building quantum computers or something. Crappy employers gamed the system to get below-market-rate-salary employees and work them like rented mules. It was infuriating.
While working at Google I worked with many many amazing H1B (and other kinds) visa holders. I did 3 interviews a week, sat on hiring committees (reading 10-15 packets a week) and had a pretty good gauge of what we could find.
There was just no way I could see that we could replace these people with Americans. And they got paid top dollar and had the same wlb as everyone else (you could not generally tell what someone’s status was).
But wanna use it as a way to undercut American jobs with 80-hour-a-week laborers, as I've personally witnessed? Nah.
My criticisms against the H1B program are completely against the companies who abuse it. By all means, please do use it to bring in world-class scientists, researchers, and engineers!
https://www.linkedin.com/posts/jamesfobrien_tech-jobs-have-d...
But, for existing teams they wanted (reasonably) to avoid splitting between locations. So you need someone local.
One theory is that the benefit they might be providing over domestic "grads" is lack of prerequisites for promotion above certain levels (language, cultural fit, and so on). For managers, this means the prestige of increased headcount without the various "burdens" of managing "careerists". For example, less plausible competition for career-ladder jobs which can then be reserved for favoured individuals. Just a theory.
Obviously the only real solution to an artificially created labor shortage is looking outside the existing labor force. Simply randomly hiring underserved groups didn't really make sense because they weren't participants.
Where I work, we have two main goals when I'm involved in the technical hiring process: hire the cheapest labor and try to increase diversity. I'm not necessarily against either, but those are our goals.
It's a hard truth for many Americans to swallow, but it is the truth nonetheless.
Not to say there isn't an incredible amount of merit... but the historical impact of rampant nepotism in the US is widely acknowledged, and this newer manifestation should be acknowledged just the same.
Sorry, dude, it's like, all I know.
I hear this argument where I live for various reasons, but surely it only ever comes down to wages and/or conditions?
If the company paid a competitive rate (i.e., higher), locals would apply. Surely blaming a lack of local interest is rarely going to be due to anything other than pay or conditions?
I enjoy meeting the very smart people from all sorts of backgrounds - they share the values of education and hard work that my parents emphasized, and they have an appreciation for what we enjoy as software engineers; US born folks tend to have a bit of entitlement, and want success without hard work.
I interview a fair number of people, and truly first rate minds are a limited resource - there's just so many in each city (and not everyone will want to or be able to move for a career). Even with "off-shoring" one finds after hiring in a given city for a while, it gets harder, and the efficient thing to do is to open a branch in a new city.
I don't know, perhaps the realtors from my class get more money than many scientists or engineers, and certainly more than my peers in India (whose salaries have gone from 10% of mine to about 40% of mine in the past decade or two), but the point is the real love of solving novel problems - in an industry where success leads to many novel problems.
Hard work, interesting problems, and building things that actual people use - these are the core value prop for software engineering as a career; the money is pretty new and not the core; finding people who share that perspective is priceless. Enough money to provide a good start to your children and help your family is good, but never the heart of the matter.
Nadella ascending to the leadership of Micro"I Can't Believe It's Not Considered A State-Sponsored Defense Corp"soft is what got my mildly xenophobic (sorry) gears turning.
Other than a few international visitors, I’d expect the makeup to look like the domestic tech worker demographics rather than like the global population demographics.
The whole reason H1Bs were invented is to disempower the existing workforce. Not reaching for a (long overdue) tool of power for tech workers is playing right into their hand.
Edit: I found this funny quote describing a scab from the early 1900s:
https://en.wikipedia.org/wiki/Jack_London#Diatribe_about_sca...
> After God had finished the rattlesnake, the toad, and the vampire, he had some awful substance left with which he made a scab. A scab is a two-legged animal with a corkscrew soul, a water brain, a combination backbone of jelly and glue. Where others have hearts, he carries a tumor of rotten principles.
Knowing one’s enemy is key to fighting them.
I have never once worked with a product manager who I could describe as “worth their weight in gold”.
Not saying they don’t exist, but they’re probably even rarer than you think.
These types all go to the same schools and do really well, interview the same, and value the prestige of working in big tech. So it's pretty easy to identify them and offer them a great career path and take them off the market.
Technical founders are way trickier to identify as they can be dropouts, interview poorly, not value the prestige etc.
Again, IMO the good ones added a lot of value by making sure no balls got dropped, which is easy to do with large, multi-team projects. Most of them, though, did a lot of just "status checks" and meeting updates.
Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.
Honestly it was 10 years too late. The big innovations of the 2010 era were maturing. I’ve spent my career maintaining and tweaking those, which does next to zero for your career development. It’s boring and bloated. On the bright side I’ve made a lot of money and have no issues getting jobs so far.
For example think of space x, Waymo, parts of US national defense, and the sciences (cancer research, climate science - analyzing satellite images, etc). They are doing novel work that’s certainly not boring!
I think you’re probably referring to excitement and cutting edge in consumer products? I agree that has been stale for a while.
Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.
[1] https://en.wikipedia.org/wiki/List_of_United_States_metropol...
[2] https://en.wikipedia.org/wiki/Personal_income_in_the_United_...
More like it means ending up with government-provided bare minimum handouts to not have you starve (assuming you somehow manage to stay on minimum wage all your life).
The "min wage" of HN seems to be "living better than 98% of everyone else"
I mean a real wage associated with standards of living that one took for granted as "normal" when I was young.
If I took a job for ~100k in Washington, I'd live worse than I did as a PhD student in Sweden. It would basically suck. I'm not sure ~120k would make things that different.
The erosion of the standard of living in the US (and the West more broadly) is not something to be ignored in any discussion of wages.
The issue is salary expectations in the US are much higher than those in much of Western Europe despite having similar CoL.
And $120k for a new grad is only a tech specific thing. Even new grad management consultants earn $80-100k base, and lower for other non-software roles and industries.
But that's my point - salaries are factored based on labor market demands and comparative performance of your macroeconomy (UK high finance and law salaries are comparable with the US), not CoL.
But in the UK and Ireland they get free healthcare, paid vacation, sick leave and labor protections, no?
There's a reason you don't see new grad hiring in France (where they actually try to enforce work hours), and they have a subsequently high youth unemployment rate.
Though even these new grad roles are at risk of moving to CEE, where governments are giving massive tax holidays to the tune of $10-20k per employee if you invest enough.
And the skills gap I mentioned about CS in the US exists in Western Europe as well. CEE, Israel, and India are the only large tech hubs that still treat CS as an engineering discipline instead of as only a form of applied math.
I happen to have a sibling in consulting who was seconded from London to New York for a year, doing the same work for the same company, and she found the work hours in NY to be ludicrously long (and not for a significant productivity gain: more required time-at-desk). So there are varying levels of "expected to work off the clock hours".
I pay over 40% effective tax rate. Healthcare is far from free.
Think they're too high? You're free to start a company and pay less.
Some managers read Dilbert and think it's intended as advice.
"The reality is that women are treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It’s just easier this way for everyone. You don’t argue with a four-year old about why he shouldn’t eat candy for dinner. You don’t punch a mentally handicapped guy even if he punches you first. And you don’t argue when a women tells you she’s only making 80 cents to your dollar. It’s the path of least resistance. You save your energy for more important battles." -Scott Adams
"Women define themselves by their relationships and men define themselves by whom they are helping. Women believe value is created by sacrifice. If you are willing to give up your favorite activities to be with her, she will trust you. If being with her is too easy for you, she will not trust you." -Scott Adams
"Nearly half of all Blacks are not OK with White people. That’s a hate group." -Scott Adams
"Based on the current way things are going, the best advice I would give to White people is to get the hell away from Black people. Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed." -Scott Adams
"I’m going to back off from being helpful to Black Americas because it doesn’t seem like it pays off. ... The only outcome is that I get called a racist." -Scott Adams
Should have been 'better still'.
I swear, folks: dennis_jeeves2 is not my sock puppet, the way Scott "plannedchaos" Adams is his own sock puppet and biggest fan.
Scott Adams Poses as His Own Fan on Message Boards to Defend Himself:
https://comicsalliance.com/scott-adams-plannedchaos-sockpupp...
>Dilbert creator Scott Adams came to our attention last month for the first time since the mid to late '90s when a blog post surfaced where he said, among other things, that women are "treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It's just easier this way for everyone."
>Now, he's managed to provoke yet another internet maelstorm of derision by popping up on message boards to harangue his critics and defend himself. That's not news in and of itself, but what really makes it special is how he's doing it: by leaving comments on Metafilter and Reddit under the pseudonym PlannedChaos where he speaks about himself in the third person and attacks his critics while pretending that he is not Scott Adams, but rather just a big, big fan of the cartoonist.
>And what makes it really, really special is the level of spectacular ego and hilarious self-congratulation suddenly on display in the comments when you realize they were written by Scott Adams' number one fan... Scott Adams. [...]
Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."
At some point in the 2000's, every manager decided they needed weekly 1:1's, resulting in even more meetings. Many of these are entirely ineffective. As one boss told me, "I've been told I need to have 1:1's, so I'm having them!" I literally sat next to him and talked every day, but it was a good time to go for coffee...
Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.
I’m worried about the shrinking number of opportunities for juniors.
I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.
And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?
At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!
The data does not support this. The businesses with the highest market caps are the ones with the highest earnings.
https://companiesmarketcap.com/
Sort by # of employees and you get a list of companies with lower market caps.
Either way, there is no data I have seen to suggest market cap correlates with number of employees. The strongest correlation I see is to net income (aka profit), and after that would be growing revenues and/or market share.
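If anyone wants to check that claim rather than eyeball the site, here's a rough sketch (the CSV file and column names are hypothetical, standing in for an export of data like the above):

    # Compare how market cap correlates with net income vs. employee count.
    import pandas as pd

    df = pd.read_csv("companies.csv")  # hypothetical export with these columns
    print(df[["market_cap", "net_income", "employees"]].corr()["market_cap"])

If the claim above is right, the net_income correlation should come out well above the employees one.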
https://www.youtube.com/watch?v=-azFNwF6fa0
Afterlife (video game)
You said you were at large companies, so this is a hard call to make. A lot of large companies work on lots of small products knowing they probably won't work, but one of them might, so it's still worth it to try. It's essentially the VC model.
And there's multiple confounding factors at play.
Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.
But also, the dynamic that causes the existence of bullshit jobs hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.
- Value creators (i.e. the ones historically carrying companies, per the 80/20 rule) are generally the ones cautious and/or fearful of AI. The ones that carried most of the company. Their output is measurable and definable, so it is able to be automated.
- The people in the jobs you mention in your post conversely are usually the ones most excited about AI. The ones in meetings all day, in the corporate machine. By definition their job is already not well defined anyway - IMV this is harder to automate. They are often there for other reasons other than "productive output" - e.g. compliance, nepotism, stakeholder management, etc.
Everywhere I've ever worked, we had 3-4X more work to do than staff to do it. It was always a brutal prioritization problem, and a lot of good projects just didn't get done because they ended up below the cut line, and we just didn't have enough people to do them.
I don't know where all these companies are that have half their staff "not doing anything productive" but I've never worked at one.
What's more likely? 1. Companies are (for reasons unknown) hiring all these people and not having them do anything useful, or 2. These people actually do useful things, but HN commenters don't understand those jobs and simply conclude they're doing nothing?
Managers always want more headcount. Bigger teams. Bigger scope. Promotions. Executives have similar incentives or don’t care. That’s the reason why they’re bloated.
I’ve seen those guys; it is painful to watch.
This had me thinking, how are they going to get "clout", by comparing AI spending?
First, is AI really a better scapegoat? "Reducing headcount due to the end of ZIRP" maybe doesn't sound great, but "replacing employees with AI" sounds a whole lot worse from a PR perspective (to me anyway).
Second, are companies actually using AI as the scapegoat? I haven't followed it too closely, but I could imagine that layoffs don't say anything about AI at all, and it's mostly media and FUD inventing the correlation.
whereas "AI" is intuitively an external force; it's much harder to assign blame to company leadership.
Because they don't have to do that. They could just operate at max efficiency all the time.
Instead, they spread the wealth a bit by having bullshit jobs, even if the existence of these jobs is dependent on the market cycle.
I do.
It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
I'm a person that would be hypothetically supportive of something like DOGE cuts, but I'd rather have people earning a living even with Soviet-style make work jobs than unemployed. I don't desire to live in a cutthroat "competitive" society where only "talent" can live a dignified life. I don't know if that's "wealth distribution" or socialism or whatever; I don't really care, nor make claim it's some airtight political philosophy.
> It's much more important that people live a dignified life and be able to feed their families than "increasing shareholder value" or whatever.
it's just my intuition, but talking to many people around me, i get the feeling like this is why people on both "left" and "right" are in a lot of ways (for lack of a better word) irate at the system as a whole... if that's true, i doubt AI will improve the situation for either...
That's very optimistic! I don't fully agree with it, but I certainly know some very intelligent people that I wish were contributing more to the world than they do as pawns in a game of corporate chess.
I think quotes around "real value" would be appropriate as well. Consider all the great engineering it took to create Netflix, valued at $500b - which achieves what SFTP does for free.
The parent comment was complaining about certain employees' contributions to "real value" or lack thereof. My question is: how do you ascertain the value of work in this context, where the software isn't what's valuable but the IP is? And further, how do you justify working on a product that's already a solved problem and still refer to it as "creating 'real' value"?
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
Companies that needed to hire 10 people to grow only need to hire 9 now
In less than 5 years that’s going to be 7 or 6 people
I’m doing more with 5 engineers than I was able to do with 15 just 10 years ago
Part of that is libraries etc have matured too but we’ve reached the point from a developer perspective that you don’t need to build new technologies, you just need to put what exists together in new ways
All the parts exist for any technology to be built, it’s about composition and distribution at this point
So then I start thinking ... what sort of things am I doing that take me away from talking to customers? I spend a lot of time on implementation. I spend a lot of time on administrative sales tasks (chasing people for meetings, writing proposals, negotiating contracts). I spend a lot of time on meeting prep and follow-up. And many more. So I'm always on the hunt for tools with a problem already in mind.
In terms of specific tools...
Claude is a great backbone for a lot. Both the chatbot but also the API. I use the chatbot to help me write proposals and review contracts. I used it to write scripting to automate our implementation process which was once quite manual and is now a button click.
Cursor has been a game changer. In particular, it means that we spend very little time on bugfixes and small features. This keeps my CTO almost 100% focused on big picture needle-moving projects. We are now doing some research into things like Codex/Claude Code to see how we could improve this further.
Another app that I really love is called Granola. It automatically joins all of my meetings, writes notes, reminds me what promises I made, helps me write follow-up emails, and helps me prep for meetings.
Finally, we use an email client called Sedna (disclaimer: I used to work at Sedna) which is fully programmable. We've been building our own internal tooling (leveraging the Claude API) on top of Sedna to help automate different workflows. For example, my inbox is now perfectly prioritised. In many cases, when I receive emails from customers, an AI has already written a draft that I can review and send. I know there are a lot of out-of-the-box tools out there like Fyxer to help with things like this, but I've really appreciated the ability to get exactly what we want by building certain things ourselves.
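For flavor, here's a minimal sketch of what that kind of internal email tooling might look like (entirely my own assumptions: the prompt, model alias, and function name are illustrative, not their actual setup):

    # Draft a reply to an inbound customer email via the Anthropic API,
    # for a human to review and send.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def draft_reply(customer_email: str) -> str:
        message = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=500,
            system="Draft a concise, friendly reply to the customer email, for human review.",
            messages=[{"role": "user", "content": customer_email}],
        )
        return message.content[0].text

The leverage, as described above, comes from wiring something like this into the email client so a draft is already waiting before you open the thread.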
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example: (From Wiki): > In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK they are talking about DeepMind AlphaFold. Related (also from Wiki):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
Yes, it's an example of ML used in science (other examples include NN-based force fields for molecular dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little about how the software package they are using works (beyond knowing the different limitations of numerical vs statistical models).
The whole "but look, AI in science" thing seems to me like a motte-and-bailey argument, used to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
I specifically didn't call LLMs a statistical model - while they technically are, it's obvious they are something more. While intelligence is a hard concept to pin down, current-gen LLMs can already do most knowledge-work tasks better than most people (they are better writers than most people, they can program better than most people, they are better at math than most people, have better medical knowledge than most people...). If the human is the mark of intelligence, it has been achieved.
AlphaFold is something else though. I work with something similar (specifically FNOs for biophysical simulations), and the insight that data-only models perform better than physics-based models is novel - I think the Nobel prize was deservedly awarded. However, the thing is still closer to a curve fit than to LLMs regarding intelligence - or in other words, it's about as "intelligent" as permutation-based black boxes were.
Can you give an example, say in medicine, where AI made a significant advancement? That is, we are talking about neural networks and up (i.e., LLMs), not some local optimization.
"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"
In the scenario being discussed - if a bunch of companies hired a whole bunch of lawyers, marketers, etc., that might make salaries go up due to increased demand (though probably not by a huge amount, as tech isn't the only industry in the world). That still first requires companies to be hiring more of these types of people for the effect to happen, so we should still see some of the increased output even if there is a limiting factor. We would also notice the salaries of those professions going up, which so far hasn't happened.
In your own words, a business will go out of business quickly if supply and demand do not match. So unless you are confident that there will be a buyer, you cannot raise prices infinitely.
> because a living space is a basic necessity
While most people can live without a Netflix subscription (hence Netflix cannot raise prices infinitely and still expect to find buyers), most people prefer to live in housing. Housing is a basic necessity, hence as a landlord you can confidently raise prices up to the affordability limit.
> Something else will get expensive in the meantime
Let's assume electricity prices get really cheap because humanity discovers fusion. Well guess what: now landlords will increase the rents again, because they can.
Hope I expressed myself to your liking. I mean, you just waltz in here and start lecturing people about capitalism; maybe you should change your career path and become a teacher.
The tech is going to have to be absolutely flawless, otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it.
For most of them I'm not seeing any of those issues.
A couple years ago, we thought the trend was without limits - a five second video would turn into a five minute video, and keep going from there. But now I wonder if perhaps there are built in limits to how far things can go without having a data center with a billion Nvidia cards and a dozen nuclear reactors serving them power.
Again, I don't know the limits, but we've seen in the last year some sudden walls pop up that change our sense of the trajectory down to something less "the future is just ten months away."
The quick cuts thing is a huge turnoff so if they have a 15 second clip later on, I missed it.
When I say "1 second" I mean that's what I was doing with automatic1111 a couple years ago. And every video I've seen is the same 30-60 generated frames...
LLMs only exist because the companies developing them are so ridiculously powerful that can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).
Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.
"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.
They want to ban states from imposing their own regulations on AI.
Let's take operating systems as an example. If there are great productivity gains from LLMs, why aren't companies like Apple, Google and MS shipping operating systems with vastly fewer bugs and clearing up backlogged user feature requests?
They have trouble with debugging obvious bugs though.
https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap - they make about 150K USD per year minimum (total comp).
Ok, so by 2027 we should have fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI, of course). I'm so excited for a future where IT projects stop going overtime and overbudget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
Doesn't really matter if AI actually works or not.
It also matters a bit where the reputation cost hits. Layoffs can spook investors because it makes it look like the company is doing poorly. If the reputation hit for ai is to non-investors, then it probably matters less.
E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492
What makes you so sure of the productivity boost when we aren't seeing a change in output?
LLMs are also not very useful for long-term strategy or for coming up with novel features or combinations of features. They are also not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boilerplate code, but not really for high-level features.
From my experience, this stuff is rarely introduced to save developers from typing in the code for their logic. Actual reasons I observe:
1. SaaS sales/marketing pushing their offerings on decision makers - software being a pop culture, this works pretty well. It can be hard for internal staff to push back on What Everyone Is Using (TM). Even if it makes little to no sense.
2. Outsourcing liability, maintenance, and general "having to think about it". Can be entirely valid, but often it indeed comes from an "I don't want to think of it" kind of place.
I don't see this stuff slowing down GenAI or not, mainly because it has usually little to do with saving time or money.
How do you know this? What are the bottlenecks?
The company that I work for is currently innovating very fast (not LLM related), creating value for other companies that they have never gotten from any other business. I know this because when they switch to our company, they tell us how much better our software product is compared to anything they've ever used. It has tons of features that no other company has. That's all I can say without doxxing too much.
I feel like it's unimaginative to say:
> What more tech is there to sell besides LLM integrations?
I have like 7 startup ideas written down in my notes app for software products that I wish I had in my life, but I don't have time to work on them and can't find anything that exists for them. There is so much left to create.
Now, there are a few considerations I don't believe you have factored in:
- Just because your company has struck gold: does that mean that pathway is available or realistic enough for everyone else; and to a more important point, is it /significant/ enough that it can scoop the enormous amount of tech talent on the market currently and in the future? I don't believe so.
- Segueing, "software products that I wish I had in my life." Yes, I too have many ideas, BUT: is the market (the TAM if you will) significant enough to warrant it? Ok, maybe it is -- how will you solve for distribution? Fulfillment is easy, but how are you going to not only identify prospective customers (your ICP), find them and communicate to them, and then convince them to buy your product, AND do this at scale, AND do this with low enough churn/CAC and high enough retention/CLTV, AND is this the most productive and profitable use of your time and resources?
Again, ideas are easy -- we all have them. But the execution is difficult. In the SaaS/tech space, people are burned out from software. Everyone is shilling their vibe-coded SaaS or latest app. That market is saturated, people don't care. The consumer economy is suffering right now due to the overall economy and so on. The next avenue is enterprise/B2B -- cool, still issues: buyer fatigue; economic uncertainty leading to anemic budgets and paralysis while the "fog" clears. No one is buying -- unless you can guarantee they can make money or you can "weather the storm" (see: AI, and all the top-down AI mandates every single PE co and board is shoving down exec teams' throats).
I'm talking in very broad strokes on the most impactful things. Yes, there is much to create -- but who is going to create it and who is going to buy it (with what money?). This is a people problem, not a tech problem. I'm specifically talking about: "what more tech is there to sell -- that PEOPLE WILL BUY -- besides LLM integrations?" Again, I see nothing -- so I have pivoted towards finance and selling money. Money will not go out of fashion for a while (because people need it for the foreseeable future).
Ask yourself, if you were fired right now at this moment: how easy would it be for you to get another job? Quite difficult unless you find yourself lucky enough to have a network of people that work in businesses that are selling things that people are buying. Otherwise, good luck. You would have more luck consulting -- there are many many many "niche" products and projects that need to be done on small scales, that require good tech talent, but have no hope of being productized or scaled (hint!).
I do think I may struggle a bit to find something comparable to my current company, but we’re also hiring right now. And it’s a very small company in the grand scheme of things, even though we have customers much bigger.
I guess having that experience makes me think that there must be a lot of other small companies working in their own interesting niche, providing a valuable product for a subset of major companies. You just don’t usually know they exist unless you need their specific niche.
But I recognize your points too. It seems like the B-to-C space is really tricky right now, and likely fits closer with what you’re describing.
I think that the flip side is that a company doesn’t need to make it big to be successful. If you can hire 5 developers and bring in $2m/yr, there’s nothing at all wrong with that as a business. Maybe we will get lucky and the market will trend towards more of those to fill in the void that you mentioned. I think it could lead to a lot of innovation and a really healthy tech world! But maybe it’s just being overly optimistic to think that might be the path forward :)
I don't get it either. You hire someone in the hope of ROI. Some things work, some kinda don't. Now people will be n times more productive, therefore you should hire fewer people??
That would mean you have no ideas. It says nothing about the potential.
Shipping features faster != innovation or improvements to existing products
I’m not as bullish as some are on the impact of AI, but it does feel nice when you can deliver something in a fraction of the time it used to take. For me, it’s more useful as a research and idea-exploration tool, less so for writing code. Part of that is that I’m in Scala land, so it just tends not to work as well as it would for a more mainstream language.
We haven’t used it to help the product management and solution exploration side, which seems to be a big constraint on our execution.
Luckily software companies are not ball bearings factories.
Why wouldn't you just 10x the productive output instead?
If the competition instead uses their productivity boost to do layoffs and increase short term profits, you are likely to outcompete them over time.
> shipping features and fixes faster than ever before
Meanwhile Apple duplicated my gf's contact, creating duplicate birthdays on my calendar. It couldn't find duplicates despite matching name, nickname, phone number, birthdays, and that both contacts were associated with her Apple account. I manually merged and ended up with 3 copies of her birthday in my calendar... Seriously, this shit can be solved with a regex...
The number of issues like these I see is growing exponentially, not decreasing. I don't think it's AI though, because it started before that. I think these companies are just overfitting whatever silly metrics they have decided are best
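To make the "solved with a regex" point concrete, here is a minimal sketch of the normalize-and-compare duplicate check being described. The field names and records are hypothetical, not Apple's actual contact schema:

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip everything but digits so formatting differences don't matter."""
    return re.sub(r"\D", "", raw)

def looks_like_duplicate(a: dict, b: dict) -> bool:
    """Flag two contact records as probable duplicates when the obvious
    identifying fields agree after normalization."""
    return (
        a["name"].strip().lower() == b["name"].strip().lower()
        and normalize_phone(a["phone"]) == normalize_phone(b["phone"])
        and a.get("birthday") == b.get("birthday")
    )

contacts = [
    {"name": "Jane Doe", "phone": "+1 (555) 010-0199", "birthday": "1990-04-02"},
    {"name": "jane doe", "phone": "15550100199", "birthday": "1990-04-02"},
]
print(looks_like_duplicate(contacts[0], contacts[1]))  # True
```

Real contact merging has more edge cases (missing fields, multiple phone numbers), but the point stands: this is decades-old string normalization, not a hard problem.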
In 1987 the economist Robert Solow said "You can see the computer age everywhere but in the productivity statistics".
We should note he said this long before the internet, web, and mobile, so the remark probably needs an update.
However, I think it cuts through the salesmen's hype. Anytime we see these kinds of claims we should reply "show me the numbers". I'll wait until economists make these big claims, and I will not trust CEOs and salesmen.
Only if you want to add "internet, web, and mobile" before "age". Otherwise it doesn't need any change.
But that phrase is about the productivity statistics, not about computers or actual productivity.
The problem with computers not changing the productivity statistics is one of the great mysteries economists argue about. It's very clear nowadays that there are problems on both the "statistics" and "productivity" sides of it, but the internet, web, and mobile didn't change anything.
AI also helps immensely in creating those other inefficiencies.
That said: it’s one type of work that is getting dramatically cheaper. The debate is about the scope and quality of that labor, not whether it’s cheap or fast (it is). But if negatives (errors, faults) compound, and the correction can NOT be done with the same tools, then you must still have humans triage errors. In my experience, bad code can already have negative value (it costs more to fix than to rewrite).
In the medium term, the actual scope and ability for different tasks will remain unknown. It takes a lot of time to gather the experience to tell if something was a bad idea – just look at the graveyard of design patterns, languages and software practices. Many of them enjoyed the spotlight for a decade before the fallout hit.
Anyway, while the abilities are unknown, AI will be used everywhere for everything – which is only wise if it’s truly better at every general task – despite all available data showing vastly different ability across domains/problem types. Many of those uses will be both (a) worse than humans and (b) expensive to reverse, with compounding effects.
The funny thing is I have already seen enthusiasts basically acknowledging this but explaining that accepting those compounding issues (think tech debt) is the right choice now because better AI will fix them in the future. To me, this feels like the early formation of a religion (not even metaphorically). And I have a feeling that the goalpost-moving from both sides will lead to an unfalsifiability deadlock in the debate.
This eventually changed. Companies do figure out how to use tech, it just takes a while.
Content producers are blocking scrapers of their sites to prevent AI companies from using their content. I would not assume that AI is either inevitable or on an easy path to adoption. AI certainly isn't very useful if what it "knows" is out of date.
It may help you build a real product feature quicker, but AI is not necessarily doing the research and product design which is probably the bottleneck for seeing real impact.
Maybe overall complexity creeping up rolls over any small gains, or devs are becoming lazier and just copy-paste LLM output without a serious look at it?
My company didn't even adopt or allow the use of LLMs in any way for anything so far (private client data security is more important than any productivity gains, which seem questionable anyway when looking around... and serious data breaches can easily end up with fines in the hundreds-of-millions ballpark).
Having worked on software infrastructure, it’s a thankless job. Your most heroic work has little visibility, and the result is that nothing catastrophic happened.
So maybe products will have better reliability and fewer bugs? And we all know there’s crappy software that makes tons of money, so there isn’t necessarily a strong correlation.
I think the reality is less like a switch and more like there are just certain jobs that get easier and you just need fewer people overall.
And you DO see companies laying off people in large numbers fairly regularly.
Sure but, so far, layoffs have happened too regularly to be AI-gains-driven (at least in software). We have some data on software job postings, and the job apocalypse, with its corresponding layoffs, coincided with the end of ultra-low interest rates. If AI had an effect this year or last, it's quite tiny in comparison.
https://fred.stlouisfed.org/graph/?g=1JmOr
So one can argue more is to come, but it's hard to see how it's had a real effect on jobs/layoffs thus far.
That doesn't mean it isn't a real productivity gain, but it might be spread across enough domains (bugs, features, internal tools, experiments) to not be immediately or "painfully obvious".
It'll probably get more obvious if we start to see uniquely productive small teams seeing success. A sort of "vibe-code wonder".
Firstly, the capex is currently too high for all but the few.
This is a rather obvious statement, sure. But the impact is a lot of companies "have tried language models and they didn't work", and the capex is laughable.
Secondly, there's a corporate paralysis over AI.
I received a panicky policy statement written in legalese forbidding employees from using LLMs in any form. It was written both out of a panic about intellectual property leaking and out of a panic about how to manage and control staff going forward.
I think a lot of corporates still clutch at the view that AI will push workforce costs down and are secretly wasting a lot of money failing at this.
The waste is extraordinary, but it's other people's money (actually the shareholders' money), and it's seen as being all for a good cause and not something to discuss after it's gone. I can never get it discussed.
Meanwhile, at a grassroots level, I see AI being embraced and improving productivity - every second IT worker is using it. It's just that, because of this corporate panicking and mismanagement, its value is not yet measured.
> This is a rather obvious statement,
Nobody is saying companies have to make LLMs themselves.
SaaS is a thing.
In regards to private LLMs, the situation has become disappointing in the last 6 months.
I can only think of Mistral as being a genuine vendor.
But given the limitations in context window size, fine tuning is still necessary, and even that requires capex that I rarely see.
But my comment comes from the fact that I heard from several sources, smart people say "we tried language models at work and it failed".
However in my discussion with them, they have no concept of the size of the datacentres used by the webscalers.
The Google web-based office productivity suite is similar. I heard a rumor that at some point Google senior mgmt said that nearly all employees (excluding accounting) must use Google Docs. I am sure they fixed a huge number of bugs and added missing/blocking features, which made the product much more competitive vs MSFT Office. Fifteen years ago, Google Docs was a curiosity -- an experiment in just how complex web apps could become. Today, Google Docs is the premier choice for new small businesses. It is cheaper than MSFT Office, and "good enough".
The tools are often cringe because the capex was laughable. E.g. one solution, the trial was done using public LLMs and then they switched over to an internally built LLM which is terrible.
Or, secondly, the process is often cringe because the corporate aims are laughable.
I've had an argument with a manager making a multi-million dollar investment in a zero coding solution that we ended up throwing in the bin years later.
They argued that they are going with this bad product because "they don't want to have to manage a team of developers".
When I pushed back, they responded, "this product costs millions of dollars, how dare you?"
How dare me indeed...
They promptly left the company but it took 5 years before it was finally canned, and plenty of people wasted 5 years of their career on a dead-end product.
Worker productivity is secondary to business destruction, which is the primary event we're really waiting for.
So let me keep it real, I am shorting Atlassian over the next 5 years. Asana is another, there's plenty of startup IPOs that need to be shorted to the ground basically.
I think that this sentiment, along with all of the hype around AI in general, is failing to grasp a lot of the complexity around software creation. I'm not just talking about writing the code for a new application - I'm talking about maintaining that application, ensuring that it executes reliably and correctly, thinking about the features and UX required to make it as frictionless as possible (and voice input isn't the solution there, I'm very confident of that).
I'll be here in a year, we can have this exact discussion again.
"AI" is not going to wholesale replace software development anytime soon, and certainly not within a year's time because of the reasons I mentioned. The way you worded your post made it sound like you believed that capability was already here - nevertheless, whether you think it's here now or will be here in a year, both estimates are way off IMO.
Realistically though, they might incorporate that high schooler's software into Jira, to make it even more bloated, and they will sell it to your employer soon enough! Then team lead Chris will enter your birthday and your vacation days into it too, to enable it to also do vacation planning, without asking you. Next thing you know, Atlassian sells you out and you receive unsolicited AI calls about your holiday planning.
In smaller businesses some roles won’t need to be hired anymore.
Meanwhile in big corps, some roles may transition from being the source of presumed expertise to being one neck to choke.
I’d love it not to be true, but the truth is Jira is to projects what Slack/Teams are to messaging. When everybody is a project manager Jira gets paid more, not less.
When I used a not-so-simple LLM to make it act as a text adventure game, it could barely keep track of the items in my inventory, so TBH I am a little bit skeptical that an LLM can handle entire project management - even without voice.
Perhaps it might be able to use tools/MCP/RPC to call out to real project management software and pretend to be your accountant/manager/whoever, but I wouldn't call that the LLM itself doing the project management task - and someone would need to write that project management software.
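To be concrete, the pattern I mean would look something like this sketch: the authoritative state lives in ordinary code, and the LLM only calls tools to mutate it, rather than trying to remember the inventory in its context. The tool names here are hypothetical, not any particular MCP API:

```python
# Authoritative game state lives outside the model.
inventory: set[str] = set()

def add_item(item: str) -> str:
    inventory.add(item)
    return f"Added {item}. Inventory: {sorted(inventory)}"

def drop_item(item: str) -> str:
    inventory.discard(item)
    return f"Dropped {item}. Inventory: {sorted(inventory)}"

# These would be registered as callable tools; every turn the LLM
# narrates, but any inventory it reports comes from this state,
# not from whatever it happens to recall from earlier turns.
TOOLS = {"add_item": add_item, "drop_item": drop_item}

print(TOOLS["add_item"]("rusty key"))
print(TOOLS["drop_item"]("rusty key"))
```

In that setup the bookkeeping is done by plain software, and the LLM is reduced to a natural-language front end over it.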
We just have to wait for the cards to flip, and that’s happening on a quadratic curve (some say exponential).
The more likely scenario is that if those tools made developers so much more productive, we would see a large surge in new companies, with 1 to 3 developers creating things that were previously deemed too hard for them to do.
But it's still possible that we didn't give people enough time yet.
Note: I’m talking about your run-of-the-mill SE wagie work, not startups where your food is based on your output.
The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.
AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. Most likely, and probably what happens on average, your choices are about as good as random choice. So you end up with the same proportion of shitty workers as you had before. At the very worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
So, what you're describing is a mythical situation for me. But - US corporations are fabulously rich, or perhaps I should say highly-valued, and there are lots of investors to throw money at things I guess, so maybe that actually happens.
Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.
I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.
That covers majority of sales, advertising and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.
It's the people who are constantly working, too busy to be seen, producing output and keeping the lights on, who don't have time for the "games" - they are who AI is going for. Their jobs are easier to define since they are productive and do "something", so it's easy to market AI products for these use cases. After all, these people are usually not the ones in charge of the purse strings in most organisations, for better or worse.
Management will be thrilled.
We can, together, overcome such challenges when we accept that "The purpose of a system is what it does".
A system is a tool, it does have a use/purpose in the simplistic sense. But how we use the tool is ultimately the crux of the issue, for we can use that hammer to build houses or tear them down, or to build concentration camps or use it simply to injure someone directly.
No, the purpose of a tool/system is generally determined by the guiding philosophy of the user or society. Unfortunately, society has replaced its philosophy (at least in America) with the economic system of capitalism; i.e., capitalism for capitalism's sake.
It's about protecting your work, even if an LLM can do it better.
The only way an LLM can devalue your work is if it can do it better than you. And I don't just mean quality, I mean as a function of cost/quality/time.
Anyway, we can be enemies, I don't care - I've been getting rid of roles that aren't useful anymore as much as I can. I do care that it affects people personally, but I want them to be doing something more useful for us all, whatever that may be.
Caring about climate change doesn't mean you need to spend your entire life planting trees instead of doing what you're doing.
Most criticisms I see of management consulting seem to come from the perspective, which I get the sense you subscribe to, that management strategy is broadly fake so there's no underlying thing for the consultants to do better or worse on. I don't think that's right, but I'm never sure how to bridge the gap. It'd be like someone telling me that software architecture is fake and only code is real.
That said, how would we measure if our KPMG engagement worked or not? There's no control group company, so any comparison will have to be statistical or vibes-based. If there is a large enough sample size this can work: I'm sure there is somebody out there who can prove management consulting works for dentist practices in mid-size US cities or whatever, though any well-connected group that discovers this information can probably make more money by just doing a rollup of them. This actually seems to be happening in many industries of this kind. Why consult on how to be a more profitable auto repair business when you can do a leveraged buyout of 30 of them, make them all more profitable, and pocket that insight yourself? I can understand if you're a poorly-connected individual short on capital, but the big consulting firms are made up entirely of well-connected people who rub elbows with rich people all day.
Fundamentally, there will never be enough data to prove that IBM engaging McKinsey on AI in 2025 will have made any difference in IBM's bottom line. There's only one IBM and only one 2025!
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a god-send, but new grads and entry-level roles are kind of screwed.
The whole idea of interns is that they are training positions. They are supposed to be a net negative.
The idea is that they will either remain at the company, after their internship, or move to another company, taking the priorities of their trainers, with them.
But nowadays, with corporate HR actively doing everything they can to screw over their employees, and employees being so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.
At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers, upon returning to Japan. It was well worth it.
Startups are less enlightened than that about "interns".
Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to the startup, for... $20/hour, and get called an intern.
It's also normal for these startup job posts to be looking for experienced professional-grade skills in things like React, Python, PG, Redis, etc., and still calling the person an intern, with a locally unlivable part-time wage.
Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things they can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those "interns" are really technical cofounders.
Damn, I wish that was me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing if you're on the right track, is a massive force multiplier that pays massive dividends over your career. It's like entering the stock market with $1 million in capital vs $100. You're also less likely to build bad habits if somebody with experience teaches you early on.
They are a marquee company, and get the best of the best, direct from top universities.
Also, no one has less than a Master's, over there.
We got damn good engineers as interns.
I feel this is pretty much the norm everywhere in Europe and Asia. No serious engineering company in Germany even looks at your resume if there's no MSc degree listed, especially since education is mostly free for everyone, so not having a degree is seen as a "you problem". But it also leads to degree inflation, where only PhDs or post-docs get taken seriously for some high-level positions. I don't remember ever seeing a senior manager/CTO without the "Dr." or even "Prof. Dr." title at the top German engineering companies.
I think mostly the US has the concept of the cowboy self taught engineer who dropped out of college to build a trillion dollar empire in his parents garage.
Also because US salaries are sky high compared to their European counterparts, so I could understand if the extra salary wasn’t worth the risk that they might not have that much extra productivity.
I’ve certainly worked with advanced degree people who didn’t seem to be very far along on the productivity curve, but I assume it’s like that for everything everywhere.
There’s no such a thing as loyalty in employer-employee relationships. There’s money, there’s work and there’s [collective] leverage. We need to learn a thing or two from blue collars.
A majority of my friends are blue-collar.
You might be surprised.
Unions are adversarial, but the relationships can still be quite warm.
I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.
It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.
It can be done. It has been done.
It's just kind of lost, in tech.
Because you can't offshore your clogged toilet or broken HVAC issue to someone abroad for cheap on a whim like you can with certain cases in tech.
You're dependent on a trained and licensed local showing up at your door, which gives him actual bargaining power, since he's only competing with the other locals to fix your issue and not with the entire planet in a race to the bottom.
Unionization only works in favor of the workers in the cases when labor needs to be done on-site (since the government enforces the rules of unions) and can't be easily moved over the internet to another jurisdiction where unions aren't a thing. See the US VFX industry as a brutal example.
There are articles discussing how LA risks becoming the next Detroit, with many of the successful blockbusters of 2025 now being produced abroad due to the obscene costs of production in California, caused mostly by the unions there. Like $350 per hour for a guy to push a button on a smoke machine, because only a union man is allowed to do it. Or that it costs more to move across a Cali studio parking lot than to film a scene in the UK. Letting unions bleed companies dry is only gonna result in them moving all the jobs that can be moved abroad.
Yet. You can’t yet. Humanoids and VR are approaching the point quite rapidly where a teleoperated or even autonomous robot will be a better and cheaper tradesman than Joe down the road. Joe can’t work 24 hours a day. Joe realises that, so he’ll rent a robot and outsource part of his business, and will normalise the idea as quickly as LLMs have become normal. Joe will do very well, until someone comes along with an economy of scale and eats his breakfast.
IMO, real actual people don’t want to live in the world you described. Hell, they don’t wanna live in this one! The “elites” have failed us. Their vision of the future is a dystopian nightmare. If the only reason to exist is to make 25 people at the top richer than gods? What is the fucking point of living?
You just described most medieval societies.
It's been done before, and those 25 people are hoping to make it happen again.
Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.
A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?
I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.
This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.
An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago or some important water-cooler conversation and incorporate that into their work. AI cannot do that.
AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.
At $70-80k the math for a new grad works out, but not at almost double that.
Also, going remote first during COVID for extended periods proved that operations can work in a remote first manner, so at that point the argument was made that you can hire top talent at American new grad salaries abroad, and plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country or get fired and try to find a job in 60 days around early-mid 2020.
The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe, where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method using induction if you can't tell me how threading works or the rings in the Linux kernel.
The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).
That doesn’t fit my experience at all. The applied-math-vs-engineering continuum mostly depends on whether a CS program at a given school came out of the engineering department or the math department. I haven’t noticed any shift on that spectrum coming from CS departments, except that people are more likely to start out programming in higher-level languages where they are more insulated from the hardware.
That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.
Absolutely, but that's if they are exposed to these concepts, and that's become less the case beyond maybe a single OS class.
> except that people are more likely to start out programming in higher level languages where they are more insulated from the hardware
I feel that's part of the issue, but also, CS programs in the US are increasingly making computer architecture an optional class. And network specific classes have always been optional.
---------
Mind you, I am biased towards Cybersecurity, DevOps, DBs, and HPC because that is the industry I've worked on for over a decade now, and it legitimately has become difficult hiring new grads in the US with a "NAND-to-Tetris" mindset because curriculums have moved away from that aside from a couple top programs.
Based on your domain, I think a big part of what you’re seeing is that over the last 15 years there was a big shift in CS students away from people who are interested in computers towards people who want to make money.
The easiest way to make big bucks is in web development, so that’s where most graduates go. They think of DBA, devops, and cybersecurity as low status. The “low status” of those jobs becomes a bit of a self fulfilling prophecy. Few people in the US want to train for them or apply to them.
I also think that the average foreign worker doing these jobs isn’t equivalent to a new grad in the US. The majority have graduate degrees and work experience.
You could hire a 30 year old US employee with a graduate degree and work experience too for your entry level job. It would just cost a lot more.
Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.
Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.
The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.
But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.
So... why exactly is the entry level screwed?
Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.
I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.
What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.
But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.
You don’t need managers, or CEOs. You don’t even need VCs.
Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.
But, eh, what am I even talking about? The _senior_ developers in a many companies need a lot of hand-holding that they aren't getting, write bad code, with poor practices, and teach the newbies how to get used to doing that. So that's why the entry-level people are screwed, AI or no.
But if the purpose of an internship is to learn how to work in a company while producing some benefit for the company, I think everything gets better. Just like we don’t measure today’s interns by words per minute typed, I don’t think we’ll measure tomorrow’s interns by lines of code hand-written.
So much of the doom here comes from a thought process that goes “we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk.”
Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.
You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.
Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.
Many reasons why AI can't fill your shoes:
1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.
2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.
Juniors and interns can also use AI tools.
Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.
It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.
It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.
This feels like the ultimate pulling up the ladder after you type of move.
This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost cutting. Suppose a tech bubble pops and there are mass layoffs (like the dotcom bubble): obviously people will lose their jobs. AI hype merchants will almost certainly push the narrative that those losses are from AI advancements, in an effort to retain funding.
I've been interviewing marketing people for the last few months (I have a marketing background from long ago), and the senior people were either way too expensive for our bootstrapped start-up, or not of the caliber we want in the company.
At the same time, there are some amazing recent grads and even interns who can't get jobs.
We've been hiring the younger group, and contracting for a few days a week with the more experienced people.
Combine that with AI, and you've got a powerful combination. That's our theory anyway.
It's worked pretty well with our engineers. We are a team of 4 experienced engineers, though as CEO I don't really get to code anymore, and 1 exceptional intern. We've just hired our 2nd intern.
1. Because, generally, they don't.
2. Because an LLM is not a person, it's a chatbot.
3. "Hire an intern" is that US thing when people work without getting real wages, right?
Grrr :-(
You’re probably not going to transform your company by issuing Claude licenses to comfortable middle-aged career professionals who are emotionally attached to their personal definition of competency.
Companies should be grabbing the kids who just used AI to cheat their way through senior year, because that sort of opportunistic short-cutting is exactly what companies want to do with AI in their business.
The AI will definitely require handholding. And that hand-holder will be an intern or a recent college-grad.
There have never been that many businesses able to hire novices for this reason.
Programming is a craft, and just like any other, the best time to learn it is when it's free to learn.
A company that I know of is also having an L3 hiring freeze, and some people are being downgraded from L4 to L3 or L5 to L4. Getting more work for less cost.
AI can barely provide the code for a simple linked list without dropping NULL pointer dereferences every other line...
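For contrast, the null-safety being asked for is a single guard. Here is a minimal sketch in Python, where the hazard is dereferencing `None` rather than a NULL pointer:

```python
from typing import Optional

class Node:
    def __init__(self, value: int, next: "Optional[Node]" = None):
        self.value = value
        self.next = next

def find(head: Optional[Node], target: int) -> Optional[Node]:
    """Walk the list, guarding against the empty-list and end-of-list
    cases that a careless generation tends to dereference."""
    node = head
    while node is not None:  # the guard in question
        if node.value == target:
            return node
        node = node.next
    return None              # explicit miss instead of a crash

head = Node(1, Node(2, Node(3)))
print(find(head, 2).value)  # 2
print(find(None, 2))        # None, not a crash
```

The guard is one line; a tool that keeps omitting it on a textbook exercise does not inspire confidence.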
Been interviewing new grads all week. I'd take a high performing new grad that can be mentored into the next generation of engineer any day.
If you don't want to do constant hand holding with a "meh" candidate...why would you want to do constant hand holding with AI?
> I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now.
Not sure what you are working on. I would never prioritize speed over quality - but I do work in a public safety context. I'm actually not even sure of the legality of using an AI for design work but we have a company policy that all design analysis must still be signed off on by a human engineer in full as if it were 100% their own.
I certainly won't be signing my name on a document full of AI slop. Now an analysis done by a real human engineer with the aid of AI - sure, I'd walk through the same verification process I'd walk through for a traditional analysis document before signing my name on the cover sheet. And that is something a jr. can bring to me to verify.
The same thing will happen to Gen Z because of AI.
In both cases, the net effect of this (and the desired outcome) is to suppress wages. Not only of entry-level job but every job. The tech sector is going to spend the next decade clawing back the high costs of tech people from the last 15-20 years.
The hubris here is that we've had an unprecedented boom, such that many in the workforce have never experienced a recession - what I'd call "children of summer" (to borrow a George R.R. Martin'ism). People have fallen into the trap of the myth of meritocracy. Too many people think that those who are living paycheck to paycheck (or are outright unhoused) are somehow at fault, when spiralling housing costs, limited opportunities, and stagnant real wages are pretty much responsible for everything.
All of this is a giant wealth transfer to the richest 0.01% who are already insanely wealthy. I'm convinced we're beyond the point where we can solve the problems of runaway capitalism with electoral politics. This only ends in tyranny of a permanent underclass or revolution.
I spend a lot of time encouraging people to not fight the tide and spend that time intentionally experimenting and seeing what you can do. LLMs are already useful and it's interesting to me that anybody is arguing it's just good for toy applications. This is a poisonous mindset and results in a potentially far worse outcome than over-hyping AI for an individual.
I am wondering if I should actually quit a >500K a year job based around LLM applications and try to build something on my own with it right now.
I am NOT someone that thinks I can just craft some fancy prompt and let an LLM agent build me a company, but I think it's a very powerful tool when used with great intention.
The new grads and entry level people are scrappy. That's why startups before LLMs liked to hire them. (besides being cheap, they are just passionate and willing to make a sacrifice to prove their worth)
The ones with a lot of creativity have an opportunity right now that many of us did not when we were in their shoes.
In my opinion, it's important to be technically potent in this era, but it's now even more important to be creative - and that's just what so many people lack.
Sitting in front of a chat prompt and coming up with an idea is hard for the majority of people that would rather be told what to do or what direction to take.
My message to the entry-level folks that are in this weird time period. It's tough, and we can all acknowledge that - but don't let cynicism shackle you. Before LLMs, your greatest asset was fresh eyes and the lack of cynicism brought upon by years of industry. Don't throw away that advantage just because the job market is tough. You, just like everybody else, have a very powerful tool and opportunity right in front of you.
The amount of people trying to convince you that it's just a sham and hype means that you have less competition to worry about. You're actually lucky there's a huge cohort of experienced people who have completely dismissed LLMs because they were too egotistical to spend meaningful time evaluating and experimenting with them. LLM capabilities are still changing every 6 to 12 months. Anybody who has decided concretely that there is nothing to see here is misleading you.
Even in the current state of LLMs, if the critics don't see the value and how powerful it is, it's mostly a lack of imagination at play. I don't know how else to say it. If I'm already able to eliminate someone's role by using an LLM, then it's already powerful enough in its current state. You can argue that those roles were not meaningful or important, and I'd agree - but we as a society are spending trillions on those roles right now and would continue to do so if not for LLMs.
Just as the internet was a democratization of information, LLMs are a democratization of output.
That may be in terms of production or art. There is clearly a lower barrier for achieving both now compared to pre-LLM. If you can't see this then you don't just have your head stuck in the sand, you have it severed and blasted into another reality.
The reason why you reacted in such a way is again, a lack of imagination. To you, "work" means "employment" and a means to a paycheck. But work is more than that. It is the output that matters, and whether that output benefits you or your employer is up to you. You now have more leverage than ever for making it benefit you because you're not paying that much time/money to ask an LLM to do it for you.
Pre-LLM, most for-hire work was only accessible to companies with a much bigger bank account than yours.
There is an ungodly amount of white collar workers maintaining spreadsheets and doing bullshit jobs that LLMs can do just fine. And that's not to say all of those jobs have completely useless output, it's just that the amount of bodies it takes to produce that output is unreasonable.
We are just getting started getting rid of them. But the best part of it is that you can do all of those bullshit jobs with an LLM for whatever idea you have in your pocket.
For example, I don't need an army of junior engineers to write all my boilerplate for me. I might have a protege if I am looking to actually mentor someone and hire them for that reason, but I can easily also just use LLMs to make boilerplate and write unit tests for me at the same time. Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.
The junior engineer can also do this too, albeit in most cases less effectively.
That's democratization of work.
In your "5% unemployment" world you have many more gatekeepers and financial barriers.
I write code to drive hardware, in an unusual programming style. The company pays for Augment (which is now based on o4, which is supposed to be really good?!?). It's great when I type print_debug( - at that point it often guesses right as to which local variables or parameters I want to debug, but not always. And it can often get the loop iteration part correct if I need to, for example, loop through a vector. The couple of times I asked it to write a unit test? Sure, it got the basic function call / lambda setup correct, but the test itself was useless. And a bunch of times it brings back code I was experimenting with 3 months ago and never kept / committed, just because I'm at the same spot in the same file.
I do believe that some people are having reasonable outcomes, but it's not "out of the box" - and it's faster for me to write the code I need to write than to try 25 different prompt variations.
Thanks for sharing your perspective with ACTUAL details, unlike most people who have gotten bad results.
Sadly hardware programming is probably going to lag or never be figured out because there's just not enough info to train on. This might change in the future when/if reasoning models get better but there's no guarantee of that.
> which is now based on o4
"Based on o4" and "is o4" are two different things. Augment says this: https://support.augmentcode.com/articles/5949245054-what-mod...
> Augment uses many models, including ones that we train ourselves. Each interaction you have with Augment will touch multiple models. Our perspective is that the choice of models is an implementation detail, and the user does not need to stay current with the latest developments in the world of AI models to fully take advantage of our platform.
Which IMO is... a cop-out, a terrible take, and just... slimy. I would not trust a company like this with my money. For all you know they are running your prompts against a shitty open-source model running on a 3090 in their closet. The lack of transparency here is concerning.

You might be getting bad results for a few reasons:
- your prompts are not specific enough
- your context is poisoned. How strategically are you providing context with the prompt? A good trick is to give the LLM an existing file as an example of how you want the output to look and tell it "Do X in the style of Y.file" (see the sketch after this list). Don't forget that with the latest models and huge context windows you could very well provide entire subdirectories as context (although I would still recommend being pretty targeted)
- the model/tool you're using sucks
- you work in a problem domain that LLMs are genuinely bad at
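To illustrate the "Do X in the style of Y.file" trick, here's a minimal sketch of what assembling that kind of targeted prompt amounts to; the file names are hypothetical:

```python
from pathlib import Path

def build_prompt(task: str, style_file: str) -> str:
    """Pair the task with one existing file as a style/structure example,
    instead of dumping the whole repo into context."""
    example = Path(style_file).read_text()
    return (
        f"{task}\n\n"
        f"Match the conventions of the following file:\n"
        f"--- {style_file} ---\n{example}"
    )

# Hypothetical usage: one well-chosen example beats a firehose of context.
prompt = build_prompt(
    task="Write a unit test for parse_config().",
    style_file="tests/test_loader.py",
)
```

The point isn't the code, it's the discipline: the model sees your task plus one concrete exemplar of the conventions you want, and nothing else.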
Note: your company is paying a subscription to a service that isn't allowing you to bring your own keys. They have an incentive to optimize and make sure you're not costing them a lot of money. This could lead to worse results. See here for the Cline team's perspective on this topic: https://www.reddit.com/r/ChatGPTCoding/comments/1kymhkt/clin...
I suggest this as the bare minimum for the HN community when discussing their bad results with LLMs and coding:
- what is your problem domain
- show us your favorite prompt
- what model and tools are you using?
- are you using it as a chat or an agent?
- are you bringing your own keys or using a service?
- what did you supply in context when you got the bad result?
- how did you supply context? copy paste? file locations? attachments?
- what prompt did you use when you got the bad result?
I'm genuinely surprised when someone complaining about LLM results provides even 2 of those things in their comment.

Most of the cynics would not provide even half of this because it'd be embarrassing and reveal that they have no idea what they are talking about.
> But how is AI supposed to replace anyone when you have either to get lucky or to correctly set up all these things you write about first? Who will do all that and who will pay for it?
I mean... I'm doing it and getting paid for it, so...
In other words, did the AI actually replace you in this case? Do you expect it to? Because people clearly expect it, then we have such discussions as this.
good luck with that
So basically you're not trying it out. Please just put it down, you have nothing interesting to say here
> Previously I would have had to have 1 million dollars sitting around to fund the amount of output that I am able to produce with a $20 subscription to an LLM service.
this sounds like the death of employment and the start of plutocracy
not what I would call "democratisation"
Well, I've said enough about cynicism here so not much else I can offer you. Good luck with that! Didn't realize everybody loved being an employee so much
so, employee or destitute? tough choice
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as a shorthand to create the impression of massive capability. It’s a form of marketing, as the CNN piece says.
Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.
Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.
At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite
I think this situation is very similar in terms of the underestimation of the scope of application, but it differs in the availability of new job categories - though that may be me underestimating new categories which are as yet as unforeseen as stokers and train conductors once were.
For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.
Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.
Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).
On an aggregate level this is true and contrary to the prevailing sentiment of doomer skepticism, the developed world is usually still the best place to do it. On an individual level, a lot of things can go wrong between here and a million dollars.
You know what's hard? Moving from a poor "shithole" to a wealthy country, with expensive accommodation, where a month of rent is something you'd save up months for.
Knowing and displaying (faking really) 'correct' cultural status signifiers to secure a good job. And all the associated stress, etc.
Moving the other direction to a low-cost-of-living or poor shithole country is extremely easy in comparison with a fat stack of resources.
You literally don't have to worry about anything in the least.
So basically once you are rich, you have to choose to leave most of it on the table to go to a poor country.
Same goes for employees with stock options in the USA: they get hit with capital gains tax every year until they sell, for money they don't have yet.
Same goes for development costs: a change in the US tax code circa 2016 meant that development costs were treated as an investment amortized over 3 years, so if you have $1m in sales and $1m in costs in the first year, the IRS only counts $333k of real costs and you owe tax on the remaining $666k as if it were profit.
It’s a classic problem in capital. So yes, a 300k€ revenue means you are valued at a multiple of that and owe tax.
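A quick back-of-the-envelope version of that amortization example, using the comment's own 3-year straight-line assumption (the actual statute's schedule and conventions differ in detail):

```python
# Back-of-the-envelope illustration of the amortization effect above.
sales = 1_000_000        # first-year sales
dev_costs = 1_000_000    # all development spend happens in year one
years = 3                # amortization period assumed in the comment

deductible_year_one = dev_costs / years        # ~333k counted this year
taxable_income = sales - deductible_year_one   # ~667k of paper profit

print(f"deductible this year: {deductible_year_one:,.0f}")  # 333,333
print(f"taxable 'profit':     {taxable_income:,.0f}")       # 666,667
```

So a company that actually broke even shows two-thirds of its sales as taxable income in year one. The deduction does come back in later years, but the cash-flow hit lands immediately.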
I think exercising an option may be an event that realizes gains and causes issues for someone who has to pay tax but can't sell the asset because it is not liquid. But I think that isn't what you are talking about?
As for revenue: many companies are priced at X times revenue, but those are companies you expect to grow. If a company raises $1m and sells AI tokens for $1m/yr (the revenue) in order to "dominate the market", but pays AWS $2m/yr to serve them and can't raise more or increase prices, then that startup is probably worth nothing. For example.
Another example is a bar that sells $1m of drinks and food (revenue), makes $500k gross, and $50k net after staff, rent, taxes, etc.
> make $1mm in a rich country and move to a poorer country and chill if you so desire
I wonder if such trends are good for said poorer country (e.g., real estate costs) in the long run?

Fun fact most people ignore: around ~7,000 people have been on the summit of Mount Everest, while the US alone has around 300,000-350,000 people earning more than 1 million USD a year.
So - it's clear: it's easier to become an "income-millionaire" than to climb Mount Everest! :-)
So - pick your opportunity! :-D
You have to always keep on moving just to stay in the same place.
Sure it is painful but a ZIRP economy doesn't listen to the end consumers. No reason to innovate and create crazy ideas if you have plenty of income.
Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?
People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.
If these mechanisms you mention are in place and functioning, why is there, for example, such large growth of the economic inequality gap?
What makes you think people haven’t made back up plans?
Or are you saying government needs to do it for us?
History has shown us quite clearly what happens if governments, and not individuals, are responsible for finding employment.
They should all just find a way to be set for life within the next 3 years - is this your proposal?
I don't think this 3-year timeline is realistic, and what we're going to do in 20 years is impossible to predict.
What’s a better alternative?
You might think that we are collectively responsible for solving climate change, and someone else might think that we are collectively responsible for ending the murder of unborn children via abortion or any number of other things.
So who gets to be the dictator of whom? If we are all going to live together in harmony, we have to be tolerant of diversity.
That doesn't take away any freedom from you to take responsibility. And it preserves other's freedom as well.
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI companies CEOs are creating a false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation model capabilities vs. finite resources such as data, compute and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could even have written about the giant biotech industry and its struggle to incorporate novel, exciting drug-discovery tools under slow-moving FDA approvals. None of this was done here.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
We can also look at the tools, which have improved relatively quickly but don't appear to be improving exponentially. GPT-4 and GPT-4o came out about a year after their predecessors. Is GPT-4o a bigger leap than GPT-4 was? Are GPT-4.5 or 4.1 a bigger leap than GPT-4 was? I honestly don't know, but the general reception suggests otherwise. The biggest leaps recently seem to be models that perform roughly as well as past ones but are much smaller. That has advantages for democratization and energy consumption, but those kinds of improvements seem to favor a situation where AI augments workers rather than replaces them.
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence and justification for a position. Arguing "by analogy" in this way, which picks and chooses an analogy, just restates your position -- it doesn't give anyone reasons to believe it.
It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.
In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.
2 years ago it was cool but unreliable.
Today I just did an entire “photo shoot” in Midjourney.
Humans are. We have tools to measure exponential growth empirically. It was done for COVID (epidemiologists do that routinely) and is done for the economy and other aspects of our lives. If there's to be exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.
Edit: typo
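For what it's worth, "putting it in numbers" can be as simple as a log-linear fit (a sketch in Python with numpy; the data is made up):

    # Fit log(y) against time: if a straight line fits well, growth is
    # roughly exponential with per-step factor exp(slope).
    import numpy as np

    y = np.array([2.0, 4.1, 7.9, 16.2, 31.7, 64.5])  # hypothetical counts
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, np.log(y), 1)
    max_err = np.max(np.abs(np.log(y) - (slope * t + intercept)))
    print(f"~{np.exp(slope):.2f}x per step, worst log-residual {max_err:.3f}")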
"A person is smart. People are dumb, panicky dangerous animals and you know it."
What does this mean? What are you applying to the populace at large? Do you mean a populace doesn't model exponential change right?
We can have a constructive discussion instead. My problem was not actually parsing what you said. I'm questioning the assumption that a populace collectively modeling exponential change is really meaningful. You can, for example, describe what it looks like when a populace can model change exponentially. Is there any relevant literature on this subject that I can look into? Does this phenomenon have a name?
Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?
I like anthropic. They make a great product. Shame about their CEO - just another techbro pumping his scheme.
Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.
But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.
In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.
Try revisiting their content from spring of 2020 (flatten the curve, wild death predictions, etc).
> I guess the key is to not even have a political angle
It’s a fantasy to imagine technical knowledge allows you to transcend the political and 2020 only reinforced that.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
Besides the labor economist bit, it also makes the correct point that tech people regularly exaggerate and lie. A great example of this is biotech, a field I work in.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
It's not CNN-exclusive. News media that did not evolve towards clicks, riling people up, hatewatching, and paid propaganda for the highest bidder went extinct a decade ago. This is what did evolve.
Not just this topic.
We will wake up in 5 yrs to find we traded people for a dependence on a handful of companies that serve LLMs and make inference chips. It's beyond dystopian.
So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.
Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.
It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
When you consider how this interacts with the population collapse (which is inevitable now everywhere outside of some African countries) this seems even worse. In 20 years, we will have far fewer people under age 60 than we have now, and among that smaller cohort, the percentage of people at any given age who have useful levels of experience will be less because they may not be able to even begin meaningful careers.
Best case scenario, people who have gotten 5 or more years of experience by now (college grads of 2020) may scrape by indefinitely. They'll be about 47 then and have no one to hire that's more qualified than AI. Not necessarily because AI is so great; rather, how will there be someone with 20 years of experience when we simply don't hire any junior people this year?
Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.
This is my bet. Similar to Moores law. Where it plateaus is anybody’s guess…
Ironically a friend of mine noticed that the team in India they work with is now largely pushing AI-generated code... At that point you just need management to cut out the middleman.
Management will cut down your team's headcount and outsource even more to India, Vietnam, and the Philippines.
A CFO looks at the balance sheet, not the operations context; even if your idea is better, the opposite of what you think is likely going to happen very soon.
Management did all that at companies I've worked for, for years before 'AI'. The big change is that the teams in India won't be 200 developers, but 20 developers handholding an AI.
We've already eliminated certain junior-level domains essentially by design. There aren't any 'barber-surgeons' with only two years of training, for good reason. Instead we have integrated surgery into a lengthier, more complicated educational path to become what we would now consider a 'proper' surgeon.
I think the answer is that if the 'junior' is uneconomical or otherwise unacceptable be prepared to pay more for the alternative, one way or another.
Caveat that this is anecdotal, not sure if there are numbers on this.
If there's a shortage, in the free market, humans will retrain.
That said, the first thing that jumps to my mind is cars. Back when they were first introduced you had to be a mechanically inclined person to own one and deal with it. Today, people just buy them and hire the very small number of experts (relative to the population of drivers) to deal with any issues. Same with smartphones. The majority of users have no idea how they really work. If it stops working, they seek out an expert.
ATM, AI just seems like another level of that. JS/Python programmers don't need to know bits and bytes and memory allocation. Vibe coders won't need to know what JS/Python programmers need to know.
Maybe there won't be enough experts to keep it all going though.
Had to look that up: https://en.wikipedia.org/wiki/Turkey_illusion
This category is expansive enough to make fools of almost everyone on hn.
And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.
I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?
This feels a lot like the dot boom/dot bust era where a lot of new companies are going to sprout up from the ashes of all this disruption.
AI certainly will increase competition in some areas, but there are countless examples where being the best at something doesn't make you the leader.
50% of a group of workers losing their jobs to this tech is not a worrisome future for him. It's a pitch!
All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.
It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has fewer widgets.
It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.
There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.
They're probably believing any combination of these concepts.
It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.
In the future, if trends continue, there are going to be far fewer workers, since a huge portion of the population will be old and retired.
You can lose x% of the work force every year and keep unemployment stable...
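Back-of-envelope (toy rates, not a forecast; assumes retirements shrink the labor force about as fast as automation shrinks jobs):

    # If jobs and available workers both decline ~5%/yr, the
    # unemployment *rate* barely moves even as total work shrinks.
    workers, jobs = 100.0, 96.0
    for _ in range(20):
        jobs *= 0.95     # roles automated away
        workers *= 0.95  # retirees outpacing new entrants
    print(f"jobs={jobs:.1f}, workers={workers:.1f}, "
          f"unemployment={(1 - jobs / workers) * 100:.1f}%")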
A large portion of the population wants a lot more people to be able to not work and get entitlements...
It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.
Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.
It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?
If people don’t have jobs, government doesn’t have taxes to employ other people. If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.
Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.
Over the last few years, I've seen a few in use here in Berlin: https://www.alibaba.com/showroom/robot-waiter-for-sale.html
> or someone who restocks shelves at grocery stores
For physical retail, or home delivery?
People are working on this for traditional stores, but I can't tell which news stories are real and which are hype — after around a decade of Musk promising FSD within a year or so, I know not to simply trust press releases even when they have a video of the thing apparently working.
For home delivery, this is mostly kinda solved: https://www.youtube.com/watch?v=ssZ_8cqfBlE
> Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.
Sure… if they have the money.
But can we make an economy where all the stuff is free, and we're "working" n-hours a day smiling at bad jokes and manners of people we don't like, so we can earn money to spend to convince someone else who doesn't like us to spend m-hours a day smiling at our bad jokes and manners?
Wow. I genuinely didn't think robotic waiters would ever exist anytime soon.
> For physical retail, or home delivery?
I was thinking for physical retail. Thanks for the video link.
Tech-wise this could have existed 30 years ago (maybe going around the restaurant would have been more challenging than today but it’s a fixed path and the robots don’t leave the restaurant).
Wouldn't you have struggled to imagine most of what LLMs can now do 5 years ago?
These are three totally different jobs requiring different kinds of skills, but they will all be replaced with automation.
1. Plumber is a skilled trade, but the "skilled" parts will eventually be replaced with 'smart' tools. You'll still need to hire a minimum wage person to actually go into each unique home and find the plumbing, but the tools will do all the work and will not require an expensive tradesman's skills to work.
2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
But if you have to be trained in the use of a variety of 'smart' tools - that sounds like engineering to know what tool to deploy and how.
It's also incredibly optimistic about future tools - what smart tool fixes leaky faucets, hauls and installs water heaters, unclogs or replaces sewer mains, runs new pipes, does all this work and more to code, etc? There are cool tools and power tools and cool power tools out there, but vibe plumbing by the unskilled just fills someone's house with water or worse...
> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
Takeout culture is popular among GenZ, and we're more likely to see walk-up orders with online order ahead than a facsimile of table service.
Why would cheap restaurants buy robots and allow a dining room to go unmanned and risk walkoffs instead of just skipping the whole make-believe service aspect and run it like a pay-at-counter cafeteria? You're probably right that waiters will disappear outside of high-margin fine dining as labor costs squeeze margins until restaurants crack and reorganize.
>3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
Do-anything-like-a-human robots might crack that, but today it's still sci-fi. Humans are going to haul things from A to B for a bit longer, I think. I bet we see drive-up and delivery groceries win via lights-out warehouses well before "I, Robot" shelf stockers.
I'm not a plumber, but my background knowledge was that pipes can be really diverse and it could take different tools and strategies to fix the same problem for different pipes, right? My thought was that "robotic plumber" would be impossible for the same reasons it's hard to make a robot that can make a sandwich in any type of house. But even with a human worker that uses advanced robotic tools, I would think some amount of baseline knowledge of pipes would always be necessary for the reasons I outlined.
> 2. Waiter serving food, already being replaced with kiosks, and quite a bit of the "back of the house" cooking areas are already automated. It will only take a slow cultural shift towards ordering food through technology-at-the-table, and robots wheeling your food out to you. We've already accepted kiosks in fast food and self-checkout in grocery stores. Waiters are going bye-bye.
That's true. I forgot about fast-food kiosks. And the other person showed me a link to some robotic waiters, which I didn't know about. Seems kind of depressing, but you're right.
> 3. Shelf restocking, very easy to imagine automating this with robotics. Picking a product and packing it into a destination will be solved very soon, and there are probably hundreds of companies working on the problem.
The way I imagine it, to automate it, you'd have to have some sort of 3D design software to choose where all the items would go, and customize it in the case of those special display stands for certain products, and then choose where in the backroom or something for it to move the products to, and all that doesn't seem to save much labor over just doing it yourself, except the physical labor component. Maybe I just lack imagination.
They've already replaced part of that job at one of the grocery stores that I go to, there's a robot that checks the level of stock on the shelves, https://www.simberobotics.com/store-intelligence/tally.
I have already eaten at three restaurants that have replaced the vast majority of their service staff with robots, and they're fine at that. Do I think they're better than a human? No, personally, but they're "good enough".
I've seen this already at a pizza place. Order from a QR code menu and a robot shows up 20-25 minutes later at your table with your pizza. Wait staff still watched the thing go around.
Hey, is there a good board game in there somewhere? Serfs and Nobles™
Surely the modern history of decision making has been to move as much of it as possible away from humans and to algorithms, even "dumb" ones?
End of conversation.
I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...
That is exactly what blue collar economy used to be though: people making and fixing stuff for each other. White collar jobs is a new thing.
History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.
Maybe it will be different this time though.
But it is a myth. It has always been in the interest of the rulers and the old to try to impress on the serfs and the young how much better they have it.
Many of us, maybe even most of us, would be able to have fulfilling lives in a different age. Of course, it depends on what you value in life. But the proof is in the pudding: humanity is rapidly extinguishing itself in industrial society right now, all over the world.
Yes, the lives of "people selling stuff" will likely get better and better in the future, through technology, but the wellbeing of normal people seems to have peaked at around the year 2000 or so.
If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.
Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.
So, flooding those industries with more warm bodies probably won't help anything. I imagine it would make the already fucked labor relations even more fucked.
You forgot the born-wealthy.
I feel increasingly like a rube for not having focused my little entrepreneurial side-gigs strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps, but completed, for $10K. (There is no upper bound for audio equipment though, as I guess we all know.)
I briefly did a startup that was kind of a side-project of a guy whose main business was building yachts. Why was he OK with a market that just consisted of rich people? "Because rich people have the money!"
The rich were able to insulate themselves in space which is much harder to get to than some place on Earth. If the rich want to turtle up on some island because that's the only place they're safe, that's probably a better outcome for us all. They lose a lot of ability to influence because they simply can't be somewhere in person.
It also relies heavily on a security force (or military) being complicit, but they have to give those people a better life than average to make it worth it. Even those dumb MAGA idiots won't settle for moldy bread and leaky roofs. That requires more and more resources, capital, and land to sustain and grow it, which then takes more security to secure it. "Some rich dude controlling everything" has an exponential curve of security requirements and resources. This even comes down to how much land they need to be able to farm and feed their security guys.
All this assuming your personal detail and larger security force actually likes you enough, because if society has broken down to this point, they can just kill the boss and take over.
My prediction is that the poor will reinvent the guillotine
I still fail to see why people think we're going to innovate ourselves into global poverty, it makes no sense.
But 62% is very high. Keep in mind that number takes into account not only the elderly and disabled, but also children.
Pretty much everyone who can work is working. We don't want children to be working, that's bad. We should all be on the same page about that.
I'm sure we are, but it doesn't look like an improvement for most people.
It seems like we'll need to generate a lot more power to support these efficiency gains at scale, and unless that is coming from renewables (and even if it is) that cost may outweigh the gains for a long time.
I also respect the operational analysis, but the strategic, long-term view is that this will come, and it will only speed up everything else.
Your UBI will be controlled by the government, you will have even less agency than you currently have and a hyper elite will control the thinking machines. But don't worry, the elite and the government are looking out for your best interest!
In 2010, I put together a list of alternatives here to address the rise of AI and Robotics and its effect on jobs: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."
Sure there can be rich people who are radical enough to push for another phase of capitalism.
That’s a kind of a capitalism which is worse for workers and consumers. With even more power in the hands of capitalists.
It just happens that up to this point there have been things that couldn't be done by capital. Now we're entering a world where there isn't such a thing and it is unclear what that implies for the job market. But people not having jobs is hardly a bad thing as long as it isn't forced by stupid policy, ideally nobody has to work.
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
On the first point, unemployment during the Great Depression peaked at “only” about 25%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.
The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.
And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.
And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.
A good analogy would be web development's transition from C to Java to PHP to WordPress. I feel like it did make website creation more accessible for small businesses. OTOH a parallel trend was the mass-scale production of industry-specific platforms, such as Yahoo Shopping.
It’s not clear to me which trend won in the end.
It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.
Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.
Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.
> It'll be a slow burn, though.
Have you been watching the current developer market?
It's really, really rough out here for unemployed software developers.
One of which was the occupation of being a computer!
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% of human work will be consolidated and become the full-time job of 20% of the original headcount, while the remaining 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today) to still result in large-scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
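A toy model of that consolidation (my illustrative numbers, assuming the residual human work pools perfectly):

    # If AI reliably handles a fraction `ai_share` of each role and the
    # residual human work is pooled, headcount scales with what's left.
    def surviving_headcount(headcount, ai_share):
        return headcount * (1 - ai_share)

    print(surviving_headcount(100, 0.8))  # 100 people -> 20.0 full-time roles
    print(surviving_headcount(100, 0.5))  # 100 people -> 50.0 full-time roles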
As an employee, any efficiency gains you get from AI belong to the company, not you.
We’re further from UBI than we’ve ever been.
If you don’t snatch up the smartest engineers before your competition does: you lose.
Therefore at a certain level of company, hiring is entirely dictated by what the competition is doing. If everyone is suddenly hiring, you better start doing it too. If no one is, you can relax, but you could also pull ahead if you decide to hire rapidly, but this will tip off competitors and they too will begin hiring.
Whether or not you have any use for those engineers is irrelevant. So AI will have little impact on hiring trends in this market. The downturn we’ve seen in the past few years is mostly driven by the interest rate environment, not because AI is suddenly replacing engineers. An engineer using AI gives more advantage than removing an engineer, and hiring an engineer who will use AI is more advantageous than not hiring one at all.
AI is just the new excuse for firing or not hiring people, previously it was RTO but that hype cycle has been squeezed for all it can be.
> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.
The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?
The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.
You're describing it as if that was the "normal?"
[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...
1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder
1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc
2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr
2b. "vc" is a tenous term when all biglabs are >>10b valuation and raising from softbank or sovereign wealth. its no longer vc, its about reallocating capital from publics to privates because the only good ai co's are private
The point is that there's a correlation between macroeconomic dynamics (i.e., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.
So the question is: why do we even need to mention AI in our explanation of recent economic shifts?
What phenomena, exactly, require positing AI disruption?
Spinning that to say you're a "visionary" for replacing expensive employees with AI (even when it's clear we're not there yet) is risky, but a good enough smoke screen to distract the average bear from poking holes in your financials.
AI company CEOs trying to juice their stock valuations?
"Starting" is doing a hell of lot of work in that sentence. I'm starting to become a billionaire and Nobel Prize winner.
Anyway, I agree with Mark Cuban's statement in the article. The most likely scenario is that we become more productive as AI complements humans. Yesterday I made this comment on another HN story:
"Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises."
Suggests you are accumulating money, not losing it. That I think is the point of the original comment: AI is getting better, not worse. (Or humans are getting worse? Ha ha, not ha ha.)
Well, in order to meet the standard of the quote "wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years" we need more than just getting better. We need considerably better technology with a better cost structure to wipe out that many jobs. Saying we're starting on that task when the odds are no better than me becoming a billionaire within two years is what we used to call BS.
It wasn’t just Elon. The hype train on self driving cars was extreme only a few years ago, pre-LLM. Self driving cars exist sort of, in a few cities. Quibble all you want but it appears to me that “uber driver” is still a popular widespread job, let alone truck driver, bus driver, and “car owner” itself.
I really wish the AI CEOs would actually make my life easier. For example, why am I still doing the dishes, laundry, cleaning my house, paying for landscaping, painters, and on and on? In terms of white collar work I'm paying my fucking lawyers more than ever. Why don't they solve an actual problem?
TBH, I do think that AI can deliver on the hype of making tools with genuinely novel functionality. I can think of a dozen ideas off the top of my head just for the most-used apps on my phone (photos, music, messages, email, browsing). It's just going to take a few years to identify how to best integrate them into products without just chucking a text prompt at people and generating stuff.
Like in Europe, where you're forced to pay a notary to start a business - it's not really even necessary, never mind something that couldn't be automated; it's just part of the establishment propping up bureaucrats.
Whereas LLMs and generative models in art and coding for example, help to avoid loads of bureaucracy in having to sort out contracts, or even hire someone full-time with payroll, etc.
Do you have a specific country in mind? The statement is not true for quite a lot of EU member states... and likely untrue for most European countries.
Sure you'll have destroyed the company, but at least you'll have avoided bureaucracy.
Like in the US, you have a choice of which jurisdiction to start your company in. Not all require a notary.
Same as a washing machine / drier. Chuck the clothes in, press a button, done.
There are Roomba style lawnmowers for your grass cutting.
I'll grant you painting a house and plumbing a toilet aren't there yet!
It’s less work than it used to be, but remove the human who does all that and the dirty dishes and clothes will still pile up. It’s not like we have Rosie, from The Jetsons, handling all those things (yet). How long before the average person has robot servants at home? Until that day, we are effectively project managers for all the machines in our homes.
The really modern stuff is pretty much as simple as “load, start, unload” - you can buy combo washing machines that wash and dry your clothes, auto dispense detergent, etc. It’s not folding or putting away your clothes, and you still need to maintain it (clean the filter, add detergent occasionally, etc)… but you’re chipping away at what is left for a human to do. Who cares when it’s done? You unload it when you feel like it, just like every dishwasher.
Leave things wet in the washer too long and they smell like mold and you have to run it again. Leave them in the dryer too long and they are all wrinkled, and you have to run it again (at least for a little while).
I grew up watching everyone in my family do this, sometimes multiple times for the same load. That’s why I set timers and remove stuff promptly.
The dishwasher I agree, and it’s usually best to leave them in there at least for a little while once it’s done. However, not unloading it means dirty dishes start to stack up on the counter or in the sink, so it still creates a problem.
As far as “load, start, unload” goes: we covered unload, but load is also a step where some people have issues. They load the dishwasher wrong and things don't get clean, or they start it wrong and are left with spots all over everything. Washing machines can be overloaded or unbalanced. Washing machines and dryers can also be started wrong; the settings need to match the garments being washed. Some clothes are forgiving, others are not. There is still human error in the mix.
The mildew issue isn't a problem for the two-in-one washer/dryers, and for the wrinkles, most dryers have a cycle that keeps tumbling intermittently for hours after the cycle finishes, which mitigates most of the wrinkling. You've got a much, much longer window before wrinkles are an issue with that setup.
If you want to waste my time with an automated nonsense we should at least even the playing field.
This is feasible with today’s technology.
But on my Pixel now, on some phone trees it shows a UI with numbers and choices, and even predicts ahead for the other choices so you aren't forced to wait. Very handy!
Rule 0 is that you never put your angel investors out of work if you want to keep riding on the gravy train
I truly believe these types of papers don't deserve to be valued so much.
ChatGPT was the 5th most visited site in April. So yeah, lots of adoption.
We are absolutely in a hype and market bubble around AI right now - and like the dot com bubble, the growth came not in 2000, but years later. It turns out it takes time for a new technology to percolate through society, and I use the “mom metric” as a bellwether - if your/my mother is using the tech, you’d better believe it has achieved market penetration.
Until 2011 my mum was absolutely not interested in the web. Now she does most of her shopping on it, and spends her days boomerposting.
She recently decided to start paying for ChatGPT.
Sure, it’s a fuzzy thing, but I think the adoption cycle this time around will be faster, as the access to the tech is already in peoples’ hands, and there are plenty of folks who are already finding useful applications for genai.
Robotaxis, whether they end up dominated by Tesla or waymo or someone else entirely, are inarguably here, and the adoption rates (the USA is not the only market in the world) are ramping significantly this year.
I’m not sure I get your point about smartphones? They’re in practically every pocket on the planet, now, they’re not some niche thing.
Nobody shoved Gemini at me - ChatGPT sucked, and I was curious whether Sonnet was the best around for coding stuff, and found Gemini to be excellent. As a side note, it also generates excellent question papers - ChatGPT is dog shit compared to that.
This is not a matter of whether AI will replace humans wholesale. There are a few more predominant effects:
1. You'll need fewer humans to do the same task. In other forms of automation, this has led to a decrease in employment.
2. The supply of capable humans increases dramatically.
3. Expertise is no longer a perfect moat.
I’ve seen 2. My sister nearly flunked a coding class in college, but now she’s writing small apps for her IT company.
And for all of you who poo poo that as unsustainable. I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot. Yes APL would be harder, but it’s definitely doable. This is an example of 3.
Overall, this will surely cause wage growth to slow and maybe decrease. In turn, job opportunities will dry up and unemployment might ensue.
For those who still don’t believe, air traffic controllers are a great thought experiment—they’re paid quite nicely. What happens if you build tools so that you can train and employ 30% of the population instead of just 10%?
Cynically, I'm happy we have this AI generated code. It's gonna create so much garbage and they'll have to pay good senior engineers more money to clean it all up.
Which is my point, this is not about replacement, it's about reducing the need and increasing supply.
LLMs absolutely help me pick up new skills faster, but if you can't have a discussion about Rust and Svelte, no, you didn't learn them. I'm making a lot of progress learning deep learning and ChatGPT has been critical for me to do so. But I still have to read books, research papers, and my framework's documentation. And it's still taking a long time. If I hadn't read the books, I wouldn't know what question to ask or how to evaluate if ChatGPT is completely off base (which happens all the time).
I fully understand your point and even agree with it to an extent. LLMs are just another layer of abstraction, like C is an abstraction for asm is an abstraction for binary is an abstraction for transistors... we all stand on the shoulders of giants. We write code to accomplish a task, not the other way around.
fucking lmao
It would have taken me a month to write the GPU code I needed in Blender, and I had everything working in a week.
And none of this was "vibed": I understand exactly what each line does.
My point is that LLMs make it 10x easier to adapt and transition to new languages, so whatever moat someone had by being a "Rust developer" is now significantly eroded. Anyone with solid systems programming experience could switch from C/C++ to Rust with the help of an LLM and be proficient in a week or two. By proficient, I mean able to ship valuable features. Sure, they'll have to leverage an LLM to help smooth out new concepts like borrow checking, but they'll surely be able to deliver, given how strict the Rust compiler already is.
I agree fundamentals matter and good mentorship matters! However, good developers will be able to do a lot more diverse tasks which means more supply of talent across every language ecosystem.
For example, I don't feel compelled at all to hire a Svelte/Vue/React developer specifically anymore: any decent frontend developer can race forward with the help of an LLM.
Being able to program in C is something I can also do, but it sure as heck does not make me proficient Rust developer if I cobble some shit from a LLM together and call it a day.
I can appreciate how "businesses" think this is valuable, but - and this is often forgotten by salaried developers - as I am not a business owner I have neither the position nor the intention of doing any "business". I am in a position to do "engineering". Business is for someone else to worry about. Shipping "valuable features" is not something I care about. Shipping working and correct features is something I worry about. Perhaps modern developers should call themselves business analysts or something if they wish to stop engineering.
LLMs are souped up Stack Overflows and I can't believe my ears if I hear a fellow developer say someone on Stack Overflow ported some of their code to Rust on request and that this feature of SO now makes them a proficient Rust developer because they can vaguely follow the code and can now "ship" valuable features.
This is like being able to vaguely follow Kant's Critique of Pure Reason, which is something any amateur can do, compared to being able to engage with it academically and rigorously. I deeply worry about the competence of the next generation - and thus my own safety - if they believe superficial understanding is equivalent to deep mastery.
Edit: interesting side note: I am writing this as a dyed-in-the-wool generalist. Now ain't that something? I don't care if expertise dies off professionally, because I never was an "expert" in anything. I always liked using whatever works, and all systems feel more or less equal to me. Yet I can also tell that this approach is deeply flawed. In many important ways deep mastery really matters, and I was hoping the rest of society would keep that up; now they are all becoming generalists who don't know shit, and it worries me.
LLMs are great but what they really excel at is raising the rates of Dunning-Kruger in every industry they touch.
Please for the love of god tell me this is a joke.
Productivity doesn't increase on its own; economists struggle to separate it from improved processes or more efficient machinery (the “multi-factor productivity fudge”). Increased efficiency in production means both more efficient energy use AND being able to use a lot more of it for the same input of labour.
(ftr i’m not even taking a side re: is AI going to take all the jobs. regardless of what happens the fact remains that the reporting has been absolute sh*t on this. i guess “the singularity is here” gets more clicks than “sales person makes sales pitch”)
AI / GP robotic labor will not penetrate the market so much in existing companies, which will have huge inertial buffers, but more in new companies that arise in specific segments where the technology proves most useful.
The layoffs will come not as companies replace workers with AI, but as AI companies displace non-AI companies in the market, followed by panicked restructuring and layoffs in those companies as they try to react, probably mostly unsuccessfully.
Existing companies don’t have the luxury of buying market share with investor money, they have to make a profit. A tech darling AI startup powered by unicorn farts and inference can burn through billions of SoftBank money buying market share.
For the moment, AI is enabling a bunch of stuff that was too expensive or time-consuming to do before (flooding the commons with shiny garbage and pedantic text to drive “engagement”).
Despite the hype, it's going to be 2-3 years before AI applications really hit their stride, and 3-7 before general-purpose robotics really gets up to speed.
Exactly. These people are growth-seekers first, domain experts second.
Yet I saw progressive[1] outlets reacting to this as neutral reporting. So apparently it takes a “legacy media” outlet to wake people out of their AI stupor.
[1] American news outlets that lean social-democratic
The fallacy is in the statement “AI will replace jobs.” This shirks responsibility, which immediately diminishes credibility. If jobs are replaced or removed, that’s a choice we as humans have made, for better or worse.
Supposing that you are trying to increase AI adoption among white-collar workers, why try to scare the shit out of them in the process? Or is he instead trying to sell to the C-suite?
Of course, in the medium term, those companies may find out that they needed those people, and have to hire, and then have to re-train the new people, and suffer all the disruption that causes, and the companies that didn't do that will be ahead of the game. (Or, they find out that they really didn't need all those people, even if AI is useless, and the companies that didn't get rid of them are stuck with a higher expense structure. We'll see.)
This reminds me of the "Walter White" meme: "I am the documentation". When the CEO of a company that makes LLMs says something like that, "I perk up and listen" (to quote the article).
When a doctor says "water in my village is bad quality, it gives diarrhea to 30% of the villagers", I don't need a fancy study from some university. The doctor "is the documentation". So if the Anthropic/ChatGPT/LLaMa/etc. (mixing companies and products, it's ok though) say that "so-and-so", they see the integrations, enhancements, compliments, companies ordering _more_ subscriptions, etc.
In my current company (high volume, low profit margin) they told us "go all in on AI". They see that (e.g. with Notion-like tools) if you enable the "AI", that thing can save _a lot_ of time on "Confluence-like" tasks. So, paying $20-$30-$40 per person, per month, and that thing improving the productivity/output of an FTE by 20%-30% is a massive win.
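The back-of-envelope behind that pitch (toy numbers; the fully loaded FTE cost is my assumption, not a company figure):

    # Value of a claimed productivity uplift vs. the tool's price.
    tool_cost = 30      # $/person/month for the AI add-on
    fte_cost = 8_000    # $/person/month, fully loaded (assumed)
    uplift = 0.25       # claimed 25% productivity gain
    print(f"uplift worth ~${fte_cost * uplift:,.0f}/mo vs ${tool_cost}/mo tool cost")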
So yes, we keep the ones we've got (because mass firings bring the ministry of 'labour', unions, bad marketing, etc.). Headcount will organically be reduced (retirements, people getting new jobs, etc.) combined with minimizing new hires, and boom! Savings!!
If only it worked like this in reality. I used actual Notion AI features literally this week and watched it fail so hard it was hilarious. It continually told me there was no documentation on X despite an entire page of documentation existing on it, and had to be told so, at which point it apologised and regurgitated it.
Wow! What a time saver! I feel more productive already!
I won't paste in the result here, since everyone here is capable of running this experiment themselves, but trust me when I say ChatGPT produced (in mere seconds, of course) an article every bit as substantive and well-written as the cited article. FWIW.
"Move fast and break things" - Zuckerberg
"A good plan violently executed now is better than a perfect plan executed next week." - George S. Patton
You're not going to sell me your SaaS when I can rent AIs to make faster cheaper IP that I actually own to my exact specifications.
If you can’t extrapolate on your own thesis you can’t be knowledgeable in the field.
A good example was a guy on here who was convinced every company would be run by one person because of AI. You'd wake up in the morning and decide which of the products your AI came up with while you slept would be profitable. The obvious next question is "then why are you even involved?"
All that needs to be understood is that the narcissistic grandeur delusion that you will singularly be positioned to benefit from sweeping restructuring of how we understand labor must be forcibly divested from some people's brains.
Only a very select few are positioned to benefit from this and even their benefit is only just mostly guaranteed rather than perfectly guaranteed.
Robot run iron mine that sells iron ore to a robot run steel mill that sells steel plate to a robot run heavy truck manufacturer that sells heavy trucks to robot run iron mines, etc etc.
The material handling of heavy industry is already heavily automated, almost by definition. You just need to take out the last few people.
> "Yet when tech CEOs do the same thing, people tend to perk up."
Silicon Valley and Redmond make desperate attempts to argue for their own continued relevance.
For Silicon Valley VC, software running on computers cannot be just a tool. It has to cause "disruption". It has to be "eating the world". It has to be a source of "intelligence" that can replace people.
If software and computers are just boring appliances, like yesterday's typewriters, calculators, radios, TVs, etc., then Silicon Valley VC may need to find a new line of work. Expect the endless media hype to continue.
No doubt soda technology is very interesting. But people working at soda companies are not as self-absorbed, detached from reality and overfunded as people working for so-called "tech" companies.
The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018-2021
I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite
The demand for these products was probably not where it was expected at the time. Perhaps the answer to its biggest effect lies in how it will free up human potential and time.
If AI can do that — and that is a big if — then how and what would you do with that time? Well ofc, more activity, different ways to spend time, implying new kinds of jobs.
Where AI will be different (when we get there - LLMs are not AGI) is that it is a general human-replacement technology meaning there will be no place to run ... They may change the job landscape, but the new jobs (e.g. supervising AIs) will ALSO be done by AI.
I don't buy this "AGI by 2027" timeline though - LLMs and LLM-based agents are just missing so many basic capabilities compared to a human (e.g. the ability to learn continually and incrementally). It seems that RL, test-time compute (cf. tree search) and agentic applications have given a temporary second wind to LLMs, which were otherwise topping out in terms of capability, but IMO we are already seeing the limits of this too - superhuman math and coding ability (on smaller-scope tasks) do not translate into GENERAL intelligence, since they are not based on general mechanisms - they are based on vertical pre-training in these (atypical, in terms of general use) areas where there is a clean reward signal for RL to work well.
It seems that this crazy "we're responsibly warning you that we're going to destroy the job market!" spiel is perhaps because these CEOs realize there is a limited window of opportunity here to try to get widespread AI adoption (and/or more investment) before the limitations become more obvious. Maybe they are just looking for an exit, or perhaps they are hoping that AI adoption will be sticky even if it proves to be a lot less capable that what they are promising it will be.
I’d love a journalist using Claude to debunk Dario: “but don’t believe me, I’m just a journalist - we asked Dario’s own product if he’s lying through his teeth, and here’s what it said:”
I've been a heavy user of AI ever since ChatGPT was released for free. I've been tracking its progress relative to the work done by humans at large. I've concluded that its improvements over the last few years are not across-the-board changes, but benefit specific areas more than others. And unfortunately for AI hype believers, it happens to be areas such as art, which provide a big flashy "look at this!" demonstration of AI's power to people. But... try letting AI come up with a nuanced character for a novel, or design an amplifier circuit, or pick stocks, or do your taxes.
I'm a bit worried about YCombinator. I like Hacker News. I'm a bit worried that YC has so much riding on AI startups. After machine learning, crypto, the post-COVID-19 healthcare bubble, fintech, and NFTs, can they take another blow when the music stops?
Why is that the counter-narrative? Doesn't it seem more likely that it will continue to gradually improve, perhaps asymptotically, maybe be more specifically trained in the niches where it works well, and it will just become another tool that humans use?
Maybe that's a flop compared to the hype?
LLM bulls will say that they are going to generate synthetic data that is better than the real data.
The answer (as always) lies somewhere in the middle. Expert software developers who embrace the tech wholeheartedly while understanding its limitations are now in an absolute golden era of being able to do things they never could have dreamed of before. I have no doubt we will see the first unicorns made of "single pizza" size teams here shortly.
For any bet that involves purchasing bits of profits, you could be right and lose money, because the government generally won't allow the entire economy to implode.
By the time a bubble pops, literally everyone knows they're in a bubble; knowing something is a bubble doesn't make it irrational to jump on the bandwagon.
Sometimes my boss has asked me to do something that in the long run will cost the company dearly. Luckily for him, I am happy to push back, because I can understand what we're trying to achieve and help figure out the best option for the company based on my experience, intuition and the data I have available.
There's so much more to working with a team than: "Here is a very specific task, please execute it exactly as the spec says". We want ideas, we want opinions, we want bursts of creative inspiration, we want pushback, we want people to share their experiences, their intuition, the vibe they get, etc.
We don't want AI agents that do exactly what we say; we want teams of people with different skill sets who understand the problem and can interpret the task through the lens of their skill set and experience, because a single person doesn't have all the answers.
I think your ex-boss Mike will very soon find himself trapped in a local minimum of innovation, with only his own understanding of the world and a sycophantic yes-man AI employee that will always do exactly as he says. The fact that AI mostly doesn't work is only part of the problem.
Think of it as an IQ test of how new technology is used.
Let me give you an easier example of such a test.
Let's say they suddenly develop nearly-free unlimited power, i.e. fusion, next year.
Do you think the world will become more peaceful, or see much more war?
If you think peaceful, you fail; of course more war, it's all about oppression.
It's always about the few controlling the many.
The "freedom" you think you feel on a daily basis is an illusion that quickly fades.
It flickers for a moment, then it either says
"In 2025, mankind vastly underestimated the amount of jobs AI can do in 2035"
or
"In 2025, mankind vastly overestimated the amount of jobs AI can do in 2035"
How would you use that information to invest in the stock market?
So it's index funds (as always) with me anyway.
The first part of this statement is clearly false. People on the phone at a tech support company are very much necessary to generate revenue, people tending to fields were very much necessary to extract the value of the fields, draftsmen before CAD were absolutely necessary, etc.
Yet technology replaced them, or is in the process of doing so.
So then, your statement simplifies to “if you want to be safe from replacement, have a job that’s hard to replace”, which isn’t very useful anymore.
Money is just rationing. If you devalue the economy, you implicitly accept that, and the consequences for society at large.
Lenin's dictum, "A capitalist will sell you the rope you hang him with," comes to mind.
People charging on their credit cards. Consumers are adding $2 billion in new debt every day.
"Total household debt increased by $167 billion to reach $18.20 trillion in the first quarter"
Rich people buying even fancier goods and services. You already see this in the auto industry. Why build a great $20,000 car for the masses when you can make the same revenue selling $80,000 cars to rich people (and at higher margins)? This doesn't work of course when you have a reasonably egalitarian society with reasonable wealth inequality. But the capitalists have figured out how to make 75% of us into willing slaves for the rest. A bonus of this is that a good portion of that 75% can be convinced to go into lifelong debt to "afford" those things they wish they could actually buy, further entrenching the servitude.
1. cure cancer
2. fix the economy
3. keep everybody happily employed.
And he's saying we can only pick two, or maybe just one. Except the last one isn't really an option.
This could be because most work is actually frivolous (very possible), but it's also easy for them to sell those, since ostensibly (1) and (2) actually require a lot of out-of-distribution reasoning, thinking, and real agentic research (which current models probably aren't capable of).
(3) just makes the most money now with the current technology. Curing cancer with LLMs, though altruistic, is less realistic, and because of that it has no clear path to immediate profitability.
These "AGI" companies aren't doing this out of the goodness of their hearts with humanity in mind, its pretty clearly meant to be a "final company standing" type race where everyone at the {winning AI Company} is super rich and powerful in whatever new world paradigm shows up afterwards.
I remember the pre-Web days of Usenet and BBS and no one thought those were trendy.
AI is far more akin to crypto.
Pretty much everyone I know uses AI for something.
I am not saying this is a nothing burger, the tech can be applied to many domains and improve productivity, but it does not think, not even a little, and scaling won’t make that magically happen.
Anyone paying attention should understand this fact by now.
There is no intelligence explosion in sight, what we’ll see during the next few years is a gradual and limited increase in automation, not a paradigm change, but the continuation of a process that started with the industrial revolution.
“ Final Thought (as a CEO):
I wouldn’t force a full return unless data showed a clear business case. Culture, performance, and employee sentiment would all guide the decision. I’d rather lead with transparency, flexibility, and trust than mandates that could backfire.
Would you like a sample policy memo I’d send to employees in this scenario?”
A better, more reasonable CEO than the one I have. So I’m looking forward to AI taking that white collar job especially.
Even older people prefer to hire younger people.
But the last few paragraphs of the piece kind of give away the game — the author is an AI skeptic judging only the current products rather than taking in the scope of how far they’ve come in such a short time frame. I don’t have much use for this short sighted analysis. It’s just not very intelligent and shows a stubborn lack of imagination.
It reminds me of that quote “it is difficult to get a man to understand something, when his salary depends on his not understanding it.”
People like this have banked their futures on AI not working out.
It's the AI hype squad that are banking their future on AI magically turning into AGI; because, you know, it surprised us once.
Or these guys pivot and go back to building CRUD apps. They’re either at the front of something revolutionary… or not… and they’ll go back to other lucrative big tech jobs.
All I can tell you is that for what I use AI for now in both my personal and professional life, I would pay a lot of money (way more than I already am) to keep just the current capabilities I already have access to today.
Because I wouldn't miss it at all if it disappeared tomorrow, and I'm pretty sure society would be better off without it.
I’m a software engineer so for work I use it daily. It doesn’t “do my job” but it makes my job vastly more enjoyable. Need unit tests? Done. Want a prototype of an idea that you can refine? Here. Shell script? Boom. Somewhat complicated SQL query? Here ya go. Working with some framework you haven’t used before? Just having a conversation with AI about what I’m trying to do is so much better than sorting through often poorly written documentation. It’s like talking to another engineer who just recently worked on that same kind of problem… except for almost any problem you encounter. My productivity is higher. More than that, I find myself much more willing to take on bigger, harder problems because I know there’s powerful resources to answer just about any question I could have. It just makes me enjoy the job more.
In my personal life, I use it to cut through the noise that in recent years has begun to overwhelm the signal on the internet. Give me a salmon recipe. This used to be the sort of thing you’d put into Google and get great results. Now the first result is some ad-stuffed website that is 90% fluff piece with a recipe hidden at the bottom. Just give me the fricken recipe! AI does that.
The other day I was trying to figure out whether a designer-made piece of furniture was authentic despite missing tags. Had a back and forth with ChatGPT, sharing photos, describing the build quality, telling it what the store owner had told me. Incredible depth of knowledge about an obscure piece of furniture.
I also use the image generation all the time. For instance, for the piece of furniture I talked about, I took a picture of my apartment, and the furniture, and asked it to put the furniture into my space, allowing me to visualize it before purchase.
It’s a frickin super power! I cannot even begin to understand how people are still skeptical about the transformative power of this stuff. It kind of feels like people are standing outside the library of Alexandria, debating whether it’s providing any value, when they haven’t even properly gone inside.
Yes, there are flaws. I’m sure there are people reading this about to tell me it made them put glue on their salad or whatever. But what we have is already so deeply useful to me. Could I have done all of this through old-fashioned search? Mastered Photoshop and put the furniture into my apartment on my own? Of course! But the immediacy here is the game changer.
But if the business model collapsed and they had to raise prices, or work cheaped out and stopped paying for our access, then yeah, I’d step up and spend the money to keep it.
It was never used in the sense of denigrating potential competitors in order to stay employed.
> People like this have banked their futures on AI not working out.
If "AI" succeeds, which is unlikely, what is your recommendation to journalists? Should they learn how to code? Should they become prostitutes for the 1%?
Perhaps the only option would be to make arrangements with the Mafia like dock workers to protect their jobs. At least it works: Dock workers have self confidence and do not constantly talk about replacing themselves. /s
As to my recommendation to what they do — I dunno man. I’m a software engineer. I don’t know what I am going to do yet. But I’m sure as shit not burying my head in the sand.
The gross injustices in the original quote were already a fact, which makes the quote so powerful.
We don’t need AGI for there to be large displacement of human labor. What’s here is already good enough to replace many of us.
(ftr i’m not even taking a side re: will AI take all the jobs. even if they do, the reporting on this subject by MSM has been abysmal)
But we're going to get to a point where "the quality goes up" means the quality exceeds what I can do in a reasonable time frame, and then what I can do in any time frame...
However, there seems to be a big disconnect on this site and others.
If you believe AGI is possible and that AI can be smarter than humans in all tasks, naturally you can imagine many outcomes far more substantial than job loss.
However, many people don’t believe AGI is possible, and thus will never consider those possibilities.
I fear many will deny the probability that AGI could be achieved in the near future, thus leaving themselves and others unprepared for the consequences. There are so many potential bad outcomes that could be avoided merely if more smart people realized the possibility of AGI and ASI, and would thus rationally devote their cognitive abilities to ensuring that the potential emergence of smarter than human intelligences goes well.
A lot of the BS jobs are being killed off. Do some non-BS jobs get burned up in the fire along the way? Yes. But it's only the beginning.
As a research engineer in the field of AI, I am again getting this feeling. People keep doubting that AI will have any kind of impact, and I'm absolutely certain that it will. A few years ago people said "AI art is terrible" and "LLMs are just autocomplete" or the famous "AI is just if-else". By now it should be pretty obvious to everyone in the tech community that AI, and LLMs in particular, are extremely useful and already have a huge impact on tech.
Is it going to fulfill all the promises made by billionaire tech CEOs? No, of course not, at least not on the time scale that they're projecting. But they are incredibly useful tools that can enhance the efficiency of almost any job that involves sitting behind a computer. Even just something like Copilot autocomplete, or talking with an LLM about a refactor you're planning, is often incredibly useful. And the amount of "intelligence" that you can get from a model that can actually run on your laptop is also getting much better very quickly.
The way I see it, either the AI hype will end up like cryptocurrency: forever a part of our world, never quite living up to its promises, but I made a lot of money in the meantime. Or the AI hype will live up to its promises, but likely over a much longer period of time, and we'll have to test whether we can live with that. Personally I'm all for a fully automated luxury communism model of government, but I don't see that happening in the "better dead than red" US. It might become reality in Europe though, who knows.
It ain't done yet.
As a user, I haven’t seen a huge impact yet on the tech I use. I’m curious what the coming years will bring, though.
Enough to cause the next financial crash, driving at worst a steady rise to 10% global unemployment over the next decade.
That is the true definition of AGI.
LLMs are good productivity tools. I've been using them for coding, and they are massively helpful; they really speed things up. There are a few asterisks there, though:
1) It does generate bullshit, and this is an unavoidable part of what LLMs are. The ratio of bullshit seems to come down with reasoning layers above it, but some of it will always be there.
2) LLMs, for obvious reasons, tend to be more useful the more mainstream the languages and libraries I am working with. The more obscure it is, the less useful it gets. This may have a chilling effect on technological advancement: new, improved things get used less because LLMs are bad at them due to a lack of available material, and the new things shrivel and die on the vine without having a chance at organic growth.
3) The economics of it are super unclear. With the massive hype there's a lot of money sloshing around AI, but those models seem obscenely expensive to create and even to run. It is very unclear how things will be when the appetite for losing money on this wanes.
All that said, AI is multiple breakthroughs away from replacing humans, which does not mean LLMs are not useful assistants. An increase in productivity can lead to lower demand for labor, which leads to higher unemployment. Even modest unemployment rates can have grim societal effects.
The world is always ending anyway.
It is confusing because many of the dismissals come from programmers, who are unequivocally the prime beneficiaries of genAI capability as it stands.
I work as a marketing engineer at a ~$1B company, and the gains I have been able to provide as an individual are absolutely multiplied by genAI.
One theory I have is that maybe it is a failing of prompting ability that is causing the doubt. Prompting, fundamentally, is querying vector space for a result, and there is a skill to it. There is a gross lack of tooling to assist with this, which I attribute to a lack of awareness of this fact. The vast majority of genAI users don't have any sort of prompt library or methodology to speak of, beyond a set of usual habits that work well for them.
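For illustration, a minimal sketch of what such a prompt library might look like - just named templates with placeholders kept in one place - with every name and template hypothetical rather than any particular tool's API:

    # Hypothetical minimal prompt library: named templates with
    # placeholders, so working habits live in one place instead of
    # being retyped ad hoc. Nothing here is a real tool's API.
    PROMPTS = {
        "refactor_review": (
            "You are reviewing a proposed refactor.\n"
            "Project conventions:\n{conventions}\n\n"
            "Known past mistakes to avoid:\n{mistakes}\n\n"
            "Code under review:\n{code}\n\n"
            "List concrete risks first, then suggested improvements."
        ),
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Fill a named template; raises KeyError if a field is missing."""
        return PROMPTS[name].format(**fields)

    print(build_prompt(
        "refactor_review",
        conventions="4-space indent; no global state",
        mistakes="once invented a helper function that didn't exist",
        code="def add(a, b): return a + b",
    ))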
Regardless, the common notion that AI has only marginally improved since GPT-4 is criminally naive. The notion that we have hit a wall has merit, of course, but you cannot ignore the fact that we just got accurate 1M-token context in a SOTA model with Gemini 2.5 Pro. For free. Mere months ago. This is a leap. If you have not experienced it as a leap, then you are using LLMs incorrectly.
You cannot sleep on context. Context (and proper utilization of it) is literally what shores up 90% of the deficiencies I see complained about.
AI forgets libraries and syntax? Load in the current syntax. Deep research it. AI keeps making mistakes? Inform it of those mistakes and keep those stored in your project for use in every prompt.
I consistently make 200k+ token queries of code and context and receive highly accurate results.
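As a concrete illustration of that workflow, here is a rough sketch of assembling one of those large queries: walk the project, concatenate source files up to a character budget, and append a standing file of known mistakes. All file names and numbers are hypothetical, not any specific tool:

    from pathlib import Path

    # Rough sketch: gather project files into one large prompt.
    # ~800k characters is very roughly 200k tokens at ~4 chars/token.
    def gather_context(root: str, exts=(".py", ".md"),
                       budget_chars: int = 800_000) -> str:
        parts, used = [], 0
        for path in sorted(Path(root).rglob("*")):
            if path.is_file() and path.suffix in exts:
                text = path.read_text(errors="ignore")
                if used + len(text) > budget_chars:
                    break  # stop once the budget is spent
                parts.append(f"### {path}\n{text}")
                used += len(text)
        return "\n\n".join(parts)

    def build_query(root: str, task: str) -> str:
        # Hypothetical standing file of the model's past mistakes,
        # included in every prompt as described above.
        mistakes = Path(root) / "KNOWN_MISTAKES.md"
        notes = mistakes.read_text() if mistakes.exists() else ""
        return (f"{gather_context(root)}\n\n"
                f"Past mistakes to avoid:\n{notes}\n\n"
                f"Task: {task}")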
I build 10-20k loc tools in hours for fun. Are they production ready? No. Do they accomplish highly complex tasks for niche use cases? Yes.
The empowerment of the single developer who is good at manipulating AI AND an experienced dev/engineer is absolutely incredible.
Deep research alone has netted my company tens of millions in pipeline, and I just pretend it's me. Because that's the other part that maybe many aren't realizing - it's right under your nose - constantly.
The efficiency gains in marketing are hilariously large. There are countless ways to avoid 'AI slop', and it involves, again, leveraging context and good research, and a good eye to steer things.
I post this mostly because I'm sad for all of the developers who have not experienced this. I see it as a failure of effort (based on some variant of emotional bias or arrogance), not a lack of skill or intellect. The writing on the wall is so crystal clear.
When was the last time there was no financial pressure? I've been hearing how hard it is to be a small business owner for as long as I've been an adult, and that's about a quarter century by now.
That's the most charitable thing I can say, at least.
History is always strikingly similar; the AI revolution is the fifth industrial revolution, and it is wise to embrace AI and collaborate with it as soon as possible.
One can argue about the timeline and technology (maybe not LLM-based), but it does seem that human-level AGI will be here relatively soon - in the next 10 or 20 years, perhaps, if not 2. When this does happen, history is unlikely to be a good predictor of what to expect... AGI may create new jobs as well as destroy old ones, but what's different is that AGI will also be doing those new jobs! AGI isn't automating one industry, or creating a technology like computers that can help automate any industry - AGI is a technology that will replace the need for human workers in any capacity, starting with all jobs that can be conducted without a physical presence.