> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.
They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.
Meanwhile, if you go fishing for niche whales, there's less competition and much higher ROI when they buy. That's why a lot of tech isn't really consumer friendly: it's not really targeting consumers, it's targeting other groups that extract wealth from consumers in other ways. You're selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the appeal of dynamic pricing on consumers and all sorts of other things. You're marketing it for analyzing civilians' communications to prying governments that want more control. You're selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions and small industry monopolies exist all around. Just find your niche whales to go hunting for.
And right now I'd say a lot of people in tech are happy to implement these things, but at some point it's going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but eventually all of this will affect you as well, because you're also a laboring consumer.
It's a vicious feedback loop, and the politicians would rather reduce taxes on the rich than reverse that trend.
At least that's how it looks to me
> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.
It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."
My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes; you just don't hear from them because that's not how the social media world works.
Yes you can indeed vibe code a startup. But try building on that or doing anything relatively complicated and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.
The next failure is replacing a 20-year-old, 3 MLOC legacy subsystem with a new React / microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.
The one constant is that no one learns from their mistakes or is held accountable for them.
I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech level absurdity isn’t because they’re trying to convince other people— it’s because they’re trying to convince themselves. I’d think it was funny as hell if my IRA wasn’t on the line.
Yes my pension is probably going down the same sinkhole with your IRA. Good luck. We need it.
Is he facing charges yet for sneaking drugs into his wife's food, or did he only ever discuss that with his buddy Jeff E and never actually follow through with it?
Now programming and art are both gambling.
AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts, sort of a class at a time, it's really excellent and productivity explodes.
I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.
As soon as you let it do more, though, it will invariably tie itself into a knot, all the while confidently asserting that it knows what it's doing.
Anyone trusting it to just "do its own thing" is out of their mind.
I think if someone's goal was just the tutorial code, it would have been very impressive to them that the AI can summon it.
I'm too poor for local LLMs. I think there might be a 2 or 4 GB graphics card in one of my junk PCs, but that's about it lol
This is an opportunity. You can have a good long career consulting/contracting for these types of companies.
This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.
AI is a superior solution to the problem Stack Overflow attempted to solve, and it is really great at quickly building bespoke, but fragile, tools for some niche problem you're solving. However, I have yet to see a single instance of it being used to sustainably maintain a production code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect, PRs that are largely AI generated.
They are fine, moderately useful here and there in terms of speeding up some of my tasks.
I wouldn't pay much more than 20 bucks for it though.
None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.
Back then, whenever there was a thread discussing the merits of Crypto, there would be people speaking of the certainty that it was the future and fiat currency was on its way out.
It's the same shit with AI. In part that's why I am tranquil about it. The disconnect between what AI shills say and the reality of using it on a daily basis tells me what I need to know.
I remember when everyone was reading SICP, and Clojure and Blockchain were going to burn the universe to the ground. Then crypto. Now this.
Been around much longer than HN and watched this cycle so many times it's boring. I'm still writing C.
It's good toilet time for when Reddit gets too annoying.
That said, a well-reasoned text should probably go on a blog, not here, or appear here only as a link. Otherwise you are wasting a lot of effort, with only a few people even noticing your comment and the discussion soon disappearing entirely into history.
Productivity gains won't show up on economic data and companies trying to automate everything will fail.
But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just like they needed to learn to use a PC.
There absolutely is but I'm increasingly realizing that it's futile to fight it.
The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.
Even if you restrict yourself to small, open models, there is so much unexplored territory in messing with their internals. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself to an API endpoint, isn't there something more clever we can be doing than badly re-implementing code that already exists on GitHub through vibe coding?
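To give one concrete example of the kind of poking around I mean, here's a minimal sketch with HF transformers (gpt2 is just a stand-in because it's tiny and open; swap in whatever small model you can run). Pulling per-layer hidden states out of a model takes a few lines, and from there you can start experimenting:

```python
# Minimal sketch: grab per-layer hidden states from a small open model.
# gpt2 is only a stand-in for whatever small model you actually run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The hype cycle never ends", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# One tensor per layer (plus the embedding layer),
# each of shape (batch, seq_len, hidden_dim).
print(len(out.hidden_states), out.hidden_states[-1].shape)
```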
But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.
I think we all do???
Even if I'm not coding a lot, I use it every day for small tasks. There is not much to code in my job, IT in a small traditional-goods export business. The tasks range from deciphering some coded EDI messages (D.96A as text or XML, for example), summarizing a bunch of said messages (DESADV, ORDERSP, INVOIC), finding missing items, Excel formula creation for non-trivial questions, and the occasional Python script, e.g. to concatenate data some supplier sent in a certain way.
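To give a feel for it, the Python bits are usually throwaway one-offs along these lines (a rough sketch; the folder layout, separator, and output name are invented for illustration):

```python
# Toy sketch: merge a pile of delimited files a supplier sent into one sheet.
# Paths, separator, and output name are made up for the example.
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("supplier_data/*.csv")):
    df = pd.read_csv(path, sep=";", dtype=str)  # keep everything as text
    df["source_file"] = path                    # remember where rows came from
    frames.append(df)

merged = pd.concat(frames, ignore_index=True)
merged.to_excel("merged.xlsx", index=False)     # needs openpyxl installed
```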
AI is so strange because it is BOTH incredibly useful and incredibly random and stupid. On the latter, see a comment I made earlier today in my history: the AI does not tell me when it uses a heuristic and does not provide an accurate result. EVERY result it shows me, it presents as final, authoritative and perfect. Even when, after questioning, it suddenly "admits" that it actually skipped a few steps and that's not the correct final result.
Once AI gets some actual "I" I'm sure the revolution some people are commenting about will actually happen, but I fear that's still some way off. Until then, lots of sudden hallucinations and unexpected wrong results - unexpected because normal people believe the computer when it claims it successfully finished the task and presents a result as correct.
Until then it's daily highs and lows with little in between: either it brilliantly solves some task, or it fails, and that includes failing to tell you about it.
A junior engineer will at least learn, but the AI stays pretty constant in how it fails and does not actually learn anything. The maker providing a new model version is not the AI learning.
You asked people what their project was for and you'd get a response that made sense to no one outside of that bubble, and if you pressed, people would get mad.
The bizarre thing is that this time around, these tools do have a bunch of real utility, but it's become almost impossible online to discuss how to use the tech properly, because that would require acknowledging some limitations.
I've been pretty consistently skeptical of the crypto world, but with web3 I was really hoping to be wrong. What's wild is there was not a single, truly distributed, interesting/useful service at all to come out of all that hype. I spent a fair bit of time diving into the details of Ethereum and very quickly realized the "world computer" there (again, wonderful idea) wasn't really feasible for anything practical (I mean other than creating clever ways to scam people).
Right now in the LLM space I see a lot of people focused on building old things in new ways. I've realized that not only do very few people work with local models (where they can hack around and customize more), a surprisingly small number of people write code that even calls an LLM through an API for some specific task that previously wasn't possible (regular ol' software built using calls to an LLM has loads of potential). It's still largely "can some variation on a chatbot do this thing I used to do for me".
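To be concrete about "regular ol' software built using calls to an LLM", I mean something as boring as this bare-bones sketch (using the OpenAI Python client; the model name, the buckets, and the helper are placeholders I made up):

```python
# Sketch: ordinary software where one function happens to be an LLM call,
# here bucketing free-text tickets. Model name and buckets are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def categorize(ticket_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: billing, shipping, or other."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

print(categorize("My package never arrived and tracking is stuck."))
```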
As a contrast, in the early web, plenty of people were hosting their own website and messing around with all the basic tools available to see what novel thing they could create. I mean, "Hamster Dance" was its own sort of slop, but the first time you saw it you engaged with it. Snarg.net still stands out as novel in its experiments with "what is an interface".
"I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini
Any guesses on how long this lasts?
6-12 months before non-technical leaders take notice and realize they can't actually fire half their team
“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)
I understand why people aren't polite to LLMs, but honestly I think not thanking them will make people act more dickish to other humans.
Neither your bicycle nor your spell-checker hold conversations and answer questions, neither of them is being used as therapist or virtual girl/boy friend, and neither's whole shtick is being trained on a ginormous human corpus to convincingly respond like a person.
I like to think we can perceive a difference between a bicycle and something specifically developed and trained to pass for intelligence...
They're trying to get government to hand them a moat. Spoilers... There's no moat.
Many users don't want to acknowledge this about the company making their fav AI.
There's no way any LLM code generator can replace a moderately complex system at this point, and looking at the rate of progress, this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.
The rate of progress in the last 3 years has exceeded my expectations. The pace has increased a lot over the past year. The last 2 months have been insane. No idea how people can say "no improvement".
I wonder if there are parallel realities. What I remember from the last year is a resounding yawn at the latest models landing, and even people being actively annoyed at e.g. ChatGPT 4.1 vs 4 for being nerfed. Same for 5: big fanfare, and a not-that-excited reception. And same for Claude. And nothing special in the last 2 months either. Nobody considers Claude 4.6 some big improvement over 4.5.
Sorry for closing this comment early, I need to leave my car at the house and walk to the car wash.
There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.
I still think that any claims that it can operate at a human level are complete bullshit.
It can speed things up well in some contexts though.
It's comments like these that make me not really want to interact with this topic anymore. There's no way that your comment can be taken seriously. It's 99.9% a troll comment, or simply delusional. 3 years ago the model (GPT-3.5, basically the only one out there) was not able to output correct code at all. It looked like Python if you squinted, but it made no sense. To compare that to what we have today and say "a bit more useful" is not a serious comment. It cannot be a serious comment.
It's a religious war at this point. People who hate AI are not going to admit anything until they have no choice.
And given the progress in the last few months, I think we're a few years away from nearly every developer using coding agents, kicking and screaming in some cases, or just leaving the industry in others.
My comment was that I think AI is useful. I use it on a daily basis, and have been for quite a while. I actually pay for a ChatGPT account, and I also have access to Claude and Gemini at work.
That you frame my comment as "people who hate AI" and call it "a religious war" honestly says more about you than about me.
It seems that if you don't think that AI is the second coming of Christ, you hate it.
But you're sort of doing the same thing I did - "second coming of Christ"?!
I have no intention of changing your mind. I don't think highly enough of the people I reply to to believe they can change their minds.
I reply to these comments for other people to read. Think of it as me adding my point of view for neutral readers.
Either way, I could use AI for some coding tasks back in the GPT-3.5 days. It was unreliable, but not completely useless (far from it, in fact).
Nowadays it is a little more reliable, and it can do more complex coding tasks with less detailed prompts. AI now can handle a larger context, and the "thinking" steps it adds to itself while generating output were a nice trick to improve its capabilities.
While it makes me more productive on certain tasks, it is the sort of improvement I expected from 3 years of it being a massive money black hole. Anything less would actually be embarrassing, all things considered.
Perhaps if your job is just writing code day in and day out, you would find it more useful than I do? As a software engineer I do quite a bit more than that, even if coding is the part of the work I used to enjoy the most.
What developments have been "staggering"? Claude 4.6 vs 4.5? ChatGPT 5.2 vs 5? The Gemini update?
Only the hype has been staggering, and bs non-stories like the "AI agents conspire and invent their own religion".
Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later
The case where it's not obvious is when the effect is <1.5x. I think that's clearly where we're at
That such a collapse of the consumption economy, even just counting white collar jobs cut by a "mere" 30%, would also mean a collapse of the stock market, society, infrastructure, and even basic safety doesn't enter their minds.
similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.
eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.
the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.
LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.
100%. I have a basically unlimited Claude balance at work. I do not think about cost except for fun. The CEO thinks every engineer has to use AI because nobody is gonna just be using text editors alone in the future.
If marketing it was the sole objective there are many other stories they could have told, but didn't.
"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)
You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.
and by pretending that he has no option but to "go against his own will" and continue it, he gets to make it sound nuclear-bomb-level important.
This is hyping 101.
Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.
- Alex Karp genuinely believes China is a threat
- I think China is an economic threat, especially for tech
- An AI arms race is itself threatening; it is not like the nuclear deterrent
- Geopolitical tensions are very convenient for Alex Karp
- America has a history of exaggerating geopolitical threats
- Tech is very credulous with politics
"Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think".
I mean, dude, a corporate head having a conflict of interest in saying insincere shit to promote the stuff his company makes is not some conspiracy thinking about everybody playing "8 dimensional chess".
It's the very basic baseline case.
Like I said, you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI. He could have been bright-and-cheery bullish; there was no real advantage to laying his cards out on his qualms.
If the USA pauses AI development, do you think China will?
This is the source of migration allowance but I bet you stand there with a "Refugees welcome" sign.
But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.
Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.
Still wouldn't mean much. Wars are won on capacity, logistics (the practical side, not the ability to calculate it), land and other advantages, and, when it comes to boots on the ground, courage, knowledge of the place, local population support, etc. Not "analyzing info sources" at scale, which is mostly a racket that pretends to be important.
"If there's a chance psychic powers are real..."
Against other countries? The biggest endgame is own population control. That has always been the biggest problem/desire of elites, not war with other countries.
Shumer is of a similar stock but less capable, so he gets caught in his lies.
I'm still shocked people work with Altman knowing his history, but given the Epstein files etc it's no surprise. Our elite class is entirely rotten.
Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.
But what I really hate about AI and how most people talk about it is that if one day it does what the advertisements say, all white collar jobs collapse.
Then everything collapses. The carpenter will also be out of work if more than half of his client base cannot afford his work anymore.
If everyone is out of a job, then what is the point of the economy? Who is doing what work for whom? If nobody can afford anything, then why are we even doing this?
And, at some point, needs have to be met. If you were to drop 1000 random people on a remote island, you wouldn't expect nothing to happen just because there aren't any employers with jobs to hire people to do. People would spontaneously organize for survival and a local economy would form.
I find depictions of post-apocalyptic societies in sci-fi to be difficult to accept, too. Like Elysium: how could the entire Earth be an underclass, yet the space station needs them to work to be able to survive? That would be the easiest siege warfare of all time; do literally nothing and the space station eventually starves to death. Like Fallout: how could places stay completely run down for centuries? You mean to tell me that nobody would at least start a scrap yard and start cleaning up the old baby buggies and scrap metal from burnt out hulks of cars?
And then grown-ass adults tell stories about what could happen as if they are experts in economics. Nobody knows how the economy works. Hell, half the Nobel prizes in economics in the last decade have basically been about proving that the stupid, fairytale version of economics that existed for the last 100 years was a complete farce.
Yeah, I can definitely see a market collapse leading to a lot of mortgages getting foreclosed. But a complete shutdown? It seems preposterous. How? Why would everyone go along with it?
I live in an area that's not a tech hub and lots of people get confrontational when they find out I work in tech. First they want to know if I'm working on AI, and once they're satisfied that the answer is no, they start interrogating me about it. Which companies are behind it, who their CEOs are, who's funding them, etc. All easily Googleable, but I'm seen as the AI expert because I work in tech.
My career is built on people not knowing how to Google lmao (IT)
To most people, AI is ChatGPT. Maybe Gemini.
Claude? No idea.
VS Code, Cursor, Antigravity, Claude Code? Blank stares.
Same as when the computer came: some will fall behind. Excel monkeys copy-pasting numbers will go. Copywriters and other written-word jobs are already gone. Art for simple images is now AI, all done by one person.
Unless you want a Soviet system where jobs are kept to keep people busy.
In $big_corp, everyone seems to have penis envy over “head count”, constantly checking whose is bigger.
If you want to see an executive have an existential crisis, ask them how many of those folks are necessary for the org to run.
There's an argument that even under capitalism, a lot of jobs still only exist in order to keep people busy.
I remember when Covid got out of control in China, a lot of people around me [in NY] had this energy of "so what, it'll never come to us." I'm not saying that they believed that, or had some rational opinion, but they had an emotional energy of "it's no big deal." The emotional response can be much slower than the intellectual response, even if that fuse is already lit and the eventuality is indisputable.
Some people are good at not having that disconnect. They see the internet in 1980 and they know that someday 60 years from now it'll be the majority of shopping, even though 95% of people they talk to don't know what it is and laugh about it.
AI is a little bit in that stage... It's true that most people know what it is, but our emotional response has not caught up to the reality of all of the implications of thinking machines that are gaining 5+ IQ points per year.
We should be starting to write the laws now.
If we started writing lots of laws around NFTs, it would just be a bunch of pointless (at best), or actively harmful laws.
Nobody cares about NFTs today, but there were genuinely good ideas about how they’d change commerce being spouted by a small group of people.
People can say “this is the future” while most people dismiss them, and honestly the people predicting tectonic shifts are usually wrong.
I don’t think that the current LLM craze is headed for the same destiny as NFTs, but I don’t think that the “LLM is the new world order” crowd is necessarily more likely to be correct just because they’re visionaries.
This is where the misrepresentation... no, the lie comes in. It always does in these "sensible middle" posts! The genre requires flattening both sides into dumber versions of themselves to keep the author positioned between two caricatures. Supremely done, OP.
If you read Matt's original article[0] you see he was saying something very different. Not "AI is going to kill lots of people" but that we're at the point on an exponential curve where correct modeling looks indistinguishable from paranoia to anyone reasoning from base rates of normal experience. The analogy is about the epistemic position of observers, not about body counts.
This tech is a breakthrough for so many reasons. I’m just not worried about it replacing my job. Like, ever.
Actually, in my city it wasn't the ATMs but the apps, which made it possible to do almost everything on the phone, that significantly reduced the number of bank branches in the last few years. I rarely have to go to the bank, but when I do, I see that another nearby branch has closed and I have to go somewhere even farther.
I honestly believe everything will be normalized. A genius with the same model as me will be more productive than I am, and I will be more productive than some other people, exactly the same as without AI.
If AI starts doing things beyond what you can understand, control and own, it stops being useful, the extra capacity is wasted capacity, and there are diminishing returns for ever growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's Law in terms of power needed to generate stuff.
The nature of the work will change, you'll manage agents and whatnot, I'm not a crystal ball, but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.
AI users and especially AI boosters are the best thing that has happened to me. I no longer feel like an impostor; maybe the opposite. The confidence they gave me got me a promotion and a raise.
I still get mad at AI sometimes, like when someone uses it to resolve an easy issue tagged "good first issue" that I specifically took the time to describe so a new hire could get into a very complex codebase, or when I review a fully vibe-coded PR before the person who vibe-coded it has, but overall the coming of AI (especially since Sonnet/Opus 4.5 and GPT-5.1) is great.
It's very useful as a coding autocomplete. It provides a fast way to connect multiple disparate search criteria in one query.
It also has caused massive price hikes for computer components, negatively impacted the environment, and most importantly, subtly destroys people's ability to understand.
Whenever there is a massive paradigm shift in technology like we have with AI today, there are absolutely massive, devastating wars because the existing strategic stalemates are broken. Industrialized precision manufacturing? Now we have to figure out who can make the most rifles and machine guns. Industrialized manufacturing of high explosives? Time to have a whole world war about it. Industrialized manufacturing of electronics? Time for another world war.
Industrialized manufacturing of intelligence will certainly lead to a global scale conflict to see if anyone can win formerly unwinnable fights.
Thus the concerns about whether you have a job or not will, in hindsight, seem trivial as we transition to fighting for our very survival.
i.e. a new stalemate in the form of multiple inward-focused countries/blocs
Claude, go hack <my enemy nation-state> and find me ways to cause them harm that are unlikely to be noticed until it is too late for them to act on it.
Drones change everything we think we know about warfare, except for the adage that logistics is what wins wars (post-industrialization).
> Industrialized manufacturing of electronics?
Ukraine seems to be exploring this and rewriting military doctrine. The Iranian drones the Russians are using seem to be effective, too. The US has drones, too, and we've discovered that drone bombing is not helpful with insurgencies; we haven't been in any actual wars for a while, though.
> Industrialized manufacturing of intelligence
I don't think we've gotten far enough to discover how/if this is effective. If GP means AI, then we have no idea. If GP means fake news via social media, then we may already be seeing the beginning of the effects. Both Obama and Trump drew a lot of their support from social media.
Having written this, I think I flatly disagree with GP that technology causes wars because of its power. I think it may enable some wars because of its power differential, but a lot is discovered through war. WWI discovered the limitations of industrial warfare, and of chemical weapons. Ukraine is showing what constellations of mini drones (as opposed to the US' solitary maxi-drones) can do, simply because they are outnumbered and forced to get creative.
It seems you may be extending this
If you don't think the internet is a vital tech in the front-line drone war, I would invite you to watch Perun's recent video on Starlink.
GP's assertion about tech revolutions making wars doesn't make any sense to me on any level, but it's not just because the latest revolutions were 'not military tech'
i'm liking William Spaniel's model: wars happen when (1) there is a substantial disagreement between parties and (2) there is a bargaining friction that prevents reaching a less costly negotiated resolution.
I don't see how a technical revolution necessarily causes either, much less both, of those conditions. there sure is a lot of fear and hype going around - and that causes confusion and maybe poor decisions - but we should chill on the apocalyptics
Hahah, this guy Gen-Zs.
Anthropic’s Dario Amodei deserves a special mention here. Paints the grimmest possible future, so that when/if things go sideways, he can point back and say, "Hey, I warned you. I did my part."
There is probably a psychological term that explains this phenomenon; I asked ChatGPT and it said it could be considered "anticipatory blame-shifting" or "moral licensing".
Not all of it was like that. I think, oddly enough, it was Tesla, or just Elon Musk, claiming you'd soon be able to take a nap in your car on your morning commute through some sort of Jetsons tube, or that you could let your car earn money on the side while you weren't using it, which might actually be appealing to the average person. But a lot of it felt like self-driving car companies wanted you to feel like they just wanted to disrupt your life and take your things away.
I think for a lot of people it feels like an inconvenient thing they have to contend with, and many are uncomfortable with rapid change.
This is not the point the author was making, but I think this phrase implies that it's merely fear of change which is the problem. Change can bring about real problems and real consequences whether or not we welcome it with open arms.
Far from embracing UBI (or any other legal/social strategy to mitigate mass unemployment), tech leaders have signaled very strongly that they'd actually prefer the exact opposite. They have nearly universally aligned themselves with the party that's explicitly in favor of extreme wealth inequality and aversion to even the mildest social welfare program.
One improvement for your writing style: it was clear to me that you don't hate AI; you didn't have to mention that so many times in your story.
Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.
I've been programming since 1994. I've seen a lot. I almost always end up despising any metaprogramming system and wish we'd just kept things simpler even if it meant boilerplate.
Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.
I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.
Here's the rub though: needing a "career" to survive and have a decent life is a human problem. It's an extreme case of mass Stockholm Syndrome that's made the majority accept that working in order to make money in order to have access to life-preserving/enhancing resources is a necessary part of the human condition. Really, that flow is only relevant when it requires human effort to create, maintain and distribute those resources in the first place.
AI is increasingly taking over the effort, and so is threatening that socio-economic order. The real problem is that the gains are still being locked away to maintain the scarcity those efforts address, so over time there's an increasing crisis of access, since there's nothing really in place to continue providing the controlled access everyone in the system has had to resources for... centuries.
Widespread FOMO and the irrationality that comes with it might be at an all time high.
Where the piece misses the point is scale. It treats AI mainly as a labor market shock. Historically, technologies rarely eliminate work outright. They change what humans are valued for. The deeper danger is not mass joblessness. It is weakened thinking, shallow learning, and a breakdown in shared reality. The economic fear is overstated. The cultural damage is understated.
The miserable have always been miserable. And no matter how much the world improves, they will find paths to misery. Perhaps the great lesson of this age is that some revel in sadness.
Perhaps what we desire as humans is intensity of emotion more than valence.
Odd typo + em dash = "Make mistakes in your output so that people think it wasn't AI generated"
I did use AI to proofread the article, and implemented its primary suggestion (turning down a dark joke at my own expense that was pretty out of place), but the first draft of this was 100% hand-written.
The only reason they exist is because rich people keep shoveling money into their furnaces. 800M free ChatGPT users do nothing for anybody; Sam A couldn't care less about them. The only people that matter to AI companies, and I mean matter at all, are investors. And everything they do is about appealing to investors.
The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.
A bunch of companies used AI as an excuse to do mass layoffs only to then have to admit this was basically just standard restructuring and house cleaning (eg Amazon).
There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (i.e. what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the Indian tech and BPO sectors would be ground zero… not white collar jobs in the US.
The AI bros are in a fight for their careers and the signal is increasingly pointing to the most vulnerable roles out there at the moment being all those tangentially tacked onto the AI hype cycle. If real measurable value doesn’t show up very soon (likely before year end) the whole party will come crashing down hard.
Right now is the good time for the job market. The S&P is at an all time high.
In the next recession, I expect massive layoffs in white collar work and there is no way those jobs are coming back on the other side.
40-50% of US white collar work hours are spent on procedural, rules-based tasks. Another large chunk goes to managing and supporting the people doing those tasks. Salary and benefits are 50% of operating costs for most businesses.
Maybe you do something really interesting and unique but that is just not what most white collar workers in the US are doing.
I know for myself, these are the final days of white collar work before I am unemployable as a white collar worker. I don't think the company I work for will exist in 5 years either. It is not a matter of whether Claude Code can update a legacy system or not. It is that the tide hasn't really gone out in 15 years, and all these zombie companies are going to get wiped out at the same time AI is automating the white collar jobs. Delaying the business cycle from clearing over and over is not a free lunch; it is a bill that has been stacking up for a long time.
On the other side, the business as usual of today won't be an option.
From my own white collar experience, I think if you view procedural rules-based tasks as a graph, the automation of any one task depends so much on other tasks being automated. So it will seem like the automation is not working but at some point you get a contagion of automation. Then so much automation will happen at once.
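To make the "contagion" point concrete, here's a toy sketch (the dependency graph is invented; the point is just that nothing much seems to happen for a few rounds, then a keystone task falls and everything downstream unlocks at once):

```python
# Toy model: a task can be automated only once all its prerequisites are.
# The graph is made up; note the burst once keystone task D falls.
deps = {
    "A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"],
    "E": ["D"], "F": ["D"], "G": ["D"], "H": ["D"],
}
automated = set()
round_no = 0
while len(automated) < len(deps):
    ready = {t for t, pre in deps.items()
             if t not in automated and all(p in automated for p in pre)}
    if not ready:
        break  # nothing left is unlockable
    round_no += 1
    automated |= ready
    print(f"round {round_no}: {sorted(ready)}")
```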
There isn't gonna be a huge event in the public markets, except for Nvidia, Oracle and maybe MSFT. Firms that are private will suffer enormously though.
The fact that the NYT thought the guy was worthy of a profile is yet another piece of evidence that I should never have given that paper money in the first place.
So if AI improves a bit, it might be better than the current customer service workers in some ways...
The customer service reps are warm bodies for sensitive customers to yell at until they tire themselves out.
Tolerating your verbal abuse is the job.
Amazon never intended to improve the quality of the service being offered.
You're not going to unsubscribe, and if you did they wouldn't miss you.
Maybe not the best example? The Luddites were skilled weavers who had their livelihoods destroyed by automation. The government deployed 12,000 troops against the Luddites, executed dozens after show trials, and made machine breaking a capital offense.
Is that what you have planned for me?
> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan
I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).
> We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.
Rather
> We should acknowledge that this technology will cause significant short term harm if we don't act to mitigate it. How can we act to do that, while still obtaining the great long term gains from it?
Also you might not care, but normies sure care!
So feelings have soured and tech seems more dystopian. Any new disruptive technology is bound to be looked upon with greater baseline cynicism, no matter how magical. That's just baked in now, I think.
When it comes to AI, many people are experiencing all the negative externalities first, in the form of scams, slop, plagiarism, fake content - before they experience it as a useful tool.
So it's just making many people's lives slightly worse from the outset, at least for now
Add all that on top of the issues the OP raises and you can see why so many have bad feelings about it.
And relying on your government to do the right thing as of 2026 is, frankly, not a great idea.
We need to think hard ourselves how to adapt. Perhaps "jobs" will be the thing of the past, and governments will probably not manage to rule over it. What will be the new power structures? How do we gain a place there? What will replace the governments as the organizing force?
I am thinking about this every day.
Well, maybe together with full societal collapse, but as long as there are nation-states and competition, AI will be developed.
The difference is, with AI, it looks like they're really pulling it together and delivering something. For years it was "any minute now, this is gonna change everything, keep giving us money please" and it was all amusing but not-worth-the-hassle chatbots until recently, but Professor Harold Hill came through in the clutch and put together a band that can play something resembling music at the latest possible moment. And now we have agents that can do real work and the distinct possibility that the hermetic magicians might actually awaken their electronic god.
Also, I can easily make even the most slopped models give me 100% "human written" by simply fiddling with the sampler settings. Catch me if you can with temp = 100 and a modern distribution-aware truncation sampler (e.g. P-less decoding, top-H, even min_p).
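For the curious, that's just a couple of generate() flags in HF transformers (a rough sketch; min_p needs a fairly recent transformers version, and gpt2 is only a stand-in for whatever model you'd actually run):

```python
# Rough sketch of the sampler fiddling: crank temperature way up, then let a
# distribution-aware truncation (min_p here) keep only plausible tokens.
# Caveat: whether min_p is applied before or after temperature scaling
# differs between implementations/versions, which changes the effect a lot.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("AI detectors think", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=100.0,  # near-uniform over whatever survives truncation
    min_p=0.1,          # keep tokens with >= 10% of the top token's probability
    max_new_tokens=40,
)
print(tok.decode(out[0], skip_special_tokens=True))
```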
That and the whitewashing it allows on layoffs from failing or poorly planned businesses.
Human issues as always.
We've solved this problem before.
You have 2 separate segments:
1. Lessons that forbid AI
2. Lessons that embrace AI
This doesn't seem that difficult to solve. You handle it like how you handle calculators and digital dictionaries in universities.
Moving forward, people who know fundamentals and AI will be more productive. The universities should just teach both.
it was easy to force kids to learn multiplication tables in their head when there were in-person tests and pencil-and-paper worksheets. if everything happens through a computer interface... the calculator is right there. how do you convince them that it's important to learn to not use it?
if we want to enforce non-ai lessons, i think we need to make sure we embrace more old-school methods like oral exams and essays being written in blue books.