The author's catalog of harms is real. But it's worth noting that nearly identical catalogs were compiled for every major technological shift in modern history. The Internet destroyed print journalism and local retail, and enabled cyberbullying and mass surveillance. If we applied the same framework used here, Internet optimism in 2005 was also a form of "class privilege" (his term, I personally hate it).
And the pattern extends well beyond the Internet. For example, mechanized looms devastated weavers, the automobile wiped out entire trades while introducing pollution and traffic deaths, and recorded music was supposed to kill live performances.
In each case, the harms were genuine, the displacement was painful and unevenly distributed, and the people raising alarms were not irrational. They were often right about the costs. What they tended to miss was the longer trajectory: the way access to books, transportation, music, and information gradually broadened rather than narrowed, even if the transition was brutal for those caught in it.
History doesn't guarantee a good outcome for AI, but the author does advocate from a position of "class privilege": of already having access to good lawyers, good doctors, and good schools, and not feeling the urgency of tools that might extend those things to people who don't.
I dunno, I think you can also take a really dim view of whether society, as currently structured, is set up to use AI to make any of those things more accessible or better.
In education, we've certainly seen large tech companies give AI away to students, who then use it to do their work. Meanwhile, teachers are sold AI-detection products that are unreliable at best. Students learn less by, e.g., not actually doing the reading or writing, and teachers spend more of their time pointlessly trying to catch a practice that has become extremely common.
In medicine, in my most recent job search I talked to companies selling AI solutions both to insurers and to healthcare providers, to more quickly prepare filings to send to the other. I think the amount of paperwork per patient is just going to go up, with bots doing most of the actual form-filling, but the proportion of medical procedures that get denied will be mostly unchanged.
I am not especially familiar with the legal space, but given the adversarial structure of many situations, I'm inclined to expect that AI will allow firms to shower each other in paperwork, most of which will not be read by a human on either side. Clients may pay for a similar or higher number of billable hours.
Even if the technology _works_ in the sense of understanding the context and completing tasks autonomously, it may not work for _society_.
- teeth and nails with knives (in various shapes, from bone to steel)
- feet with carriages, bicycles, and cars
- hands with mills and factories, from steam engines to industrial robots
Literally every automation was meant to help humans somehow, so it naturally entailed the automation of some human function.
This one is an automation of the human brain.
While the "definition" of what's human doesn't end here (feelings, etc.), the utility does.
With loss of utility comes loss of benefits.
Mainly, your ability to differentiate yourself as a function of effort (physical or intellectual) gets diminished to zero. This raises concerns about the ability to achieve goals and aspirations — buying that house at some point, or securing your children's future — which may vanish for large swaths of the population, the "unfortunates." Which people those are is hard to tell, but arguably the current level of resources (assets) becomes a better predictor of the future for generations to come, with work mattering less and less.
By freezing utility based on one's own effort, you arguably freeze the structure of society in time. So yes, every instance sucked for the displaced party, but this one seems particularly broad (i.e. wider splash damage).
Genuinely interested in some sort of data on this.
My working assumption was that print news media was dying through a combination of free news availability on the internet, shifting advertising spending as a result, shifting ‘channels’ to social media, and shifting attention spans between generations.
We could be at the end of the rope with how much we can displace unevenly and how much people will put up with another cycle of wealth concentration — just as we might be at the end of the rope with how much our minds can be stunted and distracted before serious negative consequences occur.
It's been this overpowered tool for the wealthy to gather more wealth by erasing jobs, and for data brokers to perform intense surveillance.
I think both the scale (how many industries will be impacted effectively simultaneously) and speed of disruption that could be caused by AI makes it very different from anything we have seen before.
Think about the physical objects in the room you're in right now. How many of them were made from start to finish by human hands? Maybe your grandmother knitted the woollen jersey you're wearing -- made from wool shorn using electric shears. Maybe there's a clay bowl on the mantelpiece that your kid made in a pottery class. Anything else?
We are so far post automobile that it's hard to compare, but many of the benefits are illusory when you consider how society has evolved with them — commutes, for example, used to be shorter. Similarly, the air used to be far cleaner, and that's after we got rid of leaded gas and required catalytic converters decades ago.
Throughout our entire history as a species, abusers have always fucked the commons to the extreme using whatever tools they have available.
I mean, take something as "innocuous" as the cotton gin. Prior to the cotton gin there was a real decline in slavery, but once it became far easier to process cotton, slavery skyrocketed. Some of the worst laws the US has ever passed, like the Fugitive Slave Act, date from this period.
To think that technological progress means prosperity is extremely delusional.
We're still dealing with the ramifications of nuclear weapons, and a committed nuclear attack will assuredly happen again at some point in our species' future; we can only hope it doesn't take out all life on Earth when it does.
Industrialization has rapidly accelerated planet wide climate change that will have disastrous effects in many of our lifetimes. A true runaway condition will really test the merit of those billionaire bunkers.
All for what? A couple hundred years of "advancement"? A blink in the lifespan of humanity, and it dooms everyone to a hyper-competitive death drive toward an unlivable world.
As a society, our understanding of "normal" has narrowed down to the last 80 years of civilization. A normal centered on consumption, which stands to take it all away just as fast.
The techno-optimists never seriously propose any meaningful solution to millions losing their livelihoods and dignity so Sam Altman can add an extension to his doomsday bunker. They just go along with it as if they'll be invited down to weather the wet-bulb temperature.
The biggest harm that would come from AI is "everything at once": we're not talking about a single craft, we're talking about the majority of them. All while moving control of said technology into even fewer private companies' hands. The printing press didn't centralize all knowledge and utility in a few entities; it spread it. AI is knowledge and history centralized, behind paywalls and company policies. Imagine picking up a book about the history of music and on every second page there's an ad for McDonald's. This is how the internet ended up, and it's surely how LLM providers will end up.
And sure, some will run a local model here and there, but it will be irrelevant in a global context.
And it has been... quite a correct view? In the past few decades the US cranked up its Gini index from 0.35 to ~0.5 and successfully eliminated single-earner housebuyers[0]. It's natural to assume the current technology shift will eliminate double-earner housebuyers too. The next one would probably eliminate PC-buyers if we're lucky!
[0]: https://www.economist.com/united-states/2026/02/12/the-decli...
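For anyone unfamiliar with the statistic, here's a minimal sketch of how a Gini index is computed (0 = perfect equality, 1 = maximal concentration), using made-up incomes rather than real data:

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes.

    Uses the standard formula over sorted values x_1 <= ... <= x_n:
        G = sum_i (2i - n - 1) * x_i / (n * sum(x))   (i is 1-indexed)
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Illustrative only: four equal earners vs. one earner taking everything.
print(gini([1, 1, 1, 1]))    # 0.0  (perfect equality)
print(gini([0, 0, 0, 100]))  # 0.75 (max concentration for n=4 is (n-1)/n)
```

Nudging a population's distribution from ~0.35 toward ~0.5 on this scale is a substantial shift in who captures income.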
LLMs turn out to be biased against white men:
https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-g...
> When present, the bias is always against white and male candidates across all tested models and scenarios. This happens even if we remove all text related to diversity.
> For our evaluation, we inserted names to signal race / gender while keeping the resume unchanged. Interestingly, the LLMs were not biased in the original evaluation setting, but became biased (up to 12% differences in interview rates) when we added realistic details like company names (Meta, Palantir, General Motors), locations, or culture descriptions from public careers pages.
> To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.
So I think even these people should not feel secure. The perceived value of expertise is decreased by AI, which routinely claims to have PhD-level mastery of a lot of material. I think even for people with deep experience, in the current job market, many firms are reluctant to hire or pay in a way that's commensurate with that expertise. If you're a leader whose clout in an organization is partly tied to how many people are under you in an org chart (it's dumb, but we have all seen it), maybe that will begin to shrink quarter after quarter. Unless you can make it genuinely obvious that a junior or mid-tier person could not write a prompt that causes a model to spew the knowledge or insight you have won through years or decades of work, your job may become vulnerable.
I think the class divide that is most relevant is more literal and old-school:
- Do you _own_ enough of businesses that that's how you get most of your income? If so, maybe there's a way that AI will either cause your labor costs to decrease or your productivity per worker to increase, and either way you're probably happy.
- Can you invest specifically in the firms that are actively building AI, or applications thereof?
We're back to owners vs workers, with the added dynamic that if AI lets you partially replace labor with capital, then owners of course take a bigger share of value created going forward.
When the Luddites broke machines and burned the buildings that held them, it wasn't because they hated machines (well, at least initially). It's because they hated starving in the streets.
This is just a continuing part of the class war that has been going on since humanity started writing. Now, the only thing that might make this different is class/capital may have finally gotten the power to win it.
Every time you vote against a social safety net, you are ensuring that our AI future is a dark one. History has repeated this over and over.
An interesting multi-pronged approach is post labor economics which is being promoted by David Shapiro: https://www.youtube.com/@DaveShap
The basic premise is that currently we have households being supported by labor, capital, and transfers. With labor largely going away, that leaves capital and transfers. Relying on transfers alone will lead to ownership of the people by government. So we have to find ways to generate far more distributed capital ownership by the masses. This is what he plans, discusses, and promotes.
An argument formed from 1 word in a metaphor is illegitimate.
Actually, it's the lower classes that will escape AI replacing their jobs the longest: unskilled physical work will remain human for a while yet, whereas any job that can be done remotely is likely to be replaced by one or more agents.
The author addresses this point.
> While I’m sure the technology and its costs will continue to improve, it’s hard to see how that would mitigate most of these harms. Many would just as likely be intensified by greater speed, efficiency, and affordability.
> This sort of technology distributes instability to the many at the bottom, while consolidating benefit at the top—and there has arguably never been a more efficient mechanism for this than AI.
Personally, I'm really tired of every criticism of AI being met with "you haven't tried the latest models". The model isn't the point. It doesn't matter how good it is, it cannot possibly outweigh the harms.
> I wrote my last line of code about a month ago after 20+ years coding
You are exactly the kind of person the author talks about
> To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure. Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that your résumé opens doors for you.
I fear you've entirely missed the point of the article. Just because you believe you can get value from it, does not make up for the downsides to everyone else, and it's quite literally privilege to ignore that.
Skilled labor still has some legs.
This has echoes of moral panic to me. We hear about mental health crises triggered by LLMs in the media because they’re novel, uncommon and the stories grab attention. The modern equivalent of video games cause violence, or jazz is corrupting the youth?
I’ll concede AI has many perils, and I doubt we’ve even broken the surface of it yet, but I don’t think user psychosis is either now, or going to be, a common one.
We can’t simply dispose of an argument just because it smells in a particular way to us.
There exists great promise in AI to be an equalizing force, if implemented well
The future is yet to be written
That doesn’t sound like a promise then no?
Many other advancements might also carry that kind of existential danger. Genetic engineering, human machine interfacing, actual AGI.
I see the technological climb as a bit like climbing Mt Everest - it's possible that we might reach the peak and one day live on some kind of Star Trekian society, but the climb becomes increasingly treacherous along with the risk that we perish.
The trouble of course is that there's nothing else for us to do: it's in our nature to explore new frontiers. It's just not clear whether we'll be able to handle the responsibility that comes with the power.
Most people are left with no choice but to adapt or perish. The fact that he is contemplating optionality in the most profound automation in the industry is itself a form of... privilege.
I don't see how these are distinct. It's a technology shift, of course it's going to make certain jobs obsolete - that's how technology shifts work.
I'm not going to go through every quote I disagree with, but unlike some AI negativity discourse (some of which I agree with btw, being an optimist doesn't mean being irrational) this just reads as old man yells at cloud. Mainly because the author doesn't understand the technology, and doesn't understand the impact.
The author clearly does not understand model capabilities (seems to be in the camp that these are just "prediction machines") as they claim it's unreasonable to expect models to "develop presently impossible capabilities". This is not at all supported by prior model releases. Most, if not all, major releases have displayed new capabilities. There are a lot more misconceptions on ability, but again not going to go through all of them.
The author also doesn't understand the impact, saying stuff like "Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less". What? Is the author unaware of what average labor hours were like before the industrial revolution? AI is clearly going to be hugely net positive for white-collar (and with robots eventually blue-collar) workers in the near future (it already is for many).
They would only decrease much later, after a long period of social conflict, economic growth, and technological progress.
During the early phase of the Industrial Revolution (roughly 1760–1850):
- Agricultural workers who once labored seasonally were pushed into factory schedules of 12–16 hours per day, 6 days per week.
- Annual labor hours often exceeded 3,000 per worker.
- This was not because work became harder physically, but because capital-intensive machinery was expensive and had to run continuously to be profitable.
- Time discipline replaced task-based work: before industrialization, a farmer might stop when tasks were done; factory workers had fixed shifts.
- This trend persisted into the late 19th century.
Deepfakes are highly damaging right now because much of the world still doesn't realise that people can make deepfakes.
When everyone knows that a photo is no longer reliable evidence by itself, the harm that can be done with a deepfake will drop to a similar level as that of other unreliable forms of evidence, like spoken or written claims. (Which is not to say that they won't be harmful at all -- you can still damage someone's reputation by circulating completely fabricated rumours about them -- but people will no longer treat photorealistic images as gospel.)
I continue to be shocked by the way people (with platforms) talk up to the (speculative) line where AI replaces most or all jobs, and then lamely suggest that this will be bad for the people who've lost their jobs because they will be poor, or something. No. What actually happens in that scenario is that money ceases to have value, at least in the way we currently understand it to. That scenario will produce a handful of monsters—sociopathic trillionaire brains encysted in layers of automation and automated production—that will crave more resources, more land, more power, and they will fight each other by various means for those things, and the rest of us will be at best in the way.
This scenario is not a given, because it's not obvious that AI can become this capable in the near term where the stage is set for such a profoundly lopsided outcome, but you can bet these people are thinking about it now, if not talking about it, if not materially preparing for it. And they are, indeed, the only people with reason to feel optimistic about it.
Not only do you have to believe that you're in the group that benefits, but you also have to believe that "AI" improvement from here forward will stall out before the point where it goes from assisting your job to replacing it wholesale. I suspect there are far fewer people to whom that actually applies than people who believe it applies to them.
It is very easy for us to exist in that denialism bubble until we see the machine nipping at our heels.
And that is not even getting into second order effects, like even if you do provide AI-proof value, what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?
We would live in a post-scarcity utopia if big economic decisions were taken based on long-term optimal effects.
To be clear: I'm not claiming that AI rollouts won't be billion-dollar failed IT projects! They very well could be. But if that's the case, they aren't going to disrupt the labor market.
Again: you have to pick a lane with the pessimism. Both lanes are valid. I buy neither of them. But I recognize a coherent argument when I see one. This, however, isn't one.
AI journalism is strictly worse than having a human research and write the text, but it's also orders of magnitude cheaper. You see prompt fragments and other blatant AI artifacts in news articles almost every day. So we get newspapers that have the same shape as they used to, but that don't fulfill their purpose. That's a development that was already going on before AI, but now it's even worse.
Walked past a billboard advertisement the other day that was blatantly AI-generated. It had a logo with visible JPEG artifacts plastered on top of it. Real amateur-hour stuff. It probably was as cheap as it looked. But man, was it ever cheap to design.
You see the trend in software too. Microsoft's recent track record is a good example of this. They can barely ship a working notepad.exe anymore.
Supposedly some birds will eat cigarette butts thinking they're bugs, and then starve to death with a belly full of indigestible cigarette filters. Feels a lot like what is happening to a lot of industries lately.
They will start to burn down data centers.
It didn't make anyone better off.
The author is a grown man, describing how he felt after being insulted by a machine.
Imagine how high school kids feel after being mocked and humiliated by movie stars. That’s exactly what happened to a group of school kids in 2019. Chris Evans, Alyssa Milano, John Cusack, Debra Messing and others mocked the kids, made fun of their looks, made unflattering comparisons about them, etc.
What kind of damage could that do at that point in someone’s life? It’s horrendous.