That's the thing: hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the foreseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which we (like management) have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
In short, it's time to wake up.
Developers waste a lot of time writing a bunch of boilerplate code that they hate writing and that doesn’t make them happy and has nothing to do with craftsmanship. We also just spent 60 years in a culture that dignified work done by developers to the extreme, and honestly that produced some of the most narcissistic minds the world has ever seen: Thiel, Andreessen, et al - and why? Because we dignify work in a capitalistic culture by increasing wages.
You want to talk about a culture that values craftsmanship? Let everyone have the time and the freedom and the security to build whatever they want to build, instead of what they have to build in order to have health insurance.
> We need to talk seriously and plainly about the spiritual and existential damage done by LLMs.
Uhhhhhh…..excuse me?
> despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
More money was invested in the dot-com boom, and in the lead-up to the railroad era, before anyone could ride. So this isn’t new.
This is the exact sentiment people voice about other professions and crafts. Countless people, elsewhere and on HN, have noted that it's neither productive nor wise to be so precious about a task that evolved out of necessity, turning it into the ritualized, reified pedestal-putting that prevents progress. It conflates process with every single other thing about whatever is being spoken about.
Also: complaining about a new technology that is bottlenecked by lack of infrastructure, by pushback from people with your mindset, and that is poorly understood in its best uses because the people who aren't of your mindset are still figuring out and creating the basic tooling we currently lack?
That is a failure of basic observation. A failure to see the thing you don't like because you don't like it and decide not to look. Will you like it if you look? I don't know; it sounds like your mind is made up, or you might find good reasons why you should maintain your stance. In the latter case, you'd be able to make a solid contribution to the discussion.
Oh, and the “I'm not accepting 'the AI did it' as an excuse for failures” camp. Just like outsourcing to other humans: you chose the tool(s), you are responsible for verifying the output.
I got into programming and kicking infrastructure because I'm the sort of sad git who likes the details, and I'm not about to let some automaton steal my fun and turn me into its glorified QA service!
I'd rather go serve tables or stack shelves, heck I've been saying I need a good long sabbatical from tech for a few years now… And before people chime in with “but that would mean dropping back to minimum wage”: if LLMs mean almost everybody can program, then programming will pretty soon be a minimum wage job anyway, and I'll just be choosing how I earn that minimum (and perhaps reclaiming tinkering with tech as the hobby it was when I was far younger).
Now this, putting aside my thoughts above, I find a compelling argument. You just don’t want to. I think that should go along with a reasonable understanding of what a person is choosing not to use, but I’ll presume you have that.
Then? Sure, the frustrating part is to see someone making that choice tell other people that theirs is invalid, especially when we don’t know what the scene will look like when the dust settles.
There’s no reason to think there wouldn’t be room for “pure code” folks. I use the camera comparison; I fully recognize it doesn’t map onto this in all respects. But the idea that painters should have given up paint?
There were in fact people at the time who said, “Painting is dead!” Gustave Flaubert, famous author, said painting was obsolete. Paul Delaroche actually said it was dead. Idiots. Amazingly talented and accomplished, but short-sighted, idiots. We’ll likely be laughing at some amazing and talented people making such statements about code today in the same light.
Code as art? Well, two things: 1) LLMs have tremendous difficulty parsing very dense syntax, and then addressing the different pieces and branching ideas. Even now. I’m guessing this transfers to code that must be compact, embedded, and optimized to a precision for which sufficient training data, generalizable to the task across all the different architectures of microcontrollers and embedded systems, just isn’t there… not yet. My recommendation to coders who want to look for areas where AI will be unsuitable? There’s plenty of room at the bottom. My career has never taken me there, but the most fun I’ve had coding has been homebrew microcontrollers.
2) Code as art. Not code to produce art, nor something separable from the code that created it. Think minor things from the past like the obfuscated C challenges. Much of that older hacker ethos is fundamentally an artistic mindset. Art has a business model; some enterprising person ought to crack the code of coding code into a recognized art form where aesthetic is the utility.
I don’t even mean the visual code, but that is viable: don’t many coders enjoy the visual aesthetic of source code, neatly formatted, colored to perfect contrasts between types, etc.? I doubt that’s the limit of what could be visually interesting, something that still runs. Small audience for it, sure; same with most art.
Doesn’t matter, I doubt that will be something masses of coders turn to, but my point is simply that there are options that involve continuing the “craft” aspects you enjoy, whether my napkin doodle of an idea above holds or not. The option, for many, may simply not include keeping the current trajectory of their career. Things change: not many professional coders who began at 20 in 1990 have been able, or willing, to stay in the narrow area they began in. I knew some as a kid that I still know, and some managed to stay on that same path. One of them is a true craftsman at COBOL. When I was a bit older, in one of my first jobs, he helped me learn my way around a legacy VMS cluster. Such things persist, reduced in proportion to the rest is all. But that is an aspect of what’s happening today.
My endgame is not to be beholden to any given corporation's sense of value (because it is rarely in the engineering), so I don't personally care what happens at large. I'll still enjoy the "craft" on my own and figure out the lines where I need to take a disciplined stance and grind it out myself, where I take on a dependency, or where I leave the work to a black box.
But if the time comes for collaboration, then we'll work as a team. AKA we'll decide those lines and likely compromise on values to create something larger than all of us. I doubt my line will ever be "let's just vibecode everything". But it's likely not going to be "use zero AI" unless I have a very disciplined team at hand and no financial stress between any of us.
Maybe we're observing different parts of the elephant. This is my industry right now: https://www.pcgamer.com/games/call-of-duty/call-of-duty-blac...
A deca-billion-dollar franchise now owned by a trillion-dollar tech company... using it to make art that wouldn't pass a junior interview. It's no surprise there's such a strong rejection by the community that's paying attention. Cheaping out on a product a consumer pays $70 + a bunch of microtransactions for clearly shows the company's priorities.
Maybe there are spaces where you find success, but it's very clear that the water is muddying at large. You don't argue against a swamp by saying "but my corner here is clean!".
"progress" is doing a lot of work here. Progress in what sense, and for whom? The jury is still out on whether LLMs even increase productivity (which is not the same as progress), and I say this as a user of LLMs.
If this tech were designed in an open way, not put under paywalls, and not used to develop models that are being used to take away people's power, maybe I'd think differently. But right now it's being promoted by the worst of the worst, and nobody is talking about that.
If the thread were about 1) the current problems and approaches in AI alignment, 2) the poorly understood mechanisms of hallucination, 3a) the mindset that doesn't see the conflict when they say "don't anthropomorphize" but runs off to create a Pavlovian playground in post-training, 3b) the mindsets that do much the reverse, and how both of these are dangerous and harmful, or 4) the poorly understood trade-offs of sparse inference optimizations, that would be a different conversation. But it's not, so I hold those in reserve.
Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?
I'm the guy other programmers I know ask for advice.
I think your metaphor might be a little uncharitable :)
For straightforward stuff, they can handle it.
For stuff that isn't straightforward, they've been trained on pattern matching some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.
--
To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)
I have tried it, and it was a humbling experience.
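For calibration, the smallest honest version of that exercise looks something like the sketch below (my own rough attempt, not anything from the linked examples; it assumes an HTML page with a <canvas id="clock" width="200" height="200"> element, and skips numerals, tick marks, and styling):

    // Minimal canvas analog clock sketch (TypeScript).
    const canvas = document.getElementById("clock") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;
    const r = canvas.width / 2;

    // Draw one hand at `angle` radians, measured clockwise from 12 o'clock.
    function hand(angle: number, length: number, width: number) {
      ctx.beginPath();
      ctx.lineWidth = width;
      ctx.lineCap = "round";
      ctx.moveTo(r, r);
      ctx.lineTo(r + length * Math.sin(angle), r - length * Math.cos(angle));
      ctx.stroke();
    }

    function draw() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.beginPath();
      ctx.arc(r, r, r - 2, 0, 2 * Math.PI); // clock face outline
      ctx.stroke();

      const now = new Date();
      const s = now.getSeconds();
      const m = now.getMinutes() + s / 60;   // minute hand drifts with seconds
      const h = (now.getHours() % 12) + m / 60; // hour hand drifts with minutes

      hand((h / 12) * 2 * Math.PI, r * 0.5, 4); // hour hand
      hand((m / 60) * 2 * Math.PI, r * 0.7, 3); // minute hand
      hand((s / 60) * 2 * Math.PI, r * 0.8, 1); // second hand
    }

    draw();
    setInterval(draw, 1000); // redraw once per second

Even at this size you have to get the angle convention, the minute/hour carry-over, and the redraw loop right on the first try, and that's before anyone asks for a nicer face.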
I use Claude Code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously, some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me, and it effectively does a lot of work that would have gotten farmed out to junior engineers.
Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means the AI brain worm got you or it says something about your abilities.
Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems very particular to AI products and nothing else.
My biggest gripe with the hype, as there's so much talk of craftsmanship here, is this: most programmers I've met hate doing code reviews, and a good proportion prefer rewriting to reading and understanding other people's code. Now suddenly everyone is to be a prompter and astute reviewer of a flood of code they didn't write, and now that you have the tool you should be faster, faster, faster, or there's a problem with you.
All this hype and especially the AGI talks want to treat the AI as an engineer itself. Even an assuredly senior engineer above is saying that it's better than them. So I think it's valid to ask "well can it do [thing a senior engineer does on the daily]" if we're suggesting that it can replace an engineer.
All that being said:
There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.
There's also a techbro segment of the population, who are selling snake oil about AGI being imminent, so fire your whole team and hire me in order to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Nevermind the fact that the outcome of such a program would be a smoldering tire fire, they'll be onto the next grift by then.
As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.
Simply put, most industries started moving away from craftsmanship from the late 1700s to the mid 1900s. Craftsmanship does make a few nice things, but it doesn't scale. Mass production led to most people actually having stuff and the general condition of humanity improving greatly.
Software did kind of get a cheat code here though: we can 'craft' software and then endlessly copy it without the restrictions of physical objects. With all that said, software is rarely crafted well anyway. HN has an air about it that software developers are the craftsmen of their gilded age, but most software projects fail terribly and waste huge amounts of money.
I consider myself a craftsman. I craft tools. I also am a manager. I also am a consultant. I am both a subcontractor and I subcontract out.
Above all else I’m a hacker.
I also use LLMs daily and rather enjoy incorporating this new technology into what I consider my craft.
Please stop arrogantly presuming you know what is best for me to think and feel about all of this.
Taking just images, consider AI merely a different image capture mechanism, like the camera is vs. painting. (You could copy/paste many critiques about this sort of AI and just replace it with "camera.") Sure, it's more accessible to a non-professional, in AI's case much more so than cameras were compared to years of learning painting. But there's a world of difference between what most people do in a prompt online and what professionals integrating it into their workflow are doing. Are such things "art"? That's not a productive question, mostly, but there's this: when it occurs, it has every bit as much intention and purpose from a human behind it as the work people complain is lacking; they're just picturing the one-shot prompt process when they complain.
You can certainly outsource "up", in terms of skill. That's just how business works, and life... I called a plumber not so long ago! And almost everyone outsources their health...
Of course you can also get useless intermediaries, which may be more akin to vibe coding. Not entirely without merit, but the human in the loop is providing questionable value. I think this is the exception rather than the norm.
Not respect as a carpenter, but perhaps respect as a businessperson or visionary.
a) Nothing about letting AI do grunt work for you is "not being a craftsman". b) Things are subcontracted all the time. We don't usually disrespect people for that.
Using that line of reasoning I could also argue, "Using libraries isn't craftsmanship; a real craftsman implements all functionality themselves."
We need to find such craftsmen first. A true craftsman should be able to let the code speak for itself. And ideally they'd be able to teach well enough to have others adopt such a workflow, which inevitably includes constraints and methodologies.
That's the thing I don't see enough of in these discussions. We're very afraid to talk about what AI is bad at, as if it's some sort of pampered child we need to keep pleasing. That's not how we attain progress in the craft. Maybe in the stock market, but at that point it's clear what the focus is.
It reduces craftsmanship to unskilled labor.
The design work and thinking happen somewhere else. The operator comes in, punches a clock, and chokes on MDF dust for 8 hours.
This is a GOOD thing.
The unskilled operator's position is also precarious, as you point out, but while it lasts, it's a different and (arguably) less satisfying form of work.
The LLM is not a table saw that makes a carpenter faster, it's an automatic machine that makes an owner's capital more efficient.
At some point people started universally accepting the idea that any sort of gatekeeping was a bad thing. I think by now people are starting to realize that this was a flawed idea. (At best, gatekeeping is not a pure negative; it's situational.) But, despite coming to realize this, I think parts of our culture still maintain this as a default value. "If more people can code, that's a _good_ thing!" Are we 100% sure that's true? Are there _no_ downsides? Even if it's a net positive, we should be able to have some discussion about the downsides as well.
Web 1.0 was full of weirdos doing cool weird stuff for the pure joy of discovery. That's the ethos we need back, and it's not incompatible with AI. The wrong turn we took was letting business overtake joy. That's a decision we can undo today by opting out of that whole ecosystem.
True. And to some extent, I've seen more 'useless but fun' projects in the last year because they can be done in an afternoon rather than a week. We need more of that.
Not to mention 20 years ago I personally (and probably others my age) had much more time to care about random weird stuff.
So, I am skeptical without some actual analysis or numbers that things really are so bad.
If you want a space where weird hacker values and doing stuff for the pure joy of discovery reign, gatekeep harder.
Ah yes, we'll also skip out on eating too.
We do not need to do things no one needs. We do not need a million different webshops, and the next CRUD application.
We need a system which allows the earth's resources to be used as efficiently and fairly as possible.
Then we can again start appreciating real craftsmanship, not for critical things and not because we need to feed ourselves, but because we want to do it.
Yes, the current system seems flawed, but it is the best we have come up with, and it is not fixed either; it is slowly evolving.
Yes, some resources are finite (energy from the sun seems quite plentiful though), but I don't think we will ever be able to define "fair". I would be glad with "do not destroy something completely and irremediably".
To what goals? Who gets to decide what is fair?
Who is "we"? And how do we decide?
The thing about capitalism is that an unnecessary webshop isn't getting any customers if it's truly unnecessary, and will soon be out of business. We can appreciate Ghostty, because why? Because the guy writing it is independently wealthy and can fly jets around for fun, and has deigned to grace us with his coding gifts once again? Don't get me wrong, it's a nice piece of software, but I don't know that that system's any better.
Also competition is a core driver for cost reduction and progress in capitalism.
And from a big-picture POV: there is only one Amazon, one Alibaba, etc.
Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive shallow pro/anti takes on AI are way more extreme and are there even in otherwise serious people. And in general anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere and curiosity seems to be dead or dying.
You can tell it's definitely tangled up with money though, and this remains a good filter for real curiosity. Math that isn't somehow related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat to.
If anything, the AI takes are much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who is either all-in on AI or all-out on it can.
Language-preference wars stick around until mid-career for some, and again it predicts something. But still, serious people are not likely to get bogged down in pointless arguments about nearly equivalent alternatives at least (yaml vs json; python vs ruby).
Shallow takes on AI (whether they are pro or anti) are definitely higher stakes than all this; bad decisions could be more lasting and more damaging. But the real difference, to my mind, is this: AI "influencers" (again, pro or anti) are a very real thing in a way that doesn't happen with OS / language discussions. People listen; they want confirmation of their biases.
I mean, there are always advocates and pundits doing motivated reasoning, but usually it's corporations or individuals with clear vested interests that are trying to short-circuit inquiry and critical thinking. It's new that so many would-be practitioners in the field are eager to sabotage and colonize themselves, forcing a situation where honest evaluation and merit-based discussion of engineering realities are impossible.
But this is philosophy (and ethics/morality)
My feelings about AI, about its impact on every aspect of our lives, on the value of human existence and the purpose of the creative process, have less to do with what AI is capable of and more to do with the massive failures of ethics and morality that surround every aspect of its introduction and the people who are involved.
Humans will survive. Humanity is on the ropes.
Eh, I mean here's one about the Ulam spiral that did pretty well: https://news.ycombinator.com/item?id=2047857
The fast inverse sqrt that John Carmack did not write also does well. I know there are many more. Are you sure that's not just a caricature of Hacker News you've built up in your head?
> we need to create a culture that values craftsmanship and dignifies work done by developers.
I don't think that is at ALL at odds with using AI as a coding assistant! I am not going to tell you I am a coding god, but I have been doing this for nearly 30 years and I feel I'm a pretty competent craftsman.
AI has helped me to be a better craftsman. The big picture ideas are mine, but AI has helped immensely with some details.
I actually disagree with this pretty fundamentally. I've never seen hacker culture as defined by "craftsmanship" so much as about getting things done. When I think of our culture historically, it's cleverness, quick thinking, building out quick and dirty prototypes in weekend "hackathons", startup culture that cuts corners to get an MVP product out there. I mean, look at your URL bar: do you think YC companies are prioritizing artisanal lines of code?
We didn't trade craftsmanship for "Business Impact". The latter just aligns well with our culture of Getting Shit Done. Whether it's for play (look at the jank folks bring out to the playa that's "good enough") or business, the ethos is the same.
If anything, I feel like there has been more of an attempt to erase/sideline our actual culture by folks like y'all as a backlash against AI. But frankly, while a lot of us scruffy hacker types might have some concerns about AI, we also see a valuable tool that helps us move faster sometimes. And if there's a good tool that gets a thing done in a way that I deem satisfactory, I'm not going to let someone's political treatise get in my way. I'm busy building.
Hmm. No. Not really. I don't think "hacker" ever much meant this at all; mostly because "hacker" never actually was much connected to "labor for money."
"Going to work" and "being a hacker" were overwhelmingly mutually exclusive. Hacking was what you don't do on company time (in favor of the company.)
Relevant article: https://meaningness.com/geeks-mops-sociopaths
Except that's unproven. It might make you more productive, but whether you get any of that new value is untested.
I do not vibe code my core architecture because I control it and know it very well. I vibe code some web UI I don't care about, or a hobby idea, in 1-4h on a weekend because otherwise it would take me 2 full weekends.
I fix emails, I get feedback, etc.
When I do experiments with vibe coding, I'm very aware of what I'm doing.
Nonetheless, it's 2025. In 2026 alone we will add so much more compute, and the progress we see is just crazy fast. In a few months there will be the next version of Claude, GPT, Gemini and co.
And this progress will not stop tomorrow. We don't know yet how fast it will progress and when it will suddenly be a lot better than we are.
Additionally, you do need to learn how to use these tools. I learned through vibe coding that I have to specify things I just assumed the smart LLM would get right without me telling it, for example.
Now I'm thinking about doing an experiment where I record everything about a small project I want to do, then transcribe it into text, feed it into an LLM to structure it, and then have it build me that thing. I could walk around outside with a headset to do so, and it would be a fun experiment to see how it would feel.
I can imagine myself having some non-intrusive AR goggles, and the AI sometimes shows me results and I basically just give feedback.
I’ve done literally dozens of short-term, quick-turnaround POCs, doing the full stack from an empty AWS account to “DevOps” to the software development -> training customers how to fish and showing them the concepts -> moving on to the next project, between working at AWS ProServe and now a third-party consulting company. I’m familiar with the level of effort for these types of projects. I know how many fewer man-hours it takes me now.
I have avoided front end work for well over a decade. I had to modify the front end part of the project we released to the customer, which another developer did, to remove all of the company-specific stuff and make it generic so I could put it in our internal repo. I didn’t touch one line of front end code to make the decently extensive modifications; honestly, I didn’t even look at the front end changes. I just made sure it worked as expected.
But how much has your hourly rate risen?
When I did do one short term project independently, I gave them the amount I was going to charge for the project based on the requirements.
All consulting companies - including the division at AWS - always eventually expand to the staff augmentation model where you assign warm bodies and the client assigns the work. I have always refused to touch that kind of work with a ten foot pole.
All of my consulting work has been full time and salaried, either for the consulting division of AWS, where I got the same structured 4-year base + RSUs as every other employee, or now making the same amount (with a lot less stress and better benefits) in cash.
I’m working much less now than I ever have in my life partially because I’m getting paid for my expertise and not for how much code I can pump out.
If you see what it takes to get ahead in large corporations, it’s not about those who are “passionate”, it’s about people who know how to play the game.
If you look at the dumb AI companies that YC is funding, those “entrepreneurs” aren’t doing 996 because they enjoy it. They are looking for the big exit.
https://docs.google.com/spreadsheets/d/1Uy2aWoeRZopMIaXXxY2E...
How many of them do you think started their companies out of “passion”?
Some of the ones I spot-checked had a couple of non-technical founders looking for a “founding engineer” that they could underpay with the promise of “equity” that would probably be worthless.
Suffice it to say, maybe now that DEI is gone, actual good-faith efforts to recognize the hacker ethos in disparate groups, and to bring those individuals into the culture, could take place. The corporate-ization of hacking couldn't have taken place (and could be undone) with an injection of some counterculture. (The post you replied to got flagged to death. That's gotta count for some Punk Points.)
I'm tempted to say "You're not helping," as my eyes roll back in their sockets far enough to hurt. But I can also understand how threatening LLMs must appear to programmers, writers, and artists who aren't very good at their jobs.
What I don't get is why I should care.
Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
This shit is real. We are slowly sliding into a world where every aspect of our lives is going to be dictated by people in power with tools that can shape the future by manipulating what people think about.
If you don't care that the world is burning to the ground, good luck with that. I'm not saying the tech is necessarily bad; it's the way in which we are allowing it to be used. There have to be controls in place to steer this tech in the right direction or we are heading for a world I don't want to be a part of.
This just in: 90% of everything is crap. AI does not, cannot, and will not change that.
> Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
Can't say as I have.
The only authoritarians in this thread are the ones telling us what we should and should not be allowed to do with AI.
The attitude and push back from this loud minority has always been weird to me. Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
The opportunity to outsource even more of the 'busywork' is great. Isn't this what technology is supposed to do? Automate away the boring stuff?
The only reasoning I can think of is that the most vocal opponents work in careers where that same busywork is actually most of their job, and so they are naturally worried about their future.
I absolutely agree with you, but I do think there's a difference in kind between a deterministic automation you can learn to use and get better at, and a semi-random coding agent.
The thing I'm really struggling with is that unlike e.g. code completion, there doesn't seem to be a clear class of tasks that LLMs are good at vs bad at. So until the LLMs can do everything, how do I keep myself in the loop enough that I'll have the requisite knowledge to step in when the LLM fails?
You mention how technology means we no longer have to remember phone numbers. But what if all digital contact lists had a very low chance of randomly deleting individual contacts over time? Do you keep memorizing phone numbers? I'm not sure!
FYI: I do not work for any corporations; I provide technical services directly to the public. So there really are concerns about this tech among everyday people who do not have a stake in keeping a job.
For you, what are “the interesting parts”, and why do you believe in principle a machine won’t do those parts better than you?
The arts is a good example. I still enjoy analog photography & darkroom techniques. Digital can (arguably) do it better, faster, and cheaper. Doesn't change the hobby for me.
But, at least the option is there. Should I need to shoot a wedding, or some family photos for pay, I don't bust out my 35mm range finder and shoot film. I bring my R6, and send the photos through ImagenAI to edit.
In that way, the interesting parts are whatever I feel like doing myself, for my own personal enjoyment.
Just the other day I used AI to help me make a macOS utility to have a live wallpaper from an mp4. Didn't feel like paying for any of the existing "live wallpaper" apps. Probably a side project I would never have done otherwise. Almost one shot it outside of a use-after-free bug I had to fix myself, which ended up being quite enjoyable. In that instance, the interesting part was in the finding a problem and fixing it, while I got to outsource 90% of the rest of the work.
I'm rambling now, but the TL;DR is I'm more so excited about having the option to outsource portions of something rather than always outsourcing. Sometimes all you need is a cheap piece of mass produced crap, and other times you want to spend more money (or more time) making it yourself, or buying handmade from an expert craftsman.
I'm middle-aged. 30 years ago, hacker culture as I experienced it was about making cool stuff. It was also about the identity -- hackers were geeks. Intelligent, and a little (or a lot) different from the rest of society.
Generally speaking, hackers could not avoid writing code. Whether it was shell scripts or HTML or Javascript or full-blown 3D graphics engines. To a large extent, coding became the distinguishing feature of "hackers" in terms of identity.
Nearly anybody could install Linux or build a PC, but writing nontrivial code took a much larger level of commitment.
There are legitimate functional and ethical concerns about AI. But I think a lot of "hackers" are in HUGE amounts of denial about how much of their opposition to AI springs from having their identities threatened.
I think there's definitely some truth to this. I saw similar pushback from the "learn to code" and coding bootcamp era, and you still frequently see it in Linux communities where anytime the prospect of more "normies" using Linux comes up, a not insignificant part of the community is actively hostile to that happening.
The attitude goes all the way back to Eternal September.
We've survived and thrived through inflection points like this before, though. So I'm doing my best to have an adapt-or-die mindset.
"computers are taking away human jobs"
"visual basic will eliminate the need for 'real coders'"
"nobody will think any more. they'll 'just google it' instead of actually understanding things"
"SQL is human readable. it's going to reduce the need for engineers" (before my time, admittedly)
"offshoring will larely eliminate US-based software development"
etc.
Ultimately (with the partial exception of offshoring) these became productivity-enhancers that increased the expectations placed on the shoulders of engineers and expanded the profession, not things that replaced the profession. Admittedly, AI feels like our biggest challenge yet. Maybe.
I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.
For-profit products are for-profit products that are required to compensate if they are derivative of other works (in this case, there would be no AI product without the upstream training data, which checks the box that it's derivative).
If you would like to change the laws, ok. But simply breaking them and saying 'but the machine is like a person' is still... just breaking the laws and stealing.
And the pedantry matters only because the entities criming are too big and rich and financed by the right people.
It is basically a display of the societal threshold beyond which laws are not enforced.
Usually "obtaining" is just making a bunch of HTTP requests - which is kind of how the web is designed to work. The "consent" (and perhaps "desired payment" when there is no paywall) issue is the important bit and ultimately boils down to the use case. Is it a human viewing the page, a search engine updating its index, or OpenAI collecting data for training? It is annoying when things like robots.txt are simply ignored, even if they are not legally or technically binding.
The legal situation is unsurprisingly murky at the moment. Copyright law was designed for a different use case, and might not be the right tool or regulatory framework to address GenAI.
But as I think you are suggesting, it may be an example of regulatory entrepreneurship, where (AI) companies try to move forward quickly before laws and regulations catch up with them, while simultaneously trying to influence new laws and regulations in their favor.
[Copyright law itself also has many peculiarities, for example not applying to recipes, game rules, or fashion designs (hence fast fashion, knockoffs, etc.) Does it, or should it, apply to AI training and GenAI services? Time will tell.]
Richard Stallman has his email printed out on paper for him to read, and he only connects to the internet by using wget to fetch web pages and then has them printed off.
I understand how LLMs may improve the situation for the employer; personally, or with peers: no.
Any person who posts a sufficiently long text online will be mistaken for an AI.
It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.
For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting in effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?
By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?
Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.
We are angry because we grew up in an age where content was generated by humans and computer bots were inefficient. However, for newer generations, AI-generated content will be the new normal, like how we accept seeing people inside a big flat box (TV).
But hacker culture always sought to empower an individual (especially a smart, tech-savvy individual) against corporations, and rejection of gen AI seems reasonable in this light.
If hacker culture wasn't luddite, it's because of the widespread belief that the new digital technology does empower the individual. It's very hard to believe the same about LLMs, unless your salary depends on it.
The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However they fill me with a feeling of supreme disappointment, because each is just this big black box we shoved an unreasonable amount of data into and now the black box is the best image processing/natural language processing system we've ever made, and depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
It's been quite good reading these comments because a lot of them have put into words my own largely negative feelings about the ubiquitous AI hype, which I have found hard to articulate: your second paragraph; someone else's comment about how they are attracted to computer science because they like fiddly detail and so are uninterested in a machine hiding all that; and a third comment about how so-called "busy work" is actually a good way of padding out the difficult stuff, so a job of work becomes much less palatable when it is excised entirely.
The other thing I find deeply depressing is the degree to which people are thrilled (genuinely) by dreadful-looking AI art and unbearable-to-read AI prose. Makes me think I've been kidding myself for years that people by and large have a degree of taste. Then again, maybe it just means it's not to my taste...
Yeah. This cruel joke even has a name: The Bitter Lesson.
https://en.wikipedia.org/wiki/Bitter_lesson
But think about it: if digital painting were solved not by a machine learning model but by human-readable code, it would be an even more bleak and cruel joke, wouldn't it?
On the contrary, I'm certain such a program would be filled with fascinating techniques, and I have no dread for the idea that humans aren't special.
"The lesson is considered "bitter" because it is less anthropocentric than many researchers expected and so they have been slow to accept it."
I mean, we are so many people on the planet; it's easy to feel useless when you know you can get replaced by millions of other humans. How is that different from being replaced by a computer?
I was not sure how AGI would come to us, but I assumed there would be AGI in the future.
Weirdest thing for me is mathematics and physics: I assumed that would be such an easy field to find something 'new' in through brute force alone; I'm more shocked that this is only happening now.
I realized with DeepMind and AlphaFold that the smartest people with the best tools are in industry, and specifically in the IT industry, because they are a lot better at using tools to help them than normal researchers who struggle writing code.
And the dangerous part is that we are so hasty to remove that "busy work" that we fail to make sure it's done right. That willful ignorance seems counter to hacker culture which should encourage curiosity and a deeper understanding.
>It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.
In my experience, it is AI-generated more often than not. And yes, it is worth calling out. If you can't engage with the public, why do you expect them to engage with you?
It's like being mad about being passed over in the current round of a game, all while you clearly have a phone to your ear.
I have yet to see issues caused by restraint.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
As Mr. Miyagi said: "Wax on. Wax off."
This may turn out very profitable for the pre-AI generations, as the junior to senior pipeline won't churn seniors at the same rate. But following generations are probably on their way to digital serfdom if we don't act.
I've seen this same thing said about Google: "If you outsource your memory to Google searching instead, you won't be able to do anything without it, and you'll become dumber."
Maybe that did happen, but it didn't seem to result in any meaningful change on the whole. Instead, I got to waste less time memorizing things, or spending time leafing through thousand page reference manuals, to find something.
We've been outsourcing parts of our brains to computers for decades now. That's what got me interested and curious about computers when I got my first machine as a kid (this was back in the late 90s/early 00s). "How can I automate as much of the boring stuff as possible to free myself up for more interesting things."
LLMs are the next evolution of that to an extent, but I also think they do come with some harms and that we haven't really figured out best practices yet. But, I can't help but be excited at the prospect of being able to outsource even more to a computer.
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
I, for one, am glad we have technologies -- like writing, the internet, Google, and LLMs -- that let us expand the limits of what our minds can do.
[1] https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext...
That doesn't seem exactly likely.
Culture is emergent. The more you try to define it, the less it becomes culture and the more it becomes a cult. Instead of focusing on culture I prefer to focus on values. I value craftsmanship, so I'm inclined to appreciate normal coding more than AI-assisted coding, for sure. But there's also a craftsmanship to gluing a bunch of AI technologies together and observing some fantastic output. To willfully ignore that is silly.
The OP's rant comes across as a wistful pining for the days of yore, pinning its demise on capitalists and fascists, as if they had this AI thing planned all along. Focusing on boogeymen isn't going to solve anything. You also can't reverse time by demanding compliance with your values or forming a union. AI is here to stay and we're going to have to figure out how to live with it, like it or not.
Breathless hustlecore tech industry culture is a place where finance bros have turned programmers into dogs that brag to one another about what a good dog they are. We should reject at every turn the idea that such a culture represents the totality of programming. Programming is so much more than that.
It's a message that's actually pretty relevant in an age of AI slop.
I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.
... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.
... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.
Bypassing arbitrary (useless, silly, meaningless, etc.) rules has always been a primary motivating factor for some of us :D
Progressiveness is forward-looking and a proponent of rapid change, so it is natural that LLMs are popular amongst that crowd. Also, progressivism should accept and encourage the evolution of concepts and social constructs.
In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.
>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves
To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The latter half of the piece falls into a diatribe of "Capitalism Bad".
And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
It's not hard to see why that mentality exists though. That same capitalism also gave rise to the behemoth, abusive monopolies we have today. It gave rise to the over financialization of the sector and declining product quality because you get richer doing stock buybacks and rent-seeking instead of making a better product.
Early hacker culture was also very much not pro-capitalism. The core principle of "Information should be free" itself is a statement against artificial scarcity and anti-proprietary systems, directly opposed to the capitalist ethos of locking up knowledge for profit. The FOSS we use and love rose directly from this culture, which is fundamentally communal, not capitalist.
Capitalism didn't build the internet: public spending did.
I'm not ignorant of the fact that it helped us for quite a long time, but it also created climate change. Overpopulation.
We are still stuck on planet earth, have not figured out the reason for life or the origin of the universe.
I would prefer a world where we think about using all the resources earth provides sustainably, and how to use them in the most efficient way for the maximum number of human beings. The rest we would use to advance society.
I would like to have Post-Scarcity Scientific Humanism
I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to fascists, etc. It's when we get to the AI issue that I do a bit of a double take.
Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.
I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.
We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.
Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
He's pigeonholed at the same low pay rate and can't ever get a raise, until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when they look elsewhere, they are underskilled (and stuck).
You mention 'deepseek'. Are you joking? It's owned by the Chinese government..and you claim to hate fascism? Lol?
Big companies only have the power now, because the processing power to run LLMs is expensive. Once there are break throughs, anyone can have the same power in their house.
We have been in a tech slump for a while now. Large companies will drive innovations for AI that will help everyone.
Deepseek is open source, which is why I mention it. It was made by the Chinese government but it shows a way to create these models at vastly reduced cost and was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek", I am saying "there are lessons to be learned from Deepseek".
I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.
They all work the same way. I'm fundamentally against the idea of unions after seeing how they stifle innovation in nearly all industries they control.
Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.
Please do not co-opt the term "hacker".
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism when people with an ethos of empowering the individual with knowledge resist these forces.
The problem is the vast masses falling under Turing's Law:
"Any person who posts a sufficiently long text online will be mistaken for an AI."
Not usually in good faith however.
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comment chains is so easily overwhelmed by the speed at which people put out AI slop.
How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way; I think people need to prove that the media was produced in good faith somehow before it can be reasonably analyzed in good faith.
It’s the same problem with 9000 slop PRs submitted for code review
Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation for why it didn't look like AI slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weigh way heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self-fulfilling prophecy, however: keep it up and soon everything really will be written by AI.
A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.
Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.
Yet when I tell people they preferred Windows and not Linux, you can tell who's more focused on ideological purity than actual achievements, because eww, Windows.
> Please do not co-opt the term "hacker".
Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.
In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.
Same for me. A hacker would "hack out" some tool in a few crazy caffeine-fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6-man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it was left up to the normies to figure out.
It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.
Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.
Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.
In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.
The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.
I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.
Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.
Ah yes, true hackers would never, say, build a Debian package...
Managing complexity has always been part of the game. To a very large extent it is the game.
Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.
> Please do not co-opt the term "hacker".
Indeed, please don't. And leave my true scotsman alone while we're at it!
People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.
That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.
The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.
Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.
In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.
I expect this to keep happening with any new tech that has the misfortune to get significant hype.
But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that the reverse is rare.
Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.
But they'd prefer if it was shorts.
Exactly, and I'm sure it was our naivete to think otherwise. As software became more common, it grew, regulations came in, corporate greed took over and "normies" started to use it.
As a result, now everything is filled with subscriptions, ads, cookie banners and junk.
Let's also not kid ourselves: an entire generation of "bootcamp" devs joined the industry in the quest of making money. This group never shared any particular interest in technology, software or hardware.
Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherent interest in tech for the sake of tech. Ads came after this since they needed an audience first.
Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.
It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was an easier route to the easy life than being an investment banker, all hope was lost.
To use the two examples I gave in this thread: digital music is more accessible than ever before and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible to and embraced by the masses for a decade now, and those spaces are thriving!
* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.
This is why I'm finding most of this discussion very odd.
I'm sure that there are some examples who enjoy it for the interface. I think a CRT terminal/emulator is peak aesthetic. And a few who aren't willing to invest the time to use a GUI over a terminal, because they learned the terminal first.
Calling either group a luddite is stupid, but if I were forced to defend one side: given most people start with a GUI because it's so much easier, I'd rather make the argument that those who never progress onto the faster, more powerful options deserve the insult of luddite.
Is this an actually serious/honest take of yours?
I've been using vim for 20 years and, while I've spent almost no time with VS Code, I'd say that a lot of JetBrains' IDEs' built in features have definitely made me faster than I ever was with vim.
Oh wait. No true vim user would come to this conclusion, right?
My larger point was it's absurd to say someone who's faster using [interface] is a luddite because they don't use [other interface] with nearly identical features.
> Oh wait. No true vim user would come to this conclusion, right?
I guess that's a fitting insult, given I started with a strawman example too.
edit: I can offer another equally absurd example (and why I say it's only slightly hyperbolic is because the following is true): I can write code much faster using vim than I can with [IDE]. I don't even use tab complete, or anything similar either. I, personally, am able to write better code, faster, when there's nothing but colored text to distract me. Does that make me a luddite? I've tried both, and this fits better for me. Or is it just how comfortable you are with a given interface? Because I know most people can find tab complete useful.
Okay. That, I agree with.
Well, LLMs don't fix that problem.
(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)
I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.
> These people bring way more toxicity to daily life than who they wage their campaigns against.
I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.
I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.
What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).
And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
If anything, it seems like Ballmer's plea of "Developers, developers, developers" has come true, and if there will be one profession left in 100 years when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.
What exactly is being devalued for a profession that seems to be continuously growing and has been doing so for at least 20 years?
The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable but held up for almost a decade because money was cheap. Now money is held much more tightly, and we stumbled into a tech that can cheaply regurgitate a lot of the trivial, inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.
You and I may not be affected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper-middle-class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they legitimately earned.
Now AI makes it unbelievably easy to make those simple but bespoke software packages. The business owner can boot up Lovable and get something that is good enough. The non-software folk generally aren't scrutinizing the software they use. It doesn't matter if the backend is spaghetti code or if there are bugs here and there. If it works well enough then they're happy.
In my opinion that's the unfortunate truth of AI software development. It's dirt cheap, fast, and good enough for most people. Computers couldn't write software before and now they can. Obviously that is real devaluation, right?
So far, the tools help many programmers write simple code more quickly.
For technically adept professionals who are not programmers, though, we still haven't seen anything really break through the ceiling consistently encountered by previous low-code/no-code tools like FoxPro, Access, Excel, VBA, IFTTT, Zapier, Salesforce etc.
The LLM-based tools for this market work differently than the comparable tools that preceded them over the last 40 years, in that they have a much richer vocabulary of output, but the ceiling that all of these tools encountered in the past has been a human one: most non-programmers don't know how to describe what they need with sufficient detail for anything much beyond a fragile, narrow toy.
Maybe GPT-8 or Gemini 6 or whatever will somehow finally be able to shatter this ceiling, and somebody will finally make a no-code software builder that devours the market for custom/domain software. But that hasn't happened yet, and it's at least as easy to be skeptical as it is to be convinced.
I was working freelance through late 2023 - mid 2025 and the shift seemed quite obvious to me. Other freelancers, agency managers, etc that I talked to could see it too. The volume of clients, and their expectations, is changing very rapidly in that space.
It isn't devaluation. It's good - it freed a lot of people to work on more ambitious things.
Neither has been true for a really long time.
You're probably fine as a more senior dev...for now.
But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
Plus, as an industry we've been exploiting our employers' lack of information to extract large salaries to produce largely poor quality outputs imo. And as that ignorance moat gets smaller, this becomes harder to pull off.
This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.
Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.
LLMs can not do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.
On the other hand, when I was staffed to lead a project that did have another senior developer one level below me, I tried to split up the actual work, but it became such a coordination nightmare once we started refining the project, because he could just use Claude Code and it would make all of the modifications needed for a feature, from the front end work to the backend APIs to the Terraform and the deployment scripts.
I would have actually slowed him down.
Curious if you gave Antigravity a try yet? It auto-launches a browser and you can watch it move the mouse and click around. It's able to review what it sees and iterate or report success according to your specs. It takes screen recordings and saves them as an artifact for you to verify.
I only tried some simple things with it so far but it worked well.
I'd rather babysit a junior dev and give them some work to do until they can stand on their own than babysit an LLM indefinitely. That just sounds like more work for me.
I don't want it integrated into my IDE; I'd rather just give it the information it needs to get me my result. But yeah, just another Google or Stack Overflow.
But here I am now. After filling in for lazy architects above me for 20 years, while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the way, guess what, I can magically do it myself now. The LLM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring junior dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.
Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand new bleeding-edge technology right now, and adoption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.
And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.
And where will it be in 5-10 years?
Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".
Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.
If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.
One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.
The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.
The bulk of that capex is chips, and those chips are straight up depreciating assets.
How do you mean committed?
What are you talking about? You seem to live in a parallel universe. Every single time I or one of my colleagues tried this, the task failed tremendously hard.
This sounds kind of logical, but really isn't.
In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and learn from the experience as well. Sure, there'll likely be some interaction between the junior dev and mentor, and this is part of the learning process - something DESIRABLE since it leads to the developer getting better.
In contrast, you really can't "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With today's "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.
Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a groundhog day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future be an independent developer that tasks can indeed be assigned to.
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.
Maybe you haven't experienced it but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, and they take it and convert it into code. It's more like "code entry" ("data entry", but with code).
The person assigning tasks to them is doing the thinking. And they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human, well then that's who they'll assign it to. As you can see in this thread, many are already doing this.
Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.
I agree in the sense that those of us who work in for-profit businesses have benefited from employers' willingness to spend on dev budgets (salaries included) without having to spend their own _time_ becoming increasingly involved in the work. As "AI" develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (i.e. out of the class of educated programmers to, I dunno, philosophy majors) then you're in trouble.
Handholding the human pays off in the long run more than handholding the LLM, which requires more handholding anyway.
Claude doesn't get better as I explain concepts to it the same way a jr engineer does.
Sure - LLMs will do what they're told (to a specific value of "do" and "what they're told")
I use LLMs to build isolated components and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally, because of the immediate feedback loop on the specs, I can try first with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.
You're (unwittingly?) making an argument for using an LLM: you know what you're going to get. It does not take six months to evaluate one; six minutes suffice.
Those concepts will be in your repository long after that junior dev jumps ship because your company refused to pay him at market rates as he improved - "salary compression" is real and often out of your manager's control.
Instead, it's them that benefit the most from using them.
It's only management that believes otherwise. Because of deceitful marketing from a few big corporations.
Not where I live, though. Competition is fierce, both in industry and academia, with most posts saturated and most employees facing "HR optimization" in their late 30s. Not to mention working overtime, and its physical consequences.
I mean, not anywhere, and the data absolutely annihilates their ridiculous claims. In subsequent posts they've retreated back to "yeah, but someone somewhere has it worse", invalidating this whole absurd thread.
Their comment has little correlation with reality, and seems to be a contrived, self-comforting fiction. Most firms have implemented hiring freezes if not actively downsizing their dev staff. Many extremely experienced devs are finding the market absolutely atrocious, getting zero bites.
And for all of the "well us senior devs are safe" sentiment often seen on here, many shops seem to be more comfortable hiring cheap and eager junior devs and foregoing seniors because LLMs fill in a lot of the "grizzled wisdom". The junior to senior ratio is rapidly increasing, and devs who lived on golden handshakes are suddenly finding their ego bruised and a market where they're fighting for low-pay jobs.
Or you know, we live and experience different parts of the world? Where you are, you might be right, and where I am, I might be right.
But nuance tends to be harder than trying to find some absolute truth and finding out it doesn't match your preconceived notion about the whole world.
Even if the competition is fierce, do you think it isn't for other professions, or what's the point? Of course a job that is well-paid, has few drawbacks and let you sit indoors in front of computer, probably doing something you enjoy in general, is popular and has competition.
Are those people's lives getting better because the capital class is able to devalue more skilled jobs every year?
I get your viewpoint though, physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now it's the default.
No, and after my first programming job, neither does it happen in development. Make sure you join the right place, have the right boss, and set expectations up front, and you too can surely avoid it if it's important to you :) Usually you can throw in "work/life balance" somehow to gauge how they feel about it.
And yes, plenty of blue collar people are expected to be available during their personal time, for various reasons. Sometimes just quick questions (especially if you're a manager and you're having time off), sometimes emergencies that require you to head on over to the place. Ask anyone who owned or even just managed a restaurant about that specific thing, and maybe you'll be surprised.
Programming has been devalued because more people can do it at a basic level with LLM tooling. People that I do not consider smart enough, or to have put in enough work, are now outputting things they don't really understand themselves.
It is of course the new reality, and now we all have to go find new markers/things to judge people's output by. That's the devaluation of the craft itself.
For what it's worth, this devaluation has happened many times in this field. ASM, compilers, managed GC languages, the cloud: abstractions have continually opened up the field to people the old-timers consider unworthy.
LLMs are a unique twist on that standard pattern.
But just because more people can do something doesn't mean it's devalued, or am I misunderstanding the word? The value of programs remains the same, regardless of who composes them. The availability of computers, the internet and the web seems to have had the opposite effect so far, making entire industries much more valued than they were in the decades before.
Neither do I see ASM, compilers, and all your other examples as devaluing; if anything it seems like the industry is "nichifying", which requires more experts, not fewer. The more abstractions we have in reality, the more experts are needed to handle those things.
Until Thanksgiving.
Our industry is being disrupted by AI. What industry in history has not been disrupted by technological progression? It's called life. And those that can adapt to life changing will continue to thrive. And those who can't will get left behind. There is no wholesale turkey slaughter.
Further:
> Our industry is being disrupted by AI... No wholesale turkey slaughter.
Is an entirely different position from the GP's, who is essentially betting on AI producing more jobs for hackers, which surely won't be so simple.
I'm not clear on the point you're trying to make. My comment was in response to dugidugout's analogy.
If I understand their analogy correctly, developers are the well fed turkeys and one Thanksgiving day, we're all getting slaughtered.
That is not hyperbole and fear mongering to you?
We share understanding of their analogy, but differ in the inferred application. I took it as the well fed turkeys are "developers who deny AI will disrupt their industry", not "developers" as a whole.
What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?
It's not to help make developers more efficient with code assistants.
Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.
If you can't see this coming you lack both imagination and historical perspective.
Five years ago Claude Code would have been essentially unimaginable. Consider this.
So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.
Economic waves never hit one sector and stop. The waves continue across the entire economy. You can't think "companies will get rid of huge amounts of labor" and then stop asking questions. You need to then ask "what will companies do with decreased labor costs?" and "what could that investment look like, and who will they need to hire to fulfill it?" and then "what will those workers do after their demand increases?" And so on.
Unless they do, or are severely weakened. Consider the net worth of the 1% over the last few decades. Even corrected for inflation, its growth is staggering. The wealth gap is widening, and that wealth came from somewhere.
So yes, when there is an economic boom, investment happens. However, the growth of that top 1% tells me that they've been taking more and more off the top. Sure, some near the bottom may win with the decreased labor costs and whatnot, but my point is that fewer and fewer do every cycle.
Full disclosure: I'm not an economist. Hell, I probably have a highschool-level of econ knowledge at best, so this should probably be taken as a "common-sense" take on it, which I already know often fails spectacularly when economics is at play. So I'm more than open to be corrected here.
Yes, the wealth of the 1% has increased over the decades, but so has investment. The economy still dwarfs what it was decades ago. There are more jobs than there were decades ago.
Hopefully you see my point by now. The "waves" of economic effects objectively didn't stop at the rich.
My own Amazon investment in my pension has also gone up by 10x in the last 10 years, just like Jeff's. Where did the value increase come from?
Is this idea of the stock market good for us? I don't know, but it's paper money until you sell it.
Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance why bother with all the fake work which is the real opium of the masses.
A lot of newly skilled job applicants can't find anything in the job market right now.
There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.
But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia for these last years don't seem to have more troubles than I had when I got started. Yes, it requires work, just like it did before.
If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)
This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.
A lot of non-programmer jobs have a kind of union protection, pension plans and other perks even with health care. That makes a crappy salary and work environment bearable.
There was this VP of HR at an Indian outsourcing firm, and she said something to the effect that software jobs appear as if they pay to the moon, have an employee generate tremendous value for the company, and carry the general appeal that only smart people work these jobs. None of this happens for the majority of people. So after 10-15 years you actually kind of begin to see why someone might want to work a manufacturing job.
Life is long; job guarantees, pensions, etc. matter far more than "move fast and break things" glory as you age.
I enjoy the practice of programming well enough but i do not at all love it as a career. I don't hate it by any means either but it's far from my first choice in terms of career.
I have a mortgage, 3 kids and a wife to support. So no. I don't think I'm going to do that. Also, I like my programming job.
EDIT: Sorry I thought you were saying the opposite. Didn't realize you were the OP of this thread.
Even after the layoffs, most big tech corps still have more employees today than they did in 2020.
The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rate to near-zero and increasing government spending." It's just kicking and snowballing the problem to the next four years.
[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...
Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half the business is privately-owned small businesses, and at the local level forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.
Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.
But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.
That is not unique to programming or tech generally. The overall US job market is kind of shit right now.
I've done a lot of interviews, and inevitably, most of the devs I interview can't pass a trivial interview (like implementing FizzBuzz). The ones who can do a decent job are usually folks we have to compete for.
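(For anyone who hasn't run these screens: a minimal sketch of the classic FizzBuzz exercise, written here in Python, looks roughly like the following; the exact prompt and expected shape vary from interviewer to interviewer.)

    # Minimal FizzBuzz sketch: for 1..n, multiples of 3 become "Fizz",
    # multiples of 5 become "Buzz", multiples of both become "FizzBuzz".
    def fizzbuzz(n: int) -> list[str]:
        out = []
        for i in range(1, n + 1):
            if i % 15 == 0:
                out.append("FizzBuzz")
            elif i % 3 == 0:
                out.append("Fizz")
            elif i % 5 == 0:
                out.append("Buzz")
            else:
                out.append(str(i))
        return out

    print("\n".join(fizzbuzz(15)))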
In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?
But in general, every job I've had has been "high pay and high enjoyment". Even when I initially had "shit pay" compared to other programmers and the product wasn't really fun, I was still programming, an activity I still love.
Compare this to the jobs I did before, where the physical toll made it impossible to do anything after work because you're exhausted. Even when I got paid more than at my first programming job, the fact that your body is literally unable to move once you get home makes the pay matter less and feel like less.
But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and in the end of the month you get a paycheck.
If you never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.
I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.
I miss having jobs where at least a lot of the time I was moving around or working directly with other people. More than anything else I miss casual conversation with coworkers (which still happened with excruciating rarity even when I was doing most of my programming in an office).
I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out your ideals aren't universal even among programmers.
I understand exactly what you mean and agree, seems our ideals agree after all :)
Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self reinforcing negative comment sections and blog posts than in the real world, which gives them a distorted view.
My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.
Now the job market is flooded due to layoffs, further justifying lack of comp adjustment - add inflation, and you have "de-valuing" in direct form.
I don't know what kind of work you do but this depends a lot on what kind of projects you work on
Of course, there is always exceptions, like programmers who need to hike to volcanos to setup sensors and what not, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.
I find it... very strange that you think software development is less mentally taxing than physical labor.
Contrast that with working as an out-call nurse, which isn't just physically taxing, as you need to actually use your body multiple times per day for various things, but people (especially when you visit them in their homes, seemingly) can be really mean, weird and just draining on you. Not to mention when people get seriously hurt, and you need to be strong when they're screaming in pain; and finally, when people die, even strangers, it's just really taxing no matter what methods you use to try to come back from that.
It's just really hard for me to complain about software development and how taxing it can be, when my life experience put me through so much before I even got to be a professional developer.
- After a long day of physical labor, I come home and don't want to move.
- After a long day of software development, I come home and don't want to think.
Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier compared to the lengths people I know who aren't programmers, or even in tech, have to go to.
US data does back it up, though. The tech labor sector outperformed all others in the last 10 years. https://www.bls.gov/emp/tables/employment-by-major-industry-...
There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodologies or whatever. We have code reuse! Productivity has obviously improved; it's just that there's also an arms race between software products in UI complexity, features, etc.
If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.
What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well...You end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.
But, probably remaining a developer who runs through tickets in JIRA without much care for collaboration could be feasible in some type of companies too.
I find the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.
If you use test-driven, well-encapsulated object oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should."
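(As a rough illustration of how small that review surface can get: the Cart class and tests below are purely hypothetical, a minimal Python/unittest sketch, not anything from this thread.)

    import unittest

    # Hypothetical, well-encapsulated object: the only public surface is add() and total().
    class Cart:
        def __init__(self):
            self._items = []

        def add(self, price: float, qty: int = 1) -> None:
            if price < 0 or qty < 1:
                raise ValueError("invalid item")
            self._items.append((price, qty))

        def total(self) -> float:
            return sum(p * q for p, q in self._items)

    # The reviewer's real question is only: do these tests cover everything they should?
    class CartTest(unittest.TestCase):
        def test_total_sums_items(self):
            c = Cart()
            c.add(2.50, 2)
            c.add(1.00)
            self.assertAlmostEqual(c.total(), 6.00)

        def test_rejects_invalid_input(self):
            with self.assertRaises(ValueError):
                Cart().add(-1.0)

    if __name__ == "__main__":
        unittest.main()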
Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skill further and move with the rest of the industry, if that's needed? Just like other professions adopt to how society is shaped, why should programming be an exception to that?
Of course, I won't claim it's glamorous or anything, but the idea that factory workers somehow will disappear tomorrow feels far out there, and I'm generally optimistic about the future.
People working in one of the coolest industries on Earth really do not appreciate their lives nowadays.
Industry I guess would be "startups" or just "tech", it ranges across holiday related, infrastructure, distributed networks, application development frameworks and some others.
Smallest company I worked at was 4 including the CEO, largest been 300 people. Most of them I joined when it was 5-10 people, and left once they got to around 100.
Are you sure about that?
Is there something specific you'd like to point me to, besides just replying with a soundbite?
So I guess it depends? News at 11:00.
Are you in China? India?
What does "heinous" actually mean here? I've repeated it before, but I guess one more time can't hurt: I'm not saying it isn't difficult to find a job as a developer today compared to a decade ago, but what I am saying is that it's a thing across all sectors, and developers aren't hit by it worse than any other sector. Hiring freezes have been happening not just at technology companies, but across the board.
Software engineering is easy? You live in a bubble; try teaching programming to someone new to it and you'll realize how muuuuch effort it requires.
If you want a challenge, try almost any other job than development, and you'll realize how easy all this stuff actually is.
They were passing their classes, getting jobs and completing their tasks.
So I've witnessed how maaaany things people need to learn, what things are not easy for them, and so on.
I'm not saying other jobs are easy/easier, but none of my friends, who work in "traditional" industries like homebuilding, road maintenance, manufacturing, etc, etc. needed to push THIS MANY hours into it in order to get 1st job, be decent on it, improve, get promoted, etc.
Almost none of them is learning during their free time in order to get better, etc.
>If you want a challenge, try almost any other job than development, and you'll realize how easy all this stuff actually is.
I mean, difficult != hard.
software eng. is difficult cuz it requires a lot of time put into it in order to become proficient.
Of course if you're in south eastern europe or in south asia where all the jobs are being offshored you're having the time of your life.
I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.
https://finance.yahoo.com/news/tech-job-postings-fall-across...
> Among the 27 countries analysed, European nations saw the steepest fall in tech job postings between 1 February 2020 and 31 October 2025,
> In absolute terms, the decline exceeded 40% in Switzerland (-46%) and the UK (-41%), with France (-39%) close behind.
> The United States showed a similar trend, with a decline of 35%. Austria (-34%), Sweden (-32%) and Germany (-30%) were also at comparable levels.
Don’t close your eyes and plug your ears and pretend you didn’t hear anything.
You do realise your position of luck is not normal, right? This is not how it is for your average techie in 2025.
Actual data is convincing; few are providing it.
And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.
Eh?
I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect by the rest of the stakeholders for the work developers do (even more so now because "ai can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.
No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner in the 1940's. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."
When I write that, I write that with the history and experience of doing other things. Deadlines, lack of respect from stakeholders, egoists and changing requirements just don't sound so bad when you compare to "Ah yeah resident 41 broke their leg completely and we need to clean up their entire apartment from the pools of blood and pus + work with the ambulance crew to get them to the hospital".
I guess it's kind of a PTSD of sorts or something, as soldiers describe the same thing coming home to a "normal life" after spending time in a battle-zone. Everything just seems so trivial compared to the situations you've faced before.
If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.
>the capitalist who applies the improved method of production, appropriates to surplus-labour a greater portion of the working day, than the other capitalists in the same trade […] The law of the determination of value by labour-time, a law which brings under its sway the individual capitalist who applies the new method of production, by compelling him to sell his goods under their social value, this same law, acting as a coercive law of competition, forces his competitors to adopt the new method.
From Capital, Vol 1 Chapter 12 if you're curious.
I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.
But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.
I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.
Hacking into vibe code with a machete is kinda fun.
I do see a shortage of entry-level positions (number of them, not salaries).
Going through the author's bio ... it seems like he's just not able to provide value in any of the high-paying positions that exist right now; not that he should be, he's just not aligned with it and that's ok.
I can see why he's desperate.
The part where writing performant, readable, resilient, extensible, and pleasing code used to actually be a valued part of the craft? I feel like I'm being gaslit after decades of being lectured on how to be a better software developer, only to be told that my craft is pointless, the only thing of value is the output, and that I should be happy spending my day babysitting agents and reviewing AI code slop.
Also, I enjoy programming. Even typing boring shit like boilerplate, because I keep my brain engaged. As much as I type I keep thinking: is this really necessary? And maybe I figure out something leaner. LLMs want to deprive me of the enjoyment of my work (research, learn) and of my brain. No thanks, no LLM for me. And I don't care whatever garbage it outputs; I'd much prefer if the garbage was your output, or you are useless.
The only use i have for LLMs and diffusion models is to entertain myself with stupid bullshit i come up with that i would find funny. I massively enjoy projects such as https://dumbassideas.com/
Note: Not taking into account the "classic" ML uses; my rant is aimed only at LLMs and the LLM craze. A tool made by grifters, for grifters.
But one emerging reality for everyone should be that businesses are swallowing the AI hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps at record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
If your org is blindly data/metric driven, it is probably just a matter of time until managers start asking why everyone else is producing so much while you're slow.
Honestly I think you’re swallowing some of the hype here.
I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.
The non-coders producing apps meme is all over social media, but the real world results aren't there. All over Twitter there were "build in public" indie non-tech developers using LLMs to write their apps, and the hype didn't match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues with not breaking everything on update or with managing the software lifecycle.
The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.
I think LLMs are helpful, and anyone senior who isn't learning how to use them to their advantage (which doesn't mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work are getting misled about the actual ways to use these tools effectively.
There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.
Advent of Code (https://adventofcode.com/2025/about) says:
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
I would advocate for Advent of Code in every workplace, but finding interest is rare. No matter how much craft is emphasized, ultimately businesses are concerned with solving problems. Even personally, sometimes I want to solve a problem so I can move on to something more interesting.
I work on the platform everyone builds on top of. A change here can subtly break any feature, no matter how distant.
AI just can't cope with this yet. So my team has been told that we are too slow.
Meanwhile, earlier this week we halted a rollout because of a bug introduced by AI: it worked around a privacy feature by just allowlisting the behavior it wanted, instead of changing the code to address the policy. It wasn't caught in review because the file that was changed didn't require my team's review (because we ship more slowly, they removed us as code owners for many files recently).
I've lost the fight you're in, but I've also won mine before: you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk-reduction team that moves a bit slower and protects the company from extreme exposures like this is much harder to cut from the process. "Imagine the lawsuits missing something like this would cause." and "We don't move slower, we do more than the other teams; the code is more visible, but the elimination of mistakes that would be very expensive legally and reputationally is what we're the best at."
It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.
The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.
This is so incredibly true.
Something that should go in a "survival guide" for devs that still prefer to code themselves.
The fact is that no matter whether we review the LLM output or not, no matter whether we write the code entirely by hand or not, there's always going to be the possibility of errors. So it's not some bright-line thing. If you're relatively lazier and relatively less thoughtful in the way you work, you'll make more errors and more significant errors. You'll look like you're doing the work, but your teammates have to do more to make up for the problems.
Having to work around problems your coworkers introduced is nothing new, but LLMs make it worse in a few ways I think. One is just, that old joke about there being four kinds of people: lazy and stupid, industrious and stupid, smart and lazy, and industrious and smart. It's always been the "industrious and stupid" people that kill you, so LLMs are an obvious problem there.
Second there's what I call the six-fingered hands thing. LLMs make mistakes a human wouldn't, which means the problem won't be in your hypothesis-space when you're debugging.
Third, it's very useful to have unfinished work look unfinished. It lets you know what to expect. If there's voluminous docs and tests and the functionality either doesn't work at all or doesn't even make sense when you think about it, that's going to make you waste time.
Finally, at the most basic level, we expect there to be some sort of plan behind our coworkers' work. We expect that someone's thought about this and that the stuff they're doing is fundamentally going to be responsive to the requirements. If someone's phoning it in with an LLM, problems can stay hidden for a long time.
Scripts, CI/CD, documentation, etc. The stuff that gets a PR but doesn't REALLY get the same level of review because it's not really production code. But when you need to go tweak the thing it does a few months or years later... it's so dense and undecipherable that you spend more time figuring out how the LLM wrote the damn thing than doing it all over yourself.
Should you probably review it a little harsher in the moment? Sure, but that's not always feasible with things that are at the time "not important" and only later become the root of other things.
I have lost several hours this week to several such occurrences.
For example they ask to have networking configs put into place and point us at these docs that are not accurate and then they expect that we’ll troubleshoot and figure out what exactly they need. It’s a complete waste of time and insulting to shove off that work onto another team because they couldn’t be fucked to read their own code and write down their requirements accurately.
If you push all that stuff at the same time, it's really easy to get away with this soft lie, "job done". They can claim they thought it was okay and it was just an honest mistake there were problems. They can lie about how much work they really did.
READMEs or diagrams that are plans for the functionality are fine. Docs that describe finished functionality are fine. Slop that dresses up unfinished work as finished work just fucks everything up, and the incentives are misaligned so everyone's doing this.
Basically they are pushing their work to the test engineers or whoever is doing testing (might be end users).
This is my biggest gripe with LLM use in practice.
The resulting products, however, do not compare in quality to other industries' mass production lines. I wonder how long it takes until this all comes crashing down. Software mostly already is not a high-quality product... with Claude & co. it just gets worse.
edit: sentence fixed.
You can still buy high-quality goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass-produced with AI, and even produced in real time, on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.
We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.
Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI generated code that is most likely not reviewed or reviewed by another AI.
Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.
I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.
I am not convinced by the rosy future of instant AI-generated software, but the future will reveal what is to come.
Hacker News obviously suffers from severe selection bias in this regard, but for the general public I doubt even repeated security breaches of vibe-coded apps will move the needle much on the perception of LLM-coded apps, which means that they will still sell, which means that it doesn't matter. I doubt most people will even pick up on the connection. And frankly, most security breaches have no major consequences anyway, in the grand scheme of things. Perhaps the public consciousness will harden a bit when it comes to uploading nudes to "CheckYourBodyFat", but the truly disastrous stuff like bank access is mostly behind 2FA layers already.
We've been in that era for at least two decades now. We just only now invented the steam engine.
> I wonder how long it takes until this comes all crashing down.
At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.
Maybe I'm pessimistic, but I at least feel like there's a world of difference between a practice that encourages bugs and one that allows them through when there is negligence. The accountability problem needs to be addressed before we say it's like self-driving cars outperforming humans. On an errors-per-line basis, I don't think LLMs are on par with humans yet.
The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.
Perhaps that’s what led to a decline in accountability and quality.
They might of course accelerate it if used unwisely, but the solution to that is arguably to use them wisely, not to completely shun them because "think of the craft and the jobs".
And yes, in some contexts, using them wisely might well mean not using them at all. I'd just be surprised if that were a reasonable default position in many domains in 5-10 years.
That's obvious. It's a matter of which makes it more likely.
Is Good Engineering possible with LLMs? I remain skeptical.
But no! Programmers seem to only like working on giant-scale projects, which are only of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.
There's exactly one good invoicing app I've found for freelancers and small businesses, while the number of potential customers is in the tens of millions. Why aren't there at least 10 good competitors?
My impression is that programmers consider it to be below their dignity to work on simple software which solves real problems and is great for its niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.
Instead of earning a very good living by making boutique software for paying users.
I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.
You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)
The invoicing app in particular I was referring to is Cakedesk. Made by a solo developer who sells it for a fair price. Easy to use and has all the necessary functions. Probably the name and the icon is holding him back, though. As far as I understand, the app is mostly a database and an Electron/Chromium front-end, all local on your computer. Probably very simple and uninteresting for a programmer, but extremely interesting for customers who have a problem to solve.
Is it beneath YOUR dignity to create this? What an untapped market! You could be king!
Also it's absurd to an incredible degree to believe that any significant portion of programmers, left to their own devices, are eager to make "big, complicated, enterprise-scale" software.
But yes, sometimes I have to AI code small things, because there's no other solution.
I did see a few good senior engineers using AI and producing good code, but for junior and mid engineers I have witnessed the complete opposite.
Equally, my read is you're fixating on the syntax used in their comment to insulate yourself from actually engaging with their idea and point. You refuse to try to understand the parts of the system that negate the surface-level popularity, er, productivity gains.
People who enjoy the productivity boost of AI are right, you can absolutely, without question build a house faster with AI.
The people who claim there's not really any reasonable productivity gains from AI are also right, because using AI to build a multistory anything, requires you to waste all that time starting with a house, to then raze it to the ground and rebuild a usable foundation.
yes, "but its useful in specific domains" is technically correct statement, but whataboutism is rarely a useful conversational response.
But then you run into classic SO problems... Like the first solution doesn't work. Nor the second one. And the third one introduces a completely different coding style. The last one is implemented in pure sh/GNU utils.
One thing it is absolutely amazing at: digesting things that have bad documentation, like the OpenSSL C API. Even then you still gotta be on the watch for hallucinations, and audit it very thoroughly.
It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.
Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.
Are there for profit companies (not non profits, research institutes etc…) that are not metric driven?
It's not until later, when it's grown to a larger size, that you have the resources to be metric driven.
You can’t be data driven and also blind to the data
You might be optimizing for the wrong thing, but it’s not blind, it’s just a bad “model”
If you stare at your GPS and don’t pay attention to what’s in the real world outside your windshield until you careen off a cliff that would be “blindly” following your GPS. You had data but you didn’t sufficiently hedge against your data being incomplete.
Likewise sticking dogmatically to your metrics while ignoring nuance or the human factor is blindly following your metrics.
"Tickets closed" is an amazing data driven & blind to the data metric. You can have someone closing an insane number of tickets, looking amazing on the KPIs, but no one's measuring "Tickets reopened" or "Tickets created for the same issue a day later".
It's really easy to set up awful KPIs and lose all sight of what is actually happening while having data to show your bosses
Success = tickets closed, is wrong, but data driven
I am actually less productive when using LLMs because now I have to read another entity's code and be able to judge whether this fits my current business problem or not. If it doesn't, yay, refactoring prompts instead of tackling the actual problem. Also, I can write code for free; LLM coding assistants aren't free. I can fit business problems and edge cases into my brain given some time; an LLM is unaware of edge cases, legal requirements, decoupled dependencies, potential refactors, or the occasional call from the boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I for sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.
You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.
I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.
There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.
> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forest, eating cold canned ravioli for the rest of my life, because I for sure don't wanna work in a world where I am forced to use dystopian big tech machines I can't look into.
Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.
This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.
We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.
I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.
But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.
Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.
It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.
The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.
I think this point is overblown. It’s not a true team dependency like when GitHub stopped working a few days back.
Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?
Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?
> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us
What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?
Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.
Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.
If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...
Who says we haven't tried it out?
FTA.
I know tons of people where "tried it out" means they've seen Google's abysmal search summary feature, or merely seen the memes and read news articles about how it's wrong sometimes, and haven't explored any further.
They seem just as enthusiastic as many of the pro-AI voices here on HN, while the quality of their work declines. It makes me extremely skeptical of anyone who is enthusiastic about AI. It seems to me like it's a delusion machine.
We'll need to be even more intentional about when to use LLMs than we should arguably already be about any type of automation.
I was describing anecdotally what I have witnessed. Devs that I used to have a reasonably high opinion of struggling to explain or understand the PRs they are making
> Does using an LLM cause one to suddenly forget everything?
I think we can probably agree that when you stop using skills, those skills will atrophy to some extent
Can we also agree that using LLMs to generate code is different from the skill of writing code?
If so, it stands to reason that the more people rely on LLMs to generate things for them, the more their skills of creating those things by hand will atrophy
I don't think it should be very controversial to think that LLMs are making people worse at things
It is also entirely possible that people are becoming better (or faster, anyways. Extremely debatable if faster = better imo) at building software using LLMs while also becoming worse at actually writing code
I was surprised how hard many here fell for the NFT thing, too.
Various people have been wrong on various predictions in the past, and it seems to me that any implied strong overlap is anecdotal at best and wishful (why?) thinking at worst.
The only really embarrassing behavior is never updating your priors when your predictions are wrong. Also, if you're always right about all your prognoses, you should probably also not be in the HN comments but on a prediction market, on-chain or traditional :)
Just because
- crypto was massively hyped and then crashed (although it's more than recovered),
- many grifters chase hypes, and
- there's undeniably an AI hype going on at the moment
doesn't necessarily imply that AI is full of grifters or confirms any adjacent theories (as in, could be true, could be false, but the argument does not hold).
Maybe so, but would it be possible to not dismiss it elsewhere? I just don't see the causal relation between AI and crypto, other than that both might be completely overhyped, world-changing, or boringly correctly estimated in their respective impact.
Did they? I'm not saying you're wrong but I'd like to see some evidence, because NFTs were always obvious nonsense. I'm sure there were some grifters posting here, and others playing devil's advocate or refuting anti-NFT arguments that somehow went too far, but I'd be genuinely surprised if the general sentiment was not overwhelmingly negative/dismissive.
I can get bad code written for the cost of electricity now
Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.
[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...
The complexity of good code is still complicated.
Which means: 1. If software development is really solved, everyone else also gets a huge problem (CEOs, CTOs, accountants, designers, etc.), so we are at the back of the AI doomsday line.
And 2. It allows YOU to leverage AI a lot better, which can enable you to create your own product.
In my startup, we leverage AI and we are not worried that another company just does the same thing because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.
I've seen the argument more than once that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they'd remained manual. I'm not sure I buy it, but computation undoubtedly shapes society.
The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?
(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).
In my third world country, motorbikes, scooters, etc. have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for them. They keep dying by the hundreds per month, not just because they choose to ride them at all, but because of how they ride them: on busy high-speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; a motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.
One would think, given the clear evidence for how dangerous these things are: why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in the courier business, for being able to work much further from home, etc.
But here is the thing about (2): a motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison against what is (in their mind) a minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car requires far less discipline than doing so on a motorbike.
So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.
That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?
I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").
There is no safe way to ride a motorbike. Even with safe infrastructure, all the protection you can wear, and stress-free riding away from traffic, a freak accident can still kill you. There is no adequate protection for riding at that speed.
Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.
ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.
And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and create true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.
See the nuclear bomb for an example.
Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
Progress is progress, and has always changed things. It's funny that apparently, "progressive" left-leaning people are actually so conservative at the core.
So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.
Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.
> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
Amazing depiction of extremes as the only possible outcomes. Either take everything that is thrown at us, or go back into a supposed "dark age" (which, BTW, is nowadays understood to not have been that "dark" at all). This, again, doesn't help us have a proper discussion about the effects of technology and how it comes to be the way it is.
I'm not surprised at all anymore.
I constantly feel like the majority of voices on this site are in favor of maximizing their own lives no matter the cost to everyone else. After all, that's the ethos that is dominating the tech industry these days
I know I'm bitter. All I ever wanted was to hang out with cool people working on cool stuff. Where's that website these days? It sure isn't this one
So are you able, realistically, to stop progress across a whole planet? Tbh, getting alignment across the planet to slow down or stop AI would be the equivalent of stopping capitalism and actually building a holistic planet for us.
I think AI will force the hand of capitalism, but I don't think we will be able to create a Star Trek universe without being forced into it.
There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was passed by word of mouth instead of being written down.
"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."
"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"
I'm talking about a time when we had no proper women's rights. Women have only been able to vote anywhere in the world since 1893 (https://en.wikipedia.org/wiki/Women%27s_suffrage).
The first home refrigerators only appeared around 1913.
Before all of that, we spent a LOT of time just surviving.
I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good, hence technology = good". Vaccines and running water are tech; megacorp-owned "AI" being weaponised by surveillance-obsessed governments is also tech.
If you don't push back on "tech" you're just blindly accepting whatever someone else decided for you. Keep in mind the benefits of tech since the 80s have mostly been pocketed by the top 10%; the plebs still work as much, retire as old, &c., despite what politicians and technophiles have been saying.
Tech also gave us vaccines and indoor plumbing and the clothes I am wearing.
It's the morals, and the courage to live by those morals, which create good. Progress is by definition towards a goal. If that goal is, say,
> to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity
and ensure our basic inherent (not government-given) rights to
> life, liberty, and pursuit of happiness
then all good.
If it is to enrich me at the cost of thee, create a surveillance state that rounds up and kills undesirables at scale, destroys our basic inherent rights, then tech not good
You don't like leaded gasoline? You must want us to walk everywhere. Come on...
Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head? And that some people might not want to be hit on the head by a hammer?
This is not an interesting conversation.
Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.
I’m not sure what these people are NOT seeing. Maybe I’m somehow fortunate with visibility into what AI can do today, and what it will do tomorrow. But I’m not doing anything special. Just paying attention and keeping an open mind.
I’ve been at this for 40 years, working professionally for more than 30. I’ve seen lots.
One pattern I’ve seen repeating is folks who seem to stop learning at some point. I don’t understand this, because for me learning every day is what fuels me. And those folks eventually die on the vine, or they become the last few greybeards working on COBOL.
We are alive at a very interesting time in tech. I am excited about that. I am here for it.
It already tells me enough to stay away from using AI tools for coding. And that's just one reason; if I consider all the others, then that's more than enough.
Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and what her small business does. I fed all of it into Vercel v0 and out came a professional-looking website based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.
My relative was extremely happy with the result.
It took me about 10 minutes to do all of this.
In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.
I even showed my non-tech relative how to use v0. Since all change requests to v0 were in English, she had no trouble learning how to use it in one minute.
Quantitative evidence should be easy: OpenAI's ARR is $20B in 2025, up nearly 6x over last year. If it weren't useful, people wouldn't pay for it.
These things are wicked, and unlike some new garbage JavaScript framework, they're revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.
https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...
There is something to be said for the protective shell of a vehicle.
But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.
That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.
In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on. 3DTVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.
Personally I wouldn't mind if they went back to being small again
If a PC crashes when I use more than 20% of its soldered memory, I throw it away.
If a mobile phone refuses to connect to a cellular tower, I get another one.
What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.
I can't even get the most expensive model on Claude to use "ls" correctly, with a fresh context window. That is a command that has been unchanged in linux for decades. You exaggerate how reliable these tools are. They are getting more useless as more customers are added because there is not enough compute.
Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.
My regular software breaks constantly. All the time. It's a rare day where everything works as it should.
LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.
There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.
Imagine refusing to drive a car in the 60s because they hadn't reached 1,000 bhp yet. Ahaha.
That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars), but the car was already an established means of transportation. '60s cars are much closer to today's cars than 2000s computers are to current ones.
"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.
But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.
Where the LLMs are useful, is where there is no 100% "right or wrong" answer, think summarization, categorization, tagging and so on.
the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter is reliable when it catches the bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do. So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is a formal language, which means correctness is the end goal. A program may be valid and incorrect at the same time.
It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter, and the only one that is relevant. No one hires people for knowing a language’s grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people who can write code with a minimal amount of bugs and who are able to eliminate those bugs when they surface).
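A tiny, made-up illustration of that distinction: both functions below are perfectly valid Python, but only one of them is correct.

    def mean(xs):
        # Valid syntax, wrong behaviour: off-by-one in the divisor.
        return sum(xs) / (len(xs) + 1)

    def mean_fixed(xs):
        return sum(xs) / len(xs)

    print(mean([2, 4, 6]))        # prints 3.0, but the mean is 4.0
    print(mean_fixed([2, 4, 6]))  # prints 4.0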
Ah, then LLMs are actually very reliable by your definition. They're supposed to output semi-random text, and whenever I use them, that's exactly what happens. Except for the times I create my own models and software, I basically never see any cases where the LLM did not output semi-random text.
They're not made for producing "correct code" obviously, because that's a judgement only a human can do, what even is "correct" in that context? Not even us humans can agree what "correct code" is in all contexts, so assuming a machine could do so seems foolish.
That's the crux of the problem. Many proponents of LLMs over promise the capabilities, and then deny the underperformance through semantics. LLMs are "reliable" only if you're talking about the algorithms behind the scene and you ignore the marketing. Going off the marketing they are unreliable, incorrect, and do not do what they're "supposed to do".
FWIW, I agree LLMs are massively over-sold for the average person, but for someone who can dig into the tech, use it effectively and for what it works for, I feel like there is more interesting stuff we could focus on instead of just a blanket "No and I won't even think about it".
The problem is that, historically speaking, you have two choices:
1. Resist as long as you can, risking being labeled a Luddite or whatever.
2. Acquiesce.
Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.
My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.
Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.
To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.
At my age, I'm a bit tired of capitulating, because it seems every time we hand the reins over to someone who says they know what they are doing, they fuck it up royally for the rest of us.
And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.
Yep. Makes sense.
> And by any metric
Can you cite one? Just curious. I enjoy when people challenge the idea that the advancement of tech doesn't always result in a better world for all because I grew up in Detroit, where a bunch of car companies decided that automation was better than paying people, moved out and left the city a hollowed out version of itself. Manufacturing has returned, more or less, but now Worker X is responsible for producing Nx10 Widgets in the same amount of time Worker Y had to produce 75 years ago, but still gets paid a barely livable wage because the unchecked force of greed has made it so whatever meager amount of money Worker X makes is siphoned right back out of their hands as soon as the check clears. So, from where I'm standing, your version of "improvement" is a scam, something sold to us with marketing woo and snake oil labels, promising improvement if we just buy in.
The thing is, I don't hate making money. I also don't hate change. Quite the opposite, as I generally encourage it, especially when it means we grow as humans...but that's generally not the focus of what you call "change," is it? Be honest with yourself.
What I hate is the argument that the only way to make it happen is by exploiting people. I have a deep love of technology and repair it in my spare time for people, to help keep things like computers or dishwashers out of landfills, saving people from having to buy new things in a world that treats technology as increasingly disposable, as though the resources used to create it are unlimited. I know quite a bit about what makes it tick, as a result, and I can tell you first hand that there's no reason to have a microphone on a refrigerator, or a mobile app for an oven. But you and people like you will call that change, selling it as somehow making things more convenient while our data is collected and sorted and we spend our days fending off spam phone calls or contemplating if what we said today is tomorrow's thought crime. Heck, I'm old enough to remember when phone line tapping was a big deal that everyone was paranoid about, and three decades later we were convinced to buy listening devices that could track our movements. None of this was necessary for the advancement of humanity, just the engorgement of profits.
So what good came of it all? That you and I can argue on the Internet?
It's just exhausting to read the 1000th post of people saying "If we replace jobs with AI, we will all be having happy times instead of doing boring work." It's like reading a Kindergartner's idea of how the world works.
People need to pay for food. If they are replaced, companies are not going to make up jobs just so they can hire people. They are under no responsibility or incentive to do that.
It's useless explaining that here because half of the shills likely have ulterior reasons to be obtuse about that. On top of that, many software developers are so outside the working class that they don't really have a concept of financial obligation, some refusing to have friends that aren't "high IQ", which is their shorthand for not poor or "losers".
I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.
At any rate, thank you for the thoughtful & eloquent words of caution.
I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
So what people do is collect documentation. Give it a glance (or at least the TOC), then start the process of understanding the concepts. Sure, you can ask for the escape code for setting a terminal title, but will it say that not all terminals support that code? Or that piping does not strip out escape codes? That's the kind of gotcha you can learn from proper manuals.
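To make that example concrete (this snippet is an illustration, not from the comment above): the common xterm-style sequence for setting the title is OSC 0, and both gotchas show up in a few lines, since unsupported terminals may render it oddly and piping carries the raw bytes downstream.

    import sys

    def set_terminal_title(title: str) -> None:
        # xterm-style OSC 0 sequence: ESC ] 0 ; <title> BEL
        # Not every terminal honours it, and if stdout is piped the raw
        # escape bytes just travel along with the data, so only emit it
        # when we're actually talking to a TTY.
        if sys.stdout.isatty():
            sys.stdout.write(f"\033]0;{title}\007")
            sys.stdout.flush()

    set_terminal_title("long-running job")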
There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.
All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I can easily imagine that, at this point, good ideas go without funding because they're not AI.
If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.
https://eur01.safelinks.protection.outlook.com/?url=https%3A...
Edit: Ha I see you edited "empty the dishwasher" to "hand wash the dishes". My thoughts exactly.
There's no hope for these people.
Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making it plentiful.
When circumstances change to make something less scarce, one cannot expect to get the same value for it on the basis of past valuation. That is just rent-seeking.
Just the job for an AI agent!
So what I did is this - I wrote the app in Django, because it's what I'm familiar with.
Then, in the view for the search page, I picked apart the search terms: if they start with "01" it's an old phone number, so look in that column; if they start with "03" it's a new phone number, so look in that column; if they start with "07" it's a mobile; if it's a letter followed by two digits it's a site code; if it's numeric but doesn't start with a 0 it's an internal number; and if it doesn't match anything, see if it exists as a substring in the description column (roughly the dispatch sketched in code at the end of this comment).
There we go. Very fast and natural searching that Does What You Mean (mostly).
No Artificial Intelligence.
All done with Organic Home-grown Brute Force and Ignorance.
Because that's sometimes just what you need.
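A rough sketch of what that view could look like; the model and field names here (Entry, old_phone, new_phone, mobile, site_code, internal, description) are invented for illustration, not the original code:

    # views.py -- illustrative sketch, not the original app
    import re
    from django.shortcuts import render
    from .models import Entry  # hypothetical model

    def search(request):
        term = request.GET.get("q", "").strip()
        if term.startswith("01"):
            results = Entry.objects.filter(old_phone__contains=term)
        elif term.startswith("03"):
            results = Entry.objects.filter(new_phone__contains=term)
        elif term.startswith("07"):
            results = Entry.objects.filter(mobile__contains=term)
        elif re.fullmatch(r"[A-Za-z]\d{2}", term):
            results = Entry.objects.filter(site_code__iexact=term)
        elif term.isdigit() and not term.startswith("0"):
            results = Entry.objects.filter(internal=term)
        else:
            results = Entry.objects.filter(description__icontains=term)
        return render(request, "search.html", {"results": results})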
I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.
I still can barely believe a human being could write this, though we have all read this sort of sentence countless times. Which "structure of power and violence" replicated itself into the brains of people, making them think like this? Everything "exists to reinforce and strengthen existing structures of power and violence" with these people, and they will not rest until there's nothing left to attack and destroy.
The biological senses and abilities were constantly augmented throughout the centuries, pushing the organic human to hide inside ever deeper layers of what you call yourself.
What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?
perhaps a being that has the capacity for intention and will?
Knowledge is shaped by constraints which inform intention, it doesn't "drive it."
"I want to fly, I intend to fly, I learn how to achieve this by making a plane."
not
"I have plane making knowledge therefore I want and intend to fly"
However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.
My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.
I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.
Going back to this.
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
I don't think my very ability to make choices comes from owning stuff and knowing people.
And no, I don't need AI for this level of inquiry.
Then why should I care about your opinions of them if you have zero experience using them?
I think of it like frozen dinners. Frozen dinners are not the same as home cooked meals. There is a place for frozen dinners, fast foods, home cooked meals, and nice restaurants. Plus, many of us spend extra time and money making specialty food that may be as good as anything. Frozen dinners don't take away from that.
I think it's the same for coding and AI use. It might eventually enhance coding overall and help bring an appreciation to what engineers are doing.
Hobby or incidental coders have vastly expanded capabilities. Think of the security guy that needs one program to parse through files for a single project. Those tasks are reasonably attainable today without buying and studying the sed/awk guide. (Of course, we should all do that)
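For a sense of scale, the kind of one-off script meant here might be nothing more than this (the directory layout and the patterns are hypothetical):

    import re
    import sys
    from pathlib import Path

    # One-off job: print every line in *.log under a directory that matches
    # a failed-login marker, prefixed with file name and line number.
    pattern = re.compile(r"authentication failure|failed password", re.IGNORECASE)

    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path in root.rglob("*.log"):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.strip()}")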
Professionals might also find value using AI tools like they would use a spell checker or auto-complete that can also look up code specs or refer to other project files for you.
The most amazing and useful software, the software that wows us and moves us or inspires us, is going to be crafted and not vibed. The important software will be guided by the hands of an engineer with care and competence to the end.
I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.
At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
I don't understand how anyone can work like that and have confidence in their code.
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Rylean "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free and just run with it?
Not asking rhetorically, I seriously want to know.
This is the default state for a lot of programmers, so vibe coding doesn't feel any different.
Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.
Update all API calls to that endpoint with a view into the context.
I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. This isn’t the kind of parallelism I’ve found to be common with LLMs. Often you are stuck on figuring out a solution. In that case AI isn’t much help. But some code is mostly boilerplate. Some is really simple. Just always read through everything it gives you and fix up the issues.
After that sequence of edits I don’t feel any less knowledgeable of the code. I completely comprehend every line and still have the whole app mapped in my head.
Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.
Anyway, the point I'm getting to was it was glorious to understand what every bit of every register and every I/O register did. There were NO interposing layers of software that you didn't write yourself or didn't understand completely. I even wrote a disassembler for the BASIC ROM and spent many hours studying it so I could take advantage of useful subroutines. People even published books that had that all mapped out for you (something like "Secrets of the TRS-80 ROM Decoded").
Recently I have been helping a couple teenagers in my neighborhood learn Python a couple hours a week. After installing Python and going through the foundational syntax, you bet I had them write many of those same games. Even though it was ASCII monsters chasing their character on the screen, they loved it (a minimal sketch of the idea follows at the end of this comment).
It was similar to this, except it was real-time with a larger playfield:
https://www.reddit.com/r/retrogaming/comments/1g6sd5q/way_ba...
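For anyone wondering what "ASCII monsters chasing their character" amounts to, here's a minimal turn-based sketch of the idea (invented for illustration, not the actual exercise):

    WIDTH, HEIGHT = 10, 6
    player = [1, 1]
    monster = [8, 4]
    moves = {"w": (0, -1), "s": (0, 1), "a": (-1, 0), "d": (1, 0)}

    def draw():
        for y in range(HEIGHT):
            print("".join(
                "@" if [x, y] == player else "M" if [x, y] == monster else "."
                for x in range(WIDTH)
            ))

    while player != monster:
        draw()
        dx, dy = moves.get(input("move (wasd): ").strip().lower(), (0, 0))
        player[0] = max(0, min(WIDTH - 1, player[0] + dx))
        player[1] = max(0, min(HEIGHT - 1, player[1] + dy))
        # the monster takes one step toward the player each turn
        if monster[0] != player[0]:
            monster[0] += 1 if player[0] > monster[0] else -1
        elif monster[1] != player[1]:
            monster[1] += 1 if player[1] > monster[1] else -1

    print("The monster got you!")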
I've never really worked on such a low level, the closest I've gotten before is bytecode - which while satisfying - just isn't as satisfying as having to imagine the binary moving around the CPU and registers (and busses too).
I'm even finding myself looking at computers in a totally different way, it's a similar feeling to learning a declarative, or functional language (coming from a procedural language) - except with this amazing hardware component too.
Hats off to you though, I'm not sure I'd have had the patience to code under those conditions!
Those hackers you're so lamenting are gonna make it, but you aren't.
Not that there's anything wrong with crafting, but for those of us who just care about building things, LLM's are an absolute asset.
I think it's amazing what giant vector matrices can do with a little code.
Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business setting where the difficulties are mostly communication issues. Knowing the semantics of the coding platform is also difficult, because you’ll probably be using others’ code and you’ll face the same communication issues (lack of documentation, erroneous documentation, etc.).
So being good at programming does not really mean knowing code. It’s more about knowing how to bypass communication barriers to get the knowledge you need.
Without an explanation of what the author is calling out as flaws, it is hard to take this article seriously.
I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and move on when it wasn't able to fix a Jest issue.
It seems that most people preferring natural language over programming languages don't want to learn the required programming language and end up reinventing their own, worse one.
There is a reason why we invented programming languages as an interface to instruct the machine and there is a reason why we don't use natural language.
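A toy example of why: even a one-line request in natural language hides decisions that a programming language forces you to make explicitly (the data here is made up):

```python
# "Remove the duplicates from this list" is ambiguous in English:
# should the original order be preserved? Code has to pick one.
items = [3, 1, 3, 2, 1]

unique_unordered = sorted(set(items))        # [1, 2, 3] -- order discarded
unique_ordered = list(dict.fromkeys(items))  # [3, 1, 2] -- first occurrence kept
```

People "programming" in natural language end up re-specifying all of those decisions anyway, just less precisely.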
"Hacker" was a recognition that there existed a crusty old entrenched system (mostly not through any fault of any individual) and that it is good to poke and chip away at it, though exploring the limits of new technology.
Whatever we're doing now here, it's emphatically not that.
FAANG is not really a thing here and people are much more tech-luddite and privacy-paranoid.
Sure, it can be overdone. But at the same time, it shouldn't be undersold.
Opting in to weirdness and curiosity is the only bug worth keeping, and one which will eventually become a norm.
> [...] making it increasingly hard to learn things [...]
I find chatting with AI and drilling it for details is often more effective than other means of searching for the same information, or even asking random co-workers. It's all about how you use it.
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress. I’d even go as far and say they are intentional.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
WTF? There's nothing for me to learn from this post.
A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably, every failure case for LLMs gets corrected; everything people said LLMs "could never do", they start doing within six months of that prognostication.
For example, I spent a bunch of dollars to let Claude figure out how to set up a VSCode workspace with a multi-environment uv monorepo with a single root namespace and okayish VSCode linting support (we still failed to figure out how to enable a different Python interpreter for each folder for Ruff, but that seems to be a Ruff extension limitation).
This article lacks nuance, and could be summarized as "LLMs are bad." Later, I suspect this author (and others of this archetype) will moderate and lament: "What I really meant was that I don't like corporations lying about LLMs, or using them maliciously; I didn't imply they don't have uses." The words in the article do not support this.
I believe this pattern is rooted in social-justice-oriented (is that still the term?) USA left politics. I offer no explanation for this conflation, only an observation.
It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.
Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.
So I’m not even surprised it’s getting so many internet points. As if those were a sign of quality; if anything, the opposite. Bored, not-very-smart people thinking that the more useless junk they consume, the better off they’ll become. It doesn’t work that way.
Big tech will build out compute at a never-before-seen speed, and we will reach 2e29 FLOPs faster than ever.
Big tech companies are competing with each other, and they are the ones with the real money in our capitalist world; even if they found some way to slow down among themselves, countries are now competing too.
Over the next four years, with the massive build-out of compute, we will see much more clearly how progress goes.
And either we hit obvious limitations or we don't.
If we do not see an obvious limitation, Fiona's opinion will have zero relevance.
The best chance for everyone is to keep a very, very close eye on AI, to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to AI, ...) or at least be aware of what is coming.
Thanks for the fish and enjoy the ride.
When the AI hype is over and the bubble has burst, I'll still be here, writing quality software using my brain and my fingers, and getting paid to do it.
Either way, it's a lost cause.
If I ever want to deliver a game I might outsource my hex grid to AI. But back in those days I could have probably used a library.
Is hacking about messing around with things? You can still do it, ignore AI, ignore prior art. You can reimplement STL because std vector is "not fast enough". Is hacking about making things? Then again, AI boilerplate is little different than stitching together libraries in practice.
The real conclusion is:
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.
which you can tell from the title. There are zero arguments made to support this. It's just faux-radical rambling. I'm amazed how people are impressed by this privileged middle-class babble. This is an absolutely empty-headed article that AI could spit out dozens of versions of.
I care how well your AI works. I also care how it works, like I care about how transistors work. I do not want to build my own transistors*, although I like to speculate about how different ones could be built, just like I like to think about different machine learning architectures. The skills that I learned when I learned computers put me in an ideal position to understand, implement, and use machine learning.
The reason I care about how well your AI works is because I am going to use it to accomplish my own goals. I am not going to fetishize being a technician in an art most people don't know, I am not a middle-class profession worshiper. I get it, your knowledge of a rare art guarantees that you eat. If your art becomes obviated by technology (like the art of doing math by hand, which you could once live very well on from birth to death), you have to learn something else.
But I care how well your AI works because I am trying to accomplish things in the world, not build an identity. I think AI is bad, and I'm a bit happy that it's bad, because it means that I can use it to bridge myself to the next place before it gets good enough not to need me. The fact that I know how computers work means that I can make the AI do what I want in a way that somebody who didn't have my background couldn't. The first people that were dealing with computers were people who were good at math.
Life is not going to be good for the type of this year's MBP js programmer who learned it because the web was paying, refused to learn anything else so only gradually became a programmer after node came around, and only used the trendy frameworks that it seemed they were hiring for, who still has no idea how a computer works. AI is actually going to give everything back to the nerds, because AI assistance might eventually mean you're only limited by your imagination (within the context of computers.) Nerds are imaginative. The kind of imagination that has been actively discouraged in tech for a long time, since it became a profession for marketers and middlemen.
I almost guarantee this call for craftsmen against AI is coming from someone who builds CRUD apps for a living. To not be excited about what AI can do for the things that you already wanted to create, the things you dream of and couldn't find enough people with enough skills to dream with you to get it done; to me that's a sign that you're just not into computers.
My fears about AI are that it will be nerfed; that it will be made so sycophantic that it sucks down credits and gets distracted so often that it becomes impossible to work with; that it will be used to extract my ideas and give them to someone with more capital and manpower who can jump in front of me (the Amazon problem); that governments will be bribed into making it impossible to run models locally; and that governments will be bribed into letting corporations install them on all our computers so they can join in on the surveillance and control. I'm worried about the speakwrite. I'm worried about how it will make dreams possible for evil men. I am not worried about losing my identity. I'm not insecure like that.
* although I have of course, in school, by stringing a bunch of NANDs together. I was a pioneer of the WAS-gate, which is when you turn on the power and a puff of smoke comes out of one of your transistors.
This is such a bizarre sentiment for any person interested in technology. AI is, without any doubt, the most fascinating and important technology I have seen developed in my lifetime. A decade ago the idea of a computer not only holding a reasonable conversation with a human, but being able to talk with a human on deep and complex subjects seemed far out of reach.
No doubt there are many deep running problems with it, any technology with such a radical breakthrough will have them. But none of that takes away from how monumental of an achievement it is.
Looking down at people for using it or being excited about it is such an extreme position. Also the insinuation that the only reason anybody uses it because they are forced into it, is completely bizarre.
The first aspect is the “I don’t touch AI with a stick” stance. AI is a tool. Nobody is obligated to touch it, obviously, but it is useful in certain situations. So I disagree with the author’s position of avoiding AI entirely. It reads like stubbornness for the sake of avoiding new tech.
The second angle is the “bigtech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world have created the biggest dystopian world we have ever witnessed, from absurd amounts of misinformation and propaganda fueled by bot farms operated at government level, all the way to digital surveillance tech. Holding that strong an opinion against big tech and digital surveillance, blaming AI for it while enjoying the other perils of big tech, is virtue signaling.
Also, what’s up with the overuse of “fascism” in places where it does not belong?
AI lets you do that faster.
AI may suggest a dumb way, so you have to think, and tell it what to do.
My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!
Don't let AI think for you. Do actual, intentional architecture design.
Programmers who don't know CS and only care about hammering the keyboard because they're artisans have little future.
AI also gives me back my hobby after having kids -- time is valuable, and AI is energy-efficient.
We are truly living in a Cambrian explosion -- a lot of slop will be produced, but market and selection pressure will weed it out.
Unless you're neuralinking to AI, you're still typing.
What changed is what you type. You type fewer words to solve your problem. The machine does the conversion from fewer words to more words, at the expense of precision: the machine can do the conversion to the incorrect sequence of more words.
Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.
Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.
I can sort of jibe with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace, like search engines, accomplished that just fine.
What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.
Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.
I think the analogy is similar to the move from film to digital. There will be a tiny amount of people who never buy in, there will be these “ashamed” adopters who support the idea of film and hope it continues on, but for themselves personally would never go back to film, and then the majority who don’t see the problem with letting film die.
I have a little pet theory brewing. Corporate work claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claim that AI has replaced junior and intermediate devs, and is coming for the senior devs next.
This has felt off to me because I do way more than just code. Business users don’t want to get into the details of building software. They want a guy like me to handle that.
I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.
I think that really what happens is nerds exist and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!
AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.
I will, however, get worried when AIs start running businesses. Then we are in trouble.
The entire open source movement would like a word with you.
I would also recommend you peruse the last 50 years for completely reproducible, homegrown, or open computing hardware systems you can build yourself from scratch without requiring overly expensive or exotic hardware. Yes, homegrown CPUs exist, but they "barely work" and often still rely on off-the-shelf logic gates. Can you produce 74xx series ICs reliably in a homelab setting? Maybe, but for most of us, probably not. And certainly not for the guys ranting about "companies taking over".
If you can't build your computing devices from scratch, store-bought is fine. If you can, you're the exception, not the rule.
Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.
Second example - there is a certain expectation of language in American professional communication. As a non-native speaker, I can tell you that not following that expectation has a real impact on a career. AI has been transformational: I write an email myself and ask it to ‘make this into American professional English’.
The youthful desire to rage against the machine?
I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.
It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.
Most of them used Intel, Motorola, or Zilog tech in some capacity. Most of them with a clock used Dallas Semiconductor tech; many of them with serial ports also used either Intel or Maxim/Analog Devices chips.
Many of those implementations are patented, and their inner designs were, generically, "trade secrets". Most of the clones and rebrands were actually licensed (most 80x51 microcontrollers and Z80 chips are licensed tech, not original designs). As a tinkerer, you'd receive a black box (sometimes literally) with a series of pins and a datasheet.
If anything, I'd say you have much more choice today than in the '80s/'90s.
We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, and instead amplifies the concerns of a few wealthy corporations/individuals.
So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effectiveness also depends on intention, and that's not my intention, so it's not as effective for me.
I think we need more "manual" choice, and more agency.
AI speeds me up a tremendous amount in my day job as a product engineer.
I just think the things they are effective at are a net negative for most of us.
It's possible to use AI chatbots against the system of power: to help detect and point out manipulation, or a lack of nuance, in arguments or political texts; to help decipher legalese in contracts, or point out problematic passages in terms of use; to help with interactions with the state, even non-trivial ones like FOI requests or disputing information-disclosure rejections, etc.
AI tools can be used to help against the systems of power.
Persuasion tip: if you write comments like this, you are going to immediately alienate a large portion of your audience who might otherwise agree with you.
Think of old SAP systems with a million obscure customizations - any medium-to-large codebase that is mostly vibe-coded is instantly legacy code.
In your hole analogy: people don't care whether a mine is dug by a bot or planned by humans until there are structural integrity issues, or tunnels are collapsing and nobody is able to read the map properly.
What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?
Anyway, guess what else is resource-intensive? Making chips. Follow that line of logic and you will find that computers consolidate power, and that real progressive hackers should use only pencil and paper.
Back to the first paragraph...
> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.
The irony is through the roof. This article is essentially: when I use computational power how I like, it's being a hacker; when others use computational power their way, it's being fascists.
I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.
- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-base governance and society
- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion
- being outspokenly antisemitic
Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.
I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be a hacker who is a conservative (again, not the US version, but the global definition: "reluctant to change"), which is pretty much an oxymoron. Best would be to eschew political/ideological labels entirely, and just say we hackers are apolitical :)
In the end? no one cares. I get just as much done (maybe more), while doing less work. Maybe some of my skills will atrophy, but I'll strengthen others.
I'm still auditing everything for quality as I would my own code before pushing it. At the end of the day, it usually makes fewer typos than I would. It certainly searches the codebase better than I do.
All this hype on both ends will fade away, and the people using the tools they have to get things done will remain.
"It took both time and experience before the workers learned to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used."
- Karl Marx. Das Kapital Vol 1 Ch 15: Machinery and Modern Industry, 1867
Tech can always be good; how it's used is what makes it bad, or not.
I… what…?
... but, it is definitely worth considering whether the status quo is tolerable and whether we as technical creatives are willing to work with tools that live within it.
cachonk!
snap your cuffs, wait for it, eyebrows!
and demonstrate your mastery, to the mutterings of the golly-gees
it will last several more months until the, GASP!!!, bills, maintenance costs, regulatory burdens, and various legal issues combine to pop AI's balloon, whereupon AI will be left automating all of the tedious, but chair-filling, bureaucratic/secretarial/apprentice positions throughout the white-collar world. Technology is slowly pushing into other sectors, where legacy methods and equipment can now be reduced to a free app on a phone; more to the point, a free, local-only app. The fact is that we are way over-siliconed going forward and that will bite as well: terabyte phones for $100, what then?
My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.
The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.
This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.
The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.
The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.
There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.
Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.
The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.
There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.
Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.
The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.
I didn't hear the author criticizing the character of their colleagues. On the contrary, they wrote a whole section on how folks are pressured or forced to use AI tools. That pressure (and fear of being left behind) drives repeated/excessive exposure. That in turn manifests as dependence and progressive atrophy of the skills they once had. Their colleagues seem aware of this as evidenced by "what followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine". When you're dependent on something, you can always find a 'reason'/excuse to use. AA and other programs talk about this at length without morally condemning addicts or assigning individual blame.
> For most of us, self-justification was the maker of excuses; excuses, of course, for drinking, and for all kinds of crazy and damaging conduct. We had made the invention of alibis a fine art. [...] We had to drink because at work we were great successes or dismal failures. We had to drink because our nation had won a war or lost a peace. And so it went, ad infinitum. We thought "conditions" drove us to drink, and when we tried to correct these conditions and found that we couldn't to our entire satisfaction, our drinking went out of hand
Framing something as addictive does not necessarily mean that those suffering from it are failures/weak/immoral but you seem to have projected that onto the author.
Their other analogy ("brainworm") is similar. Something that no-one would willingly sign up for if presented with all the facts up front but that slips in and slowly develops into a serious issue. Faced with mounting evidence of the problem, folks have a strong incentive to downplay the issue because it's cognitively uncomfortable and demands action. That's where the "harmful coping" comes in: minimizing the severity of the problem, avoiding the topic when possible, telling yourself or others stories about how you're in control or things will work out fine, etc.
It is the tool-obsessed people who treat everything like a computer game who like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.
Generally speaking, people just cannot really think this way. People broadly are short-term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, and cancer (or biome collapse) is a tomorrow problem while we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well, good news: YouTube exists, and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Again, a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.
I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure through the discomfort:
Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.
Too few people are allowed, or can afford, even this luxury of a pleasant future, a promise of a life different from or better than their current one.
I wonder how much of that is "nature vs. nurture"?
Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?
Or is it the pace of society, our civilization, that always keeps us on edge?
I mean I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes..
What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7/8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, and economic performance was also reviewed in 2 year periods, and so on, could we as a species mellow the fuck out?
Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.
Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).
We first evolved in areas that were storage-averse (Africa), and more recently many of us moved to areas with winters (where storage is both practical and necessary). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.
For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.
Will that eventually translate to a more relaxed "instinct"?
I don't get all the whining from people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that do a decent enough job of it, then maybe what you were doing was a bit Groundhog Day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a login flow and a this and a that. 99% of that stuff is very predictable. That's why agentic coding tools are so good at this stuff. But let's be honest, it was kind of low-value stuff to begin with. And it's nice that people overpaid for that for a while, but it was never going to be forever.
There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing non-standard/non-trivial things. And even for those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge whether what comes back is fit for purpose.
There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI powered features, cross integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.
For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to make decisions about whether to do things the stupid old way because it's better/faster, or to attempt to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.
Adapting is hard. Not everyone will manage. Some people might burn out doing that or change careers. And some people are in denial or angry about that. And you can't really expect others to lose a lot of sleep over this. Whether that's unfair or not doesn't really matter.
> “We programmers are currently living through the devaluation of our craft”
my interpretation of what the author means by devaluation is the general trend that we’re seeing in LLMs
The theory that I hear from investors is that as LLMs generally improve, there will come a day when an LLM's default code output, coupled with continued hardware improvements, will be _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be
This doesn’t mean there won’t be a few companies that still need SWEs to drop down and do real engineering, but tbh, the majority of companies today just need a basic web app - and we’ve commoditized web-app dev tools to oblivion. I’d even go so far as to argue that what most programmers do today isn’t engineering; it’s gluing together an ecosystem of tooling and/or APIs.
Real engineering seems to happen outside of work on open source projects, at the Mag 7 on specialized teams, or at niche, deeply technical startups
EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means
Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?