Describing what you want is programming! Code is great for that because it is more precise and less ambiguous than natural languages.
The part about search engines is missing a key element. When you run a search or query an LLM, you interact with the system, using your own brain to interpret and refine the results. The idea with programming is having the machine do it all by itself with no supervision; that's why you need to be extra precise.
It is not so different from having a tradesman build you something. If you have a good idea of what you want and don't want to supervise him constantly, you have to be really precise with your request; casual language is not enough. You need technical terms and blueprints, in the same way that programmers have their programming languages.
I do like AI as a tool, it's great at a lot of things, but it's not the panacea that so many believe, especially CEOs, unfortunately.
"when edge cases emerge that the AI didn't anticipate"
The only "anticipation" that is happening within those tools is on token level, the tools have no idea (and are fundamentally unable to even have an idea) what the code is even supposed to do in the real world.
When you were composing your reply, did you just start typing, then edit and compose your thoughts better a few times before hitting the reply button?
I ask, because that's what I do. Most of the time, I never know what the next word is going to be, I just start typing. Sometimes I'll think it out, or even type out a whole screed until I run out of thoughts... then review it several times before hitting "reply".
By your logic, I'm no more advanced than any other LLM. I think there's a serious misunderstanding of the depth at which the internal state of the LLM is maintained across token outputs. It's just doing the same thing I do (and I suspect most other people do, which is to decide, then make up a convincing story that agrees with the decision, on a word-by-word basis).
Other times, when I'm trying to explain something technical or complex, there's a word, or a name, I can't remember... it drives me nuts; if I'm in a hurry, I'll just use a synonym that's almost as good and work around it. Yesterday, for example, it took me a while to remember the name Christopher Walken via the Fatboy Slim video on YouTube.
The only difference is we have the ability to edit it first, before the all-powerful "reply" button. Then, of course, there's edit... but that's like agentic LLMs.
What about the people who just want to have a pretty good idea of what the actual code is doing? Like, at a highish level, such as "reading some TypeScript and understanding roughly how React/Redux will execute it." Not assembly, not algorithms development, just nuts and bolts "how does data flow through this system." AI is great at making a good stab at common cases, but if you never sit down and poke through the actual code you are at best divining from shadows on the cave wall (yes, it's shadows all the way down, but AI is so leaky that it can't really be considered an abstraction).
Just the other day I had GPT-4o spit out extremely plausible-looking SQL that "faked" the join condition because two tables didn't have a simple foreign key relationship, so it wrote out:
select * from table_a
JOIN table_b ON table_b.name = 'foo'
Perfectly legal SQL that is quietly doing something entirely nonsensical. Nobody would intentionally write a JOIN ... ON clause like this, and in context an outer join made no sense at all, but buried in several CTEs it took a nonzero amount of time to spot.
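To spell out the problem: the ON clause never references table_a, so the condition doesn't correlate the two tables at all. As far as I can tell it's equivalent to a filtered cross join, roughly:
select *
from table_a
cross join table_b
where table_b.name = 'foo'
Every row of table_a gets paired with every 'foo' row of table_b, which is almost never what anyone actually means.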
That day will either happen or not in my lifetime. If it happens, no amount of interacting with shitty 2025 tools will prepare me for it, because as soon as any such tool actually understands what it's doing, none of the weird coping strategies people develop for the current generation of tools will be necessary or work anymore.
So I can just sit back, program as I always did, and wait for the day this stuff is actually more efficient at coding than I am. Then I'll switch to that tool, and no amount of prompt-massaging knowledge from yesteryear will help me with that.
And then I need to wait for the tool to solve what usually really holds me up in my work, namely guessing the best future design based on incomplete information in the present.
When that day comes, I will start to worry about my job. Not earlier.
First day on the job or what?
https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson%E2%80%9...
You know, I get it: earn those clicks. Spin that hype. Pump that valuation.
Now, go watch people on YouTube like Armin Ronacher (just search, you’ll find him), actually streaming their entire coding practice.
This is what expert LLM usage actually looks like.
People with six terminals running Claude are a lovely bedtime story, but please, if you're doing it, do me a favour and do some live streams of your awesomeness.
I’d really love to see it.
…but so far, the live coding sessions showing people doing this “everyday 50x engineer” practice don’t seem to exist, and that makes me a bit skeptical.
2. An AI ad generator is one of the worst possible uses of AI I can think of.
People who think this would work and want to make it happen walk among us.
That’s how Hershey Kisses are made.
I’ve always been more of a Lindt kind of person. Not top of the heap (around here, the current kick is “Dubai Chocolate,” with $20 chocolate bars), but better than average.
I try to move quickly, and not break anything. It does work, but it’s more effort, takes longer, and is more expensive (which is mainly why “move fast and break things” is so popular).
I’m looking forward to “Artisanal” agents, that create better stuff, but won’t have a free tier, and will require experienced drivers.
Apparently, hundreds of millions, maybe billions, of people like Hershey's chocolate (I believe there's a difference between the American version and the Asian/European version; all are bad, but the American is beyond sickly sweet and awful). Fine. I will try not to judge, but my god is Hershey's chocolate just awful. I wish I could share a proper dark with every one of those people and tell them to let it melt rather than chew it, to see how amazing chocolate can be (I make it by hand from Pingtung beans that I roast and shell myself, but you can get excellent "bean to bar" chocolate in every major city these days). I wish I could share a cup of proper cappuccino from beans roasted that day with everyone that gets a daily Starbucks. I wish I could share a glass of Taihu with everyone that ends the day slamming a couple Buds or Coors.
But, I guess because it's cheap, or easy, or just because it's what they're used to and now they actually like it, people choose what to me is so terrible as to be almost inedible.
I guess I'm like a spoiled aristocrat or something, scoffing at the peasants for their simple pleasures, but I used to be a broke student, and when my classmates were dropping $5 a meal on the garbage dining hall burgers, I was making simple one-pot paella-like dishes with like $1 of ingredients, because that tasted better and was healthier, so, I don't know.
Anyway, vibecoded apps probably are bad, but they're bad in the way a Hershey's bar is bad: bad enough to build a global empire powerful enough to rival a Pharaoh, so powerful that it convinces billions that their product is actually good.
Coming back to software - I believe the author is correct. We will be able to standardize prompts that can create secure deployments and performant applications. We will have agents that can monitor these and deal with 95% of the issues that happen. The other 5%, I have no clue. Most of what industry does today needs standardized architecture based on specs anyway. Human innovation via resume-driven design generally overcomplicates things.
To me it looks like a rather bleak outlook on the future, if we all are supposed to work like that.
3rd party dependence is out of convenience, not necessity.
The point is that with SOTA open-weight models, this isn't techno-feudalism, as some people here seem to claim.
My experience is always that there is a complexity threshold at which things start to take longer, not shorter, with the use of AI. This is not about one-off scripts or small programs. But when you have systems that touch a lot of context, different languages, and different parts of the stack, AI sucks at designing that code, except maybe for some advice or ideas. And even there it does not always get it right.
Give any AI an atypical problem and you will see it spit out big hallucinations. Take the easy ones and then, yes, you are faster, but those can be, and have been, done a million times. It is just a bounded (in complexity) accelerator for your typical, written-many-times code. Give it assembly to optimize, SIMD, or something with little documentation, and you will see how it performs. Badly.
It is the tool for one-off scripts, scaffolding and small apps. Beyond that, it falls short.
It is like a very fast start with a lot of tech debt added for later.
So don't give it atypical problems. Hammers are really bad at driving in screws.
Use the tools that are APPROPRIATE FOR THE SPECIFIC TASK. Even if that means we're not always using the shiny toy you're desperate to get metrics up on. You'd look like a putz going into a machine shop and yelling at the guy working a lathe "but we just bought this laser engraver, go use it now!" Why is it different for programmers or other creatives?
Maybe the AI vendors need to start broadening their products too-- sell a better lathe to go with the laser engraver.
There are a lot of problems where the developer already knows roughly what they want in terms of code. The labour savings are mostly "not having to type in 500 lines of code", and they're wasted if the developer instead has to spend the same amount of time crafting prompts and babysitting the LLM. I think there's a lot of potential in the "spicy autocomplete" flow for that use case-- start with a few key lines and let the system use pattern recognition and templates to quickly infer the other 450, all within the workflow they're already productive in.
AI just helps me with scaffolding and fast scripts. It does not help me at building better solutions.
It just does a better job when I do not know enough about the topic, and sometimes I find it drives me down the worst path.
For example, the other day I asked for "the fastest way to get a docker container with gcc15". It offered me Ubuntu plus building GCC from source via a checkout. I pointed out that waiting for it to compile would be slow (plus the problems if you do not compile with the right flags, etc.), so I suggested Guix. Then I had some problems with the Guix daemon and channels, etc.
After some fighting with Guix, I ended up asking: maybe there is a container with gcc15? And it did exist.
So I wasted more time than if I had just found a gcc15 container myself...
There were also multiple things that could be improved; for example, I had to proactively ask for a non-root container, which it did not give me.
So if you do not know exactly what you want, it kind of sucks and, more often than not, could do much better; when you do know what you are doing, just drive it a bit to scaffold and keep correcting it.
All in all: I do not see how a tool like this can replace an expert who is fitting everything into a system that needs integration, whose software needs long-term maintenance, etc. It is not as good as advertised, and anyway, when you ask for something complex, it can seem to be working at first, but there is a point where it starts to make mistakes and lose context. Now you have a bunch of code of which you understand only a small part, and you have to maintain it.
I am not sure this is good for software development at all, in the sense that you are going to pay for it in later phases of the project. It is not like you cannot do anything; it is just that the whole discipline must take into account context, maintenance, and the human cognitive part (because the human must keep understanding what she is doing to remain effective).
I have also seen it spit out a lot, and I mean a lot, of unnecessary additional code that just adds a ton of noise and that a professional programmer would make much simpler (and hence more maintainable).
I still use it every day for things it does well: a bit of scaffolding here and there, a typical algorithm that I can clean up (depending on what I am doing), asking why something does not work so I can spot a bug, or one-off scripts. For giving me full solutions... no, it actually sucks at that, in a way that is not obvious to people who have not worked with it for some time and do not know the potential problems you will find down the road if you abuse it.
It is also quite OK at code review for individual pieces of code.
One of my dad's anecdotes was of someone who was very proud of the fact that they could multiply numbers with a slide rule faster than any of the then-newfangled electronic hand calculators.
A lot of stuff changed between him being born in 1939 and when he took early retirement in the late 90s.
Kinda weird that it's possible he might be one of the first generation of programmers and I might be one of the last.
Such rapid change.
I am 100% willing to admit that the guy was cool and had good reason to be proud of that.
I really think AI is tremendously over-hyped, and AGI is just a sales pitch that makes money for the people who believe it is even possible.
These tools are probabilistic parrots, and the proof is that when you give them something for which not much documentation exists, they start to hallucinate a lot.
This transition really feels like that. If the metaphor holds (and who knows if it will).
1: the transition will take longer than people expect. I was programming assembler well into the 90's. AI is at the level compilers were in the 50's, where pretty much everybody had to understand assembler.
2: the ability to understand code rather than the spec documents AI works from is valuable and required, but it will be needed from far fewer people than most of us expect. Coding experience helps make better spec sheets, but the other skills the original post espouses are also valuable, if not more so. And many of us have let those skills atrophy.
[1] 10 years is questionable. Is being paid $100 for a video game with ~100 hours of work put into it professional work? I have about 3 years of work doing assembly for an actual salary.
For me, Claude creates plenty of bugs and hallucinations with just one.
Anything besides extremely simple things or extremely overprompted commands comes out 100% broken and wrong.
It is loosely based on reality, but not in line with it.
Assuming everything else the author believes is true, the real camps are "money" and "less money". Those camps already determine the success of businesses to a large extent. But especially in SWE, where we traditionally cared less about degrees and more about skill, it's a new thing that "skill" and "experience" are directly cash-related, something you can buy and outsource.
Looking for work and need a better github portfolio? Just up your Claude spend. Find yourself needing a promotion at work and in possession of some disposable income? Just pay out of pocket for the AI instead of expecting overtime from your employer or working nights and weekends, because you know you'll make up the difference when you're in charge of your department.
There is some historical precedent for this sort of thing; just read up on the buying and selling of army commissions. That worked about as well as you might expect: when "expertise" is purchased like this, it turns out that the officers you get are incompetent, and they mostly just fed soldiers into a meat grinder. https://en.wikipedia.org/wiki/Purchase_of_commissions_in_the...
Honestly, if you have to ask then you'll probably not understand the answer, but here are some related questions to ponder. What's the problem with having as much money in politics as possible? What's the problem with eliminating all leadership with relevant domain expertise and replacing it with people who know how to "play the game"? What's the problem with class-based societies in general? What's the problem with ignoring all fundamentals, denying that expertise can even exist, and just fully embracing superficial optics everywhere? We've been in the fuck-around phase for a while now, but we're moving closer to the find-out phase.
> Are we just invoking the spectre of fallen comrades in battle as an emotional plea?
No. The emotional plea would be that IT and SWE actually created upward mobility for a lot of talented people who otherwise would not have been able to buy their way into the American middle class without, for example, joining the army to risk death for the benefit of elites. It will be sad to see backwards movement on that for sure, but we don't even need to invoke this argument.
The more rational argument is simply that meritocracy works better than classism. Even if you're fine with feeding people into the meat grinder on the off chance you get some personal glory, it's not just bad for the victims, it's bad for general morale, the army, the country involved, etc. Substitute these words with money/markets/shareholders/industry or whatever if it helps you to understand.
or people running mechanical calculators
or people doing math by hand
or people doing math without zeros and the decimal point.
Yes, it's sad that a skill might be subsumed into the technology stack. But do any of us miss having to really, REALLY understand what's involved in creating/sending IP packets across ethernet, or WiFi?
Sure, the tools are unreliable, but they'll get better over time. There will still be people trying to eke out the last bit of performance, or get rid of another byte of code, and the demo scene will live forever, in some fashion. It just won't be a work requirement any more.
We're the accountants manually recomputing spreadsheets on paper. We call ourselves "Software Engineers"; well, now it's time to actually Engineer Software.
On a related note, my wife today suggested we buy a replacement Ninja toaster oven for just half the price I'd seen anywhere, from a website odd~name.shop that I'd never heard of... the site looked normal, even slick, but a little research turned up that the domain didn't exist a month ago.
Now perhaps this is a new business that failed to mention that fact, instead of an AI generated scam website, but I could not be certain without more effort than I wished to exert.
And this is a simple example of my worries about OP's line of thinking--I fear that AI will increasingly bulldoze us past our cognitive capacity to function normally.
This is one of the main reasons I got out, and AI is just making it worse by using ambiguous language to describe a solution.
Unlike the free market, I have no interest in contributing to the vast pile of shit software that already exists.
Another tool in the box. import pdb is still my way.
At least this is good material for imagining some funny future sci-fi scenarios, like compiler developers optimizing for AI-generated code, similarly to how hardware developers sometimes optimize for the output of some dominant compiler. Or, in the far future, anthropologists discovering dead programming languages inside long-untouched AI generative pipelines and trying to decipher them :)
Programming is (to a rough approximation) turning fuzzy specs from humans into well crafted solutions that solve a problem. Sometimes using code.
We are in a new era of computing where probabilistic AND deterministic machines can be built and put to use by humans.
That’s super exciting!
And there it is: inevitable. The whole article is written in a pseudo-religious manner, probably with the help of "AI" to collate all known talking points.
I think the author is not working on anything that matters. His company is one of a million similar companies that ride the hype wave.
What matters is real software written before 2023 without "AI", which is now stolen and repackaged.
Anytime I think the AI bubble can't go any higher, I'm reminded of the fact that there are people who genuinely believe this. These are the people boardrooms are listening to, despite the evidence for actual developer productivity savings being dubious at best: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
What happens when the money goes away and they realize they've been duped into joining a cult?
And who's gonna build the new stuff and not just spit out interpretations based on the already working examples? Man, the AI promoters are something.
Also, it fails to iterate on complex features. If you are just creating CRUDs, it may work. But even in CRUD scenarios I've seen it completely lose context, use wrong values, and break things in ways that are hard to track down or fix.
I'm surprised people work with it and say those things. Are they really using the same tool I use?
I'm sure the problem isn't my prompting, because I've tried watching many videos of people doing the same thing, and I see the same issues I've described.
HA HA HA HA HA HA HA HA HA HA HA HA
omg, thanks for the laugh - "bug-free quality in 2-5 years" pfffffft I'm not holding my breath - rather, I think that by then, the hype will have finally lost some steam as companies crash and burn with their shitty, "almost working" codebases.
Are some of us not, already? I sure have been in roles where I felt like it, writing glue to move things from one AWS service to another.
A perhaps bigger concern is how flimsy the industry itself is. When investors start asking where their returns are at, it's not going to be pretty. The likes of OpenAI and Anthropic are deep in the red, absolutely hemorrhaging money, and they're especially exposed since a big part of their income is from API deals with VC-funded startups that in turn also have scarlet balance sheets.
Unless we have another miraculous breakthrough that makes these models drastically cheaper to train and operate, or we see massive increases in adoption from people willing to accept significantly higher subscription fees, I just don't see how this is going to end the way the AI optimists think it will.
We're likely looking at something similar to the dot com bubble. It's not that the technology isn't viable or that it's not going to make big waves eventually, it's just that the world needs to catch up to it. Everything people were dreaming of during the dot com bubble did eventually come true, just 15 years later when the logistics had caught up, smartphones had been invented, and the web wasn't just for nerds anymore.
I guess the argument of AI optimists is that these breakthroughs are likely to happen given the recent history. Deep learning was rediscovered like, what, 15 years ago? "Attention is all you need" is 8 years old. So it's easy to assume that something is boiling deep down that will show impressive results 5-10 years down the line.
We have no idea how many of them we need before AGI, or at least before replacing software engineers, though.