This has been my experience almost every time I use AI: superficially it seems fine, once I go to extend the code I realize it's a disaster and I have to clean it up.
The problem with "code is cheap" is that, it's not. GENERATING code is now cheap (while the LLMs are subsidized by endless VC dollars, anyway), but the cost of owning that code is not. Every line of code is a liability, and generating thousands of lines a day is like running up a few thousand dollars of debt on a credit card thinking you're getting free stuff and then being surprised when it gets declined.
It's one of those things that has always struck me as funny about programming: less usually really is more.
For a one-off, this is fine. For anything maintainable, that needs to survive the realities of time, this is truly terrible.
Related: my friend works in a performance-critical space. He can't use abstractions, because the direct, bare-metal, "exact fit" implementation will perform best. They can't really add features, because it'll throw the timing of other things off too much, so they usually have to re-architect. But that's the reality of their problem space.
To me, abstraction is an encapsulation of some concept. I can't understand how they're practically different, unless you encapsulate true nonsense, without purpose or resulting meaning, which I can't think of an example of, since humans tend to categorize/name everything. I'm dumb.
Historically pure machine code with jumps etc lacked any form of encapsulation as any data can be accessed and updated by anything.
However, you would still use abstractions. If you pretend the train is actually going 80.2 MPH instead of somewhere between 80.1573 MPH and 80.2485 MPH, which you got from different sensors, you don't need to do every calculation that follows twice.
> In software, an abstraction provides access while hiding details that otherwise might make access more challenging
I read this as "an encapsulation of a concept". In software, I think it can be simplified to "named lists of operations".
> Historically pure machine code with jumps etc lacked any form of encapsulation as any data can be accessed and updated by anything.
Not practically, by any stretch of the imagination. And, if the intent is to write silly code, modern languages don't really change much; it's just that the named lists of operations will be longer.
You would use calls and returns (or just jumps if not supported), and then name and reference the resulting subroutine in your assembler or with a comment (so you could reference it as "call 0x23423 // multiply R1 and R2"), to encapsulate the concept. If those weren't supported, you would use named macros [2]. Your assembler would use named operations, sometimes expanding to multiple opcodes, with each opcode having a conceptually relevant name in the manual, which abstracted a logic circuit made with named logic gates, consisting of named switches, that shuffled around named charge carriers. Even if your code just did a few operations, the named abstraction for the list of operations (which is what all these things are) would be "blink_light.asm".
> If you pretend the train is actually going 80.2 MPH instead of somewhere between 80.1573 MPH and 80.2485 MPH, which you got from different sensors, you don't need to do every calculation that follows twice.
I don't see this as an abstraction as much as a simple engineering compromise (of accuracy) dictated by constraint (CPU time/solenoid wear/whatever), because you're not hiding complexity as much as ignoring it.
I see what you're saying, and you're probably right, but I see the concepts as equivalent. I see an abstraction as a functional encapsulation of a concept. An encapsulation, if not nonsense, will be some meaningful abstraction (or a renaming of one).
I'm genuinely interested in an example of an encapsulation that isn't an abstraction, and an abstraction that isn't a conceptual encapsulation, to right my perspective! I can't think of any.
[1] https://en.wikipedia.org/wiki/Abstraction_(computer_science)
[2] https://www.tutorialspoint.com/assembly_programming/assembly...
Incorrect definition = incorrect interpretation. I edited this a few times, but the separation is that you can use an abstraction even if you maintain access to the implementation details.
> assembler
You are thinking of assembly language which is a different thing. Initially there was no assembler, someone had to write one. In the beginning every line of code had direct access to all memory, in part because limiting access required extra engineering.
Though even machine code itself is an abstraction across a great number of implementation details.
> I don't see this as an abstraction as much as a simple engineering compromise (of accuracy) dictated by constraint (CPU time/solenoid wear/whatever), because you're not hiding complexity as much as ignoring it.
If it makes you feel better, consider the same situation with 5 sensors, X of which have failed. The point is you don't need to consider all information at every stage of a process. Instead of all the underlying details, you can write code that asks: do we have enough information to get a sufficiently accurate speed? What is it?
It doesn’t matter if the code could still look at the raw sensor data, you the programmer prefer the abstraction so it persists even without anything beyond yourself enforcing it.
IE: “hiding details that otherwise might make access more challenging”
You can use TCP/IP or anything else as an abstraction even if you maintain access to the lower level implementation details.
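To make that concrete, here is a minimal sketch in Python (entirely hypothetical names and values, just to illustrate the idea): downstream code asks "do we have enough data?" and "what is the speed?", while the raw sensor readings remain accessible underneath.

```python
# Hypothetical sketch: "speed" as an abstraction over raw sensor readings.
# Callers ask two questions -- do we have enough data, and what is the speed? --
# without caring which sensors failed or what the raw values were.

def usable_readings(readings_mph):
    """Filter out sensors that reported nothing (failed sensors)."""
    return [r for r in readings_mph if r is not None]

def have_sufficient_data(readings_mph, minimum=3):
    """Do we have enough working sensors for a trustworthy estimate?"""
    return len(usable_readings(readings_mph)) >= minimum

def estimated_speed(readings_mph):
    """Collapse the raw readings into one number the rest of the code uses."""
    usable = usable_readings(readings_mph)
    if not usable:
        raise ValueError("no working sensors")
    return sum(usable) / len(usable)

# The raw data is still accessible, but downstream code prefers the abstraction:
raw = [80.1573, 80.2485, None, 80.1990, 80.2101]   # None = failed sensor
if have_sufficient_data(raw):
    print(f"speed is roughly {estimated_speed(raw):.1f} MPH")  # ~80.2 MPH
```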
> You are thinking of assembly language which is a different thing. Initially there was no assembler, someone had to write one.
This is why I specifically mention opcodes. I've actually written assemblers! And...there's not much to them. It's mostly just replacing the names given to the opcodes in the datasheet back to the opcodes, with a few human niceties. ;)
> consider the same situation with 5 sensors, X of which have failed
Ohhhhhhhh, ok. I kind of see. Unfortunately, I don't see the difference between abstraction and encapsulation here. I see the abstraction (speed) as being the encapsulation of a set of sensors, ignoring irrelevant values.
I feel like I'm almost there. I may have edited my previous comment after you replied. My "no procrastination" setting kicked in, and I couldn't see.
I don't see how "The former is about semantic levels, the later about information hiding." are different. In my mind, semantic levels exist as compression and encapsulation of information. If you're saying encapsulation means "black box" then that could make sense to me, but "inaccessible" isn't part of the definition, just "containment".
Like the other day, I gave it a bunch of use cases to write tests for; the use cases were correct, the code was not. It saw one of the tests was broken, so it sought to rewrite the test. You risk suboptimal results when an agent is dictating its own success criteria.
At one point I did try to use separate Claude instances to write tests, then I'd get the other instance to write the implementation unaware of the tests. But it's a bit too much setup.
Get two other, different, LLMs to thoroughly review the code. If you don’t have an automated way to do all of this, you will struggle and eventually put yourself out of a job.
If you do use this approach, you will get code that is better than what most software devs put out. And that gives you a good base to work with if you need to add polish to it.
SHOW AN EXAMPLE OF YOU ACTUALLY DOING WHAT YOU SAY!
by level of compute spend, it might look like:
- ask an LLM in the same query/thread to write code AND tests (not good)
- ask the LLM in different threads (meh)
- ask the LLM in a separate thread to critique said tests (too brittle, testing guidelines, testing implementation and not behavior, etc.). Fix those. (decent)
- ask the LLM to spawn multiple agents to review the code and tests. Fix those. Spawn agents to critique again. Fix again.
- Do the same as above, but spawn agents from different families (so Claude calls Gemini and Codex).
—-
these are usually set up as /slash commands like /tests or /review so you aren’t doing this manually. since this can take some time, people might work on multiple features at once.
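As a rough sketch of what the later rungs of that ladder look like when scripted (assuming a hypothetical llm(model, prompt) wrapper; the model names and prompts are placeholders, not any real API):

```python
# Hypothetical orchestration of the "write, then critique with separate
# contexts / different model families" workflow described above.
# llm(model, prompt) is a stand-in for whatever API or CLI you actually use.

def llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def build_feature(spec: str, rounds: int = 2) -> tuple[str, str]:
    # Separate contexts: the test writer never sees the implementation,
    # and vice versa, so the tests aren't shaped to excuse the code.
    tests = llm("model-a", f"Write tests for this spec only:\n{spec}")
    code = llm("model-a", f"Implement this spec only:\n{spec}")

    for _ in range(rounds):
        # Critique with different "reviewers", ideally from other model families.
        test_review = llm("model-b", f"Critique these tests for brittleness and "
                                     f"implementation-coupling:\n{tests}")
        code_review = llm("model-c", f"Review this code against the spec:\n{spec}\n{code}")
        tests = llm("model-a", f"Fix the tests per this review:\n{test_review}\n{tests}")
        code = llm("model-a", f"Fix the code per this review:\n{code_review}\n{code}")

    return code, tests
```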
But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.
I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.
The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".
So yes, our jobs are changing rapidly, but this doesn't strike me as being obsolete any time soon.
1. Declare in advance that AI is being used.
2. Provide verbatim the question-and-answer session.
3. Explain why the answer given by the AI is a good answer.
Part of the grade will include grading 1, 2, and 3.
Fair enough.
AI can explain the underlying process of manual computation and help you learn it. You can ask it questions when you're confused, and it will keep explaining no matter how off the topic you go.
We don't consider tutoring bad for learning - quite the contrary, we tutor slower students to help them catch up, and advanced students to help them fulfill their potential.
If we use AI as if it were an automated, tireless tutor, it may change learning for the better. Not that it was anywhere near great as it was.
If we can do more now in a shorter time then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be when doing their job later.
This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do is say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong, but simply wasn't.
It's slop all the way down.
Despite whatever nasty business practices and shitty UX Windows has foisted on the world, there is no denying the tremendous value that it has brought, including impressive backwards compatibility that rivals some of the best platforms in computing history.
AI shovelware pump-n-dump is an entirely different short term game that will never get anywhere near Microsoft levels of success. It's more like the fly-by-nights in the dotcom bubble that crashed and burned without having achieved anything except a large investment.
Ask a robot arm "how should we improve our car design this year", it'll certainly get stuck. Ask an AI, it'll give you a real opinion that's at least on par with a human's opinion. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then theoretically this definitely can work.
This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from II.
Y is causing Z and we should fix that. But if we stop and study the problem, we might discover that X causes the class of Y problem so we can fix the entire class, not just the instance. And perhaps W causes the class of X issue. I find my job more and more being about how far up this causality tree can I reason, how confident am I about my findings, and how far up does it make business sense to address right now, later, or ever?
Over the last 16 years, Red Bull has won 8 times, Mercedes 7 times and McLaren 1. Which means, regardless of the change in tracks and conditions, the winners are usually the same.
So either every other team sucks at "understanding the requirements and the technical challenges" on a clinical basis or the metaphor doesn't make a lot of sense.
But my whole point was that race to race, it really isn't that much different for the teams as the comment implied and I am still kind of lost how it fits to SWE unless you're really stretching things.
Even then, most teams don't even make their own engines etc.
You've got the big bet to design the car between seasons (which is kind of like the big architectural decisions you make at the beginning of the project). Then you've got the refinement over the season, which is like bug fixes and performance tweaks. There are the parts upgrades, which are like small features added on top of the initial software.
For the next season, you either improve on the design or start from scratch depending on what you've learned. In the first case, it is the new version of the software. In the second, that's the big refactor.
I remember that the reserve drivers may do a lot of simulations to provide data to the engineers.
Also, in software we can do big refactors. F1 teams are restricted to the version they've put in the first race. But we do have a lot of projects that were designed well enough that they've never changed the initial version, just built on top of it.
Uh, it's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families.
Yeah, I'm aware of the lump of labour fallacy.
It wanders off the point, as if I responded with, "that's also not the issue. The issue is that people need jobs to eat."
This is simply not how expert programmers work. Programming is planning, and programming languages are surprisingly good planning tools. But, of course, becoming an expert is hard and requires not only some general aptitude but also substantial time, experience and effort.
My theory is that this is a source of diverging views on LLMs for programming: people who see programming languages as tools for thought compared to people who see programming languages as, exclusively, tools to make computers do stuff. It's no surprise that the former would see more value in programming qua programming, while the latter are happy to sweep code under the rug.
The fundamental problem, of course, is that anything worth doing in code is going to involve pinning down a massive amount of small details. Programming languages are formal systems with nice (well, somewhat nice) UX, and formal systems are great for, well, specifying details. Human text is not.
Then again, while there might be a lot of details, there are also a lot of applications where the details barely matter. So why not let a black box make all the little decisions?
The question boils down to where you want to spend more energy: developing and articulating a good conceptual model up-front, or debugging a messy system later on. And here, too, I've found programmers fall into two distinct camps, probably for the same reasons they differ in their views on LLMs.
In principle, LLM capabilities could be reasonably well-suited to the up-front thinking-oriented programming paradigm. But, in practice, none of the tools or approaches popular today—certainly none of the ones I'm familiar with—are oriented in that direction. We have a real tooling gap.
i'd postulate this: most people see llms as tools for thought. programmers also see llms as tools for programming. some programmers, right now, are getting very good at both, and are binding the two together.
But my biggest objection to this "engineering is over" take is one that I don't see much. Maybe this is just my Big Tech glasses, but I feel like for a large, mature product, if you break down the time and effort required to bring a change to production, the actual writing of code is like... ten, maybe twenty percent of it?
Sure, you can bring "agents" to bear on other parts of the process to some degree or another. But their value to the design and specification process, or to live experiment, analysis, and iteration, is just dramatically less than in the coding process (which is already overstated). And that's without even getting into communication and coordination across the company, which is typically the real limiting factor, and in which heavy LLM usage almost exclusively makes things worse.
Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
To be clear, I think these tools absolutely have a place, and I use them where appropriate and often get value out of them. They're part of the field for good, no question. But this take that it's a replacement for engineering, rather than an engineering power tool, consistently feels like it's coming from a perspective that has never worked on supporting a real product with real users.
They didn't say that software engineering is over - they said:
> Software development, as it has been done for decades, is over.
You argue that writing code is 10-20% of the craft. That's the point they are making too! They're framing the rest of it as the "talking", which is now even more important than it was before thanks to the writing-the-code bit being so much cheaper.
Simon, I guess vb-8558's comment in here is something really nice (definitely worth a read); they mention how much coding has changed from, say, 1995 to 2005 to 2015 to 2025.
Directly copying a line from their comment here: For sure, we are going through some big changes, but there is no "as it has been done for decades".
Recently Economic Media made a relevant video about all of this too: How Replacing Developers With AI is Going Horribly Wrong [https://www.youtube.com/watch?v=ts0nH_pSAdM]
My point(?) is that this pure "code is cheap, show me the talk" mentality is weird/net negative (even if I may talk more than I code), simply because code and coding practices are something I can learn and hone over my experience, whereas the talk itself reads to me as non-engineers trying to create software, and that's all great, but without really understanding the limitations (that still exist).
So the point I am trying to make is that when the OP mentioned code is 10-20% of the craft, I feel they didn't mean the rest is talk. They meant all the rest is architectural decisions and everything surrounding the code. Quite frankly, the idea behind AI/LLMs is to automate that too and convert it into pure text, and I feel like the average layman significantly overestimates what AI can and cannot do.
So the whole notion of "show me the talk", at least as more people from non-engineering backgrounds try it, might be net negative, with people not really understanding the tech as it is; quite frankly, even engineers are having a hard time catching up with all that is happening.
I do feel like the AI industry just has too many words floating around right now. To be honest, I don't want to talk right now; let me use the tool, see how it goes, and have a moment of silence. The whole industry is moving faster than in the a-new-JS-framework-every-day days.
To have a catchy end to my comment: There is just too much talk nowadays. Show me the trust.
I do feel like information has become saturated and we are transitioning from the "information" age to "trust" age. Human connections between businesses and elsewhere matter the most right now more than ever. I wish to support projects which are sustainable and fair driven by passion & then I might be okay with AI use case imo.
Like Linus’ observation still stands. Show me that the code you provided does exactly what you think it should. It’s easy to prompt a few lines into an LLM, it’s another thing to know exactly the way to safely and effectively change low level code.
Liz Fong-Jones told a story on LinkedIn about this at Honeycomb: she got called out for dropping a bad set of PRs in a repo, because she didn't really think about the way the change was presented.
You're absolutely right about coding being less than 20% of the overall effort. In my experience, 10% is closer to the median. This will get reconciled as companies apply LLMs and track the ROI. Over a single year the argument can be made that "We're still learning how to leverage it." Over multiple years the 100x increase in productivity claims will be busted.
We're still on the upslope of Gartner's hype cycle. I'm curious to see how rapidly we descend into the Trough of Disillusionment.
What happened in the middle was I didn’t know what I wanted. I hadn’t worked out the right data model for the application yet, so I couldn’t tell Claude what to do. And if you tell it to go ahead and write more code at that point, very bad things will start to happen.
The underlying point is just that while it was very cognitively expensive to back up a good design with good code back in 2000, it's much cheaper now. And therefore, making sure the design is good is the more important part. That's it really.
Personally I don’t see it happening. This is the bitter reality the LLM producers have to face at some point.
I...don't think this is true at all. "The design of the car is more important than what specific material you use" does not mean that the material is unimportant, just that it is *relatively* less important. To put a fake number on it, maybe 10% less important.
I think people who have domain knowledge and good coding skills will probably benefit the most from this LLM producer stuff.
> One can no longer know whether such a repository was “vibe” coded by a non-technical person who has never written a single line of code, or an experienced developer, who may or may not have used LLM assistance.
I am talking about what it means to invert that phrase.
In software development, code is in a real sense less important than the understanding and models that developers carry around in their heads. The code is, to use an unflattering metaphor, a kind of excrement of the process. It means nothing without a human interpreter, even if it has operational value. The model is never part of the implementation, because software apart from human observers is a purely syntactic construct, at best (even there, I would argue it isn't even that, as syntax belongs to the mind/language).
This has consequences for LLM use.
I'm pretty sure the way I was doing things in 2005 was completely different compared to 2015. Same for 2015 and 2025. I'm not old enough to know how they were doing things in 1995, but I'm pretty sure they were very different compared to 2005.
For sure, we are going through some big changes, but there is no "as it has been done for decades".
Agile has completely changed things, for better or for worse.
Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.
I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.
Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job drive-by Lisp person.
I think a lot of people in the industry forget just how much change has come from 30 years of incremental progress.
They always say that, and they are saying it again.
All to basically mimic what curses can do very easily.
Also credit where credit is due. Origin of this punchline:
https://nitter.net/jason_young1231/status/193518070341689789...
https://programmerhumor.io/ai-memes/code-is-cheap-show-me-th...
But actually good code, with a consistent global model for what is going on, still won't come from Opus 4.5 or a Markdown plan. It still comes from a human fighting entropy.
Getting eyes on the code still matters, whether it's plain old AI slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart, and it does its best.
But I've tried seriously using a number of high-profile, vibe-coded projects in the last few weeks. And good grief what unbelievable piles of shit most of them are. I spend 5% of the time using the vibe-coded tool, and 95% of the time trying to uncorrupt my data. I spend plenty of time having Opus try to look at the source to figure out what went wrong in 200,000 lines of vibe-coded Go. And even Opus is like, "This never worked! It's broken! You see, there's a race condition in the daemonization code that causes the daemon to auto-kill itself!"
And at that point, I stop caring. If someone can't be bothered to even read the code Opus generates, I can't be bothered to debug their awful software.
I'm sorry, but this is an indicator for me that the author hasn't had a critical eye for quality in some time. There is massive overlap between "bad" and "functional." More than ever. The barrier-to-entry to programming got irresponsibly low for a time there, and it's going to get worse. The toolchains are not in a good way. Windows and macOS are degrading both in performance and usability, LLVM still takes 90% of a compiler's CPU time in unoptimized builds, Notepad has AI (and crashes,) simple social (mobile) apps are >300 MB download/installs when eight years ago they were hovering around a tenth of that, a site like Reddit only works on hardware which is only "cheap" in the top 3 GDP nations in the world... The list goes on. Whatever we're doing, it is not scaling.
I'd think there'll be a dip in code quality (compared to human) initially, due to the immaturity of the "AI machinery". But over time, on a mass scale, we are going to see an improvement in the quality of software artifacts.
It is easier to 'discipline' the top 5 AI agents on the planet than to try to get a million distributed devs ("artisans") to produce high quality results.
It's like in the clothing or manufacturing industry, I think. Artisans were able to produce better individual results than the average industry machinery, at least initially. But over time, industry machinery could match the average artisan or even beat the average, while decisively beating it on scale, speed, energy efficiency and so on.
I see this type error of thinking all the time. Engineers don't make objects of type A, we make functions of type A -> B or higher order.
Once you look at present engineering org compositions, you'll see the error in thinking.
There are other analogy issues in your response which I won't nitpick
Much of contemporary software is functionally equivalent but more expensive to run and produce than previous generations. Chat, project management, document editing, online stores… all seem to have gotten more expensive to produce and run with little to no gain in functionality.
Complexity in software production and tooling keeps increasing, yet functionally software is more or less the same as 20 years ago (obv. excluding advancements depending on hardware, like video, 3D rendering, LLMs, etc.).
To highlight the gaps in your analogy: machinery still fails to match artisan clothing-makers. Despite being relatively fit, I've got wide hips. I cannot buy denim jeans that fit both my legs _and_ my waist. I either roll the legs up or have them hemmed. I am not all that odd, either. One size cannot fit all.
Whether it could is distinct from whether it will. I'm sure you've noticed the decline in the quality of clothing. Markets are mercurial and subject to manipulation through hype (fast fashion is just a marketing scheme to generate revenue, but people bought into the lie).
With code, you have a complicating factor, namely, that LLMs are now consuming their own shit. As LLM use increases, the percentage of code that is generated vs. written by people will increase. That risks creating an echo chamber of sorts.
Quick check: do you want to go back to the pre-industrial era then, when, according to you, you had better options for clothing?
Personally, I wouldn't want that - because I believe as a customer, I am better served now (cost/benefit wise) than then.
As to the point about recursive quality decline - I don't take it seriously, I believe in human ingenuity, and believe humans will overcome these obstacles and over time deliver higher quality results at bigger scale/lower costs/faster time cycles.
This does not follow. Fast fashion as described is historically recent. As an example, I have a cheap t-shirt from the mid-90s that is in excellent condition after three decades of use. Now, I buy a t-shirt in the same price range, and it begins to fall apart in less than a year. This decline in the quality of clothing is well known and documented, and it is incredibly wasteful.
The point is that this development is the product of consumerist cultural presuppositions that construct a particular valuation that encourages such behavior, especially one that fetishizes novelty for its own sake. In the absence of such a valuation, industry would take a different direction and behave differently. Companies, of course, promote fast fashion, because it means higher sales.
Things are not guaranteed to become better. This is the fallacy of progress, the notion that the state of the world at t+1 must be better than it was at t. At the very least, it demands an account of what constitutes "better".
> I don't take it seriously, I believe in human ingenuity, and believe humans will overcome these obstacles
That's great, but that's not an argument, only a sentiment.
I also didn't say we'll experience necessarily a decline, only that LLMs are now trained on data produced by human beings. That means the substance and content is entirely derived from patterns produced by us, hence the appearance of intelligence in the results it produces. LLMs merely operate over statistical distributions in that data. If LLMs reduce the amount of content made by human beings, then training on the generated data is circular. "Ingenuity" cannot squeeze blood out of a stone. Something cannot come from nothing. I didn't say there can't be this something, but there does need to be a something from which an LLM or whatever can benefit.
> it is easier to 'discipline' the top 5 AI agents in the planet - rather than try to get a million distributed devs ("artisans") to produce high quality results.
Your take essentially is "let's live in a shoe box, packaging pipelines produce them cheaply en masse, who needs slow poke construction engineers and architects anymore"
What the role of an engineer is in the new context, I am not speculating on.
No it's not, your whole premise is invalid both in terms of financing the effort and in the AI's ability to improve beyond RNG+parroting. The AI code agents produce shoe boxes, your claim is that they can be improved to produce buildings instead. It won't happen, not until you get rid of the "temperature" (newspeak for RNG) and replace it with conceptual cognition.
Elegantly, agents finally give us an objective measure of what "good" code is. It's code that maximizes the likelihood that future agents will be able to successfully solve problems in this codebase. If code is "bad" it makes future problems harder.
An analogous argument was made in the 90's to advocate for the rising desire for IDEs and OOP languages. "Bad" code came to be seen as 1000+ lines in one file because you could simply conjure up the documentation out-of-context, and so separation of concerns slipped all the way from "one function one purpose" to something not far from "one function one file."
I don't say this as pure refusal, but to beg the question of what we lose when we make these values-changes. At this time, we do not know. We are meekly accepting a new mental prosthesis with insufficient foresight of the consequences.
After using AI to code, I came to the same conclusion myself. Interns and juniors are fully cooked:
- Companies will replace them with AI, telling seniors to use AI instead of juniors
- As a junior, AI is a click away, so why would you spend sleepless nights painstakingly acquiring those fundamentals?
Their only hope is to use AI to accelerate their own _learning_, not their performance. Performance will come after the learning phase.
If you're young, use AI as a personal TA, don't use it to write the code for you.
Also, that projects page on his website is atrocious; hate to be "that guy", but I don't trust the author's insight, since "personal projects" seems to include a lot more than just his work; the first several PRs I looked at were all vibed.
I'm not interested in re-implementations of the same wheel over and over again telling me and people who know how to write real software (have been doing it since I was 12) that we are becoming unnecessary bc you can bully an extremely complex machine built on a base theory of heuristics abstracted out endlessly (perceptually) to re-invent the same specs in slightly different flavors.
> 100% human written, including emdashes.
Sigh. If you can't write without emdashes, maybe you spend too much time with LLMs and not enough time reading and learning on your own. Also people can lie on the Internet, they do it all the time, and if not then I'm doing it right now.
The hubris on display is fascinating.
Yes and no. Code is not art, but software is art.
What is art, then? Not something that's "beautiful", as beauty is of course mostly subjective. Not even something that works well.
I think art is a thing that was made with great care.
It doesn't matter if some piece of software was vibe-coded in part or in full, if it was edited, tested, retried enough times for its maker to consider it "perfect". Trash is something that's done in a careless way.
If you truly love and use what you made, it's likely someone else will. If not, well... why would anyone?
1. To maintain it (to refactor or extend it).
2. To test it.
3. To debug it (to detect and fix flaws in it).
4. To learn (to get better by absorbing how the pros do it).
5. To verify and improve it (code review, pair programming).
6. To grade it (because a student wrote it).
7. To enjoy its beauty.
These are all I can think of right now, and they are ordered from most common to most rare case.
Personally, I have certainly read and re-read SICP code to enjoy its beauty (7), perhaps mixed in with a desire to learn (4) how to write equally beautiful code.
How much of this is mass financial engineering rather than real value? I'm reading a lot of nudges about how everyone should have Google or other AI stock in their portfolio/retirement accounts.
Things have changed drastically now; engineers with these tools (like Claude Code) have become unstoppable.
At least for me, I have been able to contribute to codebases I was unfamiliar with, even with different tech stacks. No, I am not talking about generating AI slop, but I have been enabled to write principal-engineer-level code unlike before.
So I don't agree with the above statement; it's actually generating real value, and I have become valuable because of the tools available to me.
I've just spent some time with Opus to extend basic metrics in a static language, with guides, where to look, what set of metrics etc, it's making quite a few mistakes for not a hard task...
I've spent the last week unwinding the slop of a coworker who said the same thing.
The whole narrative of "inevitability" is the stock behavior of tech companies who want to push a product onto the public. Why fight the inevitable? All you can do is accept and adapt.
And given how many companies ask vendors whether their product "has AI" without having the slightest inkling of what that even means or whether it even makes sense, as if it were some kind of magical fairy dust - yeah, the stench of hype is thick enough you could cut it with a knife.
Of course, that doesn't mean it lacks all utility.
However I think - Nadh, ronacher, the redis bro - these are people who can be trusted. I find Nadh's article (OP) quite balanced.
I think he’s soured a bit on the 10x claim echoing many of the quality concerns expressed by others in this thread: https://lucumr.pocoo.org/2026/1/18/agent-psychosis/
Yeah...now that prompt injection is a fact of life and basically unsolvable - we can't really afford this luxury anymore.
Put the agent on the wheel and observe it as it tries ruthlessly to pass the test. These days, likely it will manage to pass the tests after 3-5 loops, which I find fascinating.
Close the loop, and try an LLM. You will be surprised.
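A minimal sketch of that closed loop, assuming a pytest-based project and a hypothetical ask_llm helper that edits files in the working tree; the tests stay human-written so the agent isn't grading itself:

```python
# Hypothetical "agent on the wheel" loop: run the real test suite, feed the
# failures back to the model, and let it retry. The tests themselves stay
# human-owned so the agent can't redefine its own success criteria.
import subprocess

def ask_llm(prompt: str) -> None:
    """Stand-in for your LLM client; expected to edit files in the working tree."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    """Run pytest and return (passed, combined output)."""
    result = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def close_the_loop(max_attempts: int = 5) -> bool:
    for attempt in range(1, max_attempts + 1):
        ok, output = run_tests()
        if ok:
            print(f"tests green after {attempt - 1} fix attempt(s)")
            return True
        ask_llm(f"The test suite failed. Fix the code (not the tests):\n{output}")
    return False
```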
Historically, it would take a reasonably long period of consistent effort and many iterations of refinement for a good developer to produce 10,000 lines of quality code that not only delivered meaningful results, but was easily readable and maintainable. While the number of lines of code is not a measure of code quality—it is often the inverse—a codebase with good quality 10,000 lines of code indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.
Now, LLMs can not only one-shot generate that in seconds,
Evidence please. Ascribing many qualities to LLM code that I haven't (personally) seen at that scale. I think if you want to get an 'easily readable and maintainable' codebase of 10k lines with an LLM, you need somebody to review its contributions very closely, and it probably isn't going to be generated with a one-shot prompt.
The real "cost" of software is reliance: what risk do your API clients or customers take in relying on you? This is just as true for free-as-in-beer software as for SaaS with enterprise SLAs.
In software and in professions, providers have some combination of method and qualifications or authority which justifies reliance by their clients. Both education and software have reduced the reliance on naked authority, but a good track record remains the gold standard.
So providers (individuals and companies) have to ask how much of their reputation do they want to risk on any new method (AI, agile, ...)? Initially, it's always a promising boost in productivity. But then...
So the real question is what "Show me" means - for a quick meet, an enterprise sale, an enduring human-scale consumer dependence...
So, prediction: AI companies and people that can "show me" will be the winners.
(Unfortunately, we've also seen competitive advantage accrue to dystopian hiding of truth and risk, which would have the same transaction-positive effect but shift and defer the burden of risk. Let's hope...)
There is a net positive gain on the automated testing side of things but I think a bad developer, even with AI will not be able to out-compete a good 10x developer without AI. The costs of incorrect abstractions is just too high and LLMs don't really help avoid those.
You have to ask the right questions. It's a bit like in Hitchhiker's Guide to the Galaxy... Where a super-intelligent computer took millions of years to calculate the meaning of life as the number 42. The wrong question will always waste computation.
> Proceeds to write literal books of markdown to get something meaningful
>> It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
> Wrote a paragraph about how it is important to have serious experience to understand the generated code prior to that
>> For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
> This time is different bro I swear, just one more model, just one more scale-up, just one more trillion parameters, bro we’re basically at AGI
This quote is from Torvalds, and I'm quite sure that if he weren't able to write eloquent English no one would know Linux today.
Code is important when it's the best medium to express the essence of your thoughts. Just like a composer cannot express the music in his head with English words.
I just re-watched the video (currently halfway through), and I feel like the point of Linux is something you are forgetting: it was never intended to grow so much, and Linus himself in the video, when asked, says that he never had a moment where he went "oh, this got big."
In fact, he talks about when the project was little, and how grateful he was when the project had 10, maybe 100 people working on it, and how things only grew over a very large time frame (more than 25-30 years? Maybe now 35; I just searched, 34).
He talks about how he got ideas from other people that he couldn't have thought of himself, and how, when he first created the project, he just wanted to show off to the world, "look at what I did" (and he did it for both the end result of the project and programming itself), and then he got introduced to open source (free software) by his friend and just decided to make it open source.
My point is it was neither the code nor the talk. Linus is the best person to maintain Linux. Why? Because he has been passionate about it for 25 years. I feel like Linus would be just as interested in talking about the code and any improvements now, with maybe the same vigour as 34 years ago. He loves his creation, and we love Linux too :)
Another small point I wish to add is that if talk were the only thing, then you are missing the point: Linux was created because Hurd was getting delayed (so, all talk, no code).
Linus himself says that if the Hurd kernel had been released earlier, Linux wouldn't have been created.
So the all-talk-no-code Hurd project (which, from what I hear, is still a bit in limbo, as now everyone [rightfully?] uses Linux) is what led to the creation of the Linux project.
Everyone who hasn't watched Linus's TED talk should definitely watch it.
The Mind Behind Linux | Linus Torvalds | TED : https://www.youtube.com/watch?v=o8NPllzkFhE
Classical indicators of good software are still very relevant and valid!
Building something substantial and material (i.e., not an API wrapper + GUI or a to-do list) that is undeniably well made, while faster and easier than it used to be, still takes a _lot_ of work. Even though you don't have to write a line of code, it moves so fast that you are now spending 3.5-4 days of your work week reading code, using the project, running benchmarks and experimental test lanes, reviewing specs and plans, drafting specs, defining features and tests.
The level of granularity needed to get earnestly good results is more than most people are used to. It's directly centered at the intersection between spec heavy engineering work and writing requirements for a large, high quality offshore dev team that is endearingly literal in how they interpret instructions. Depending on the work, I've found that I average around one 'task' per 22-35 lines of code.
You'll discover a new sense of profound respect for the better PMs, QA Leads, Eng Directors you have worked with. Months of progress happen each week. You'll know you're doing it right when you ask an Agent to evaluate the work since last week and it assumes it is reviewing the output of a medium sized business and offers to make Jira tickets.
expertise and effort are, and will continue to be for the foreseeable future, essential.
talk, like this, still cheap.
- A good and experienced developer who knows how to organize and structure systems will become more productive.
- An inexperienced developer will also be able to produce more code but not necessarily systems that are maintainable.
- A sloppy developer will produce more slop.
Both code and talk are cheap. Show me the trust. Show me how I can trust you. Show me your authenticity. Show me your passion.
Code used to be the sign of authenticity. This is what's changing. You can no longer guarantee that, let's say, large amounts of code are authentic, something which previously used to be the case (for the most part).
I have been shouting into the void many times about it but Trust seems to be the most important factor.
Essentially, I am speaking from a consumer perspective, but suppose that you write AI-generated code and deploy it. Suppose you talked to the AI or around it. Now I can do the same too, and create a project sometimes (mostly?) more customizable to my needs for free/very cheap.
So you have to justify why you are charging me. I feel that's only possible if there is something of additional value added: trust. I trust the decisions that you make, and personally I trust people/decisions that feel like they take me or my ideas into account. So, essentially, not ripping me off while actively helping. I don't know how to explain this, but the thing I hate most is the feeling of getting ripped off. So a justifiable, sustainable business that is open/transparent about the whole deal, what they get and what I get, just gets my respect and my trust, and quite frankly I am not seeing many people do that, but hopefully this changes.
I am curious now what you guys on HN think about this, and what trust means to you in this (new?) ever-changing world.
Like, y'know, I feel like everything changes all the time, but at the same time nothing really changes either. We are still humans, we will always be humans, and we are driven by our human instincts. Perhaps the community I envision is a more tight-knit community online, not complete mega-sellers.
Thoughts?
What soft-skill buzzword will be the next one as the capital owners take more of the supposed productivity profits?
I hate this trend of using adjectives to describe systems.
Fast, Secure, Sandboxed, Minimal, Reliable, Robust, Production-grade, AI-ready, "Lets you _____", "Enables you to _____".
But I somewhat agree: code is essentially free, you can shit out infinite amounts of code. Unless it's good; then show the code instead. If your code is shit, show the program. If your program is shit, your code is worse, but you are still pursuing an interesting idea (in your eyes), so show the prompt instead of the generated slop. Or even better, communicate an elaborate version of the prompt.
> One can no longer know whether such a repository was “vibe” coded
This is absurd. Simply false, people can spot INSTANTLY when the code is good, see: https://news.ycombinator.com/item?id=46753708
I think that's always been true. The ideas and reasoning process matter. So does the end product. If you produced it with an LLM and it sucks, it still sucks.