> This is often the part that slows down software development: trying to figure out what a vague, title-only feature request actually means.
But that is exactly what software engineering is! It's 2026, and the notion that you can get requirements and specifications detailed enough to one-shot a perfect solution needs to die.
In my experience, AI has let us iterate on features and ideas much faster. Now most of the friction comes from alignment and coordination with other teams. My take is that to accelerate processes, we should reduce coordination overhead and empower individuals and teams to make decisions and execute on them.
It's 2026 and the idea that even with detailed-enough requirements you can one-shot even a workable (let alone perfect) solution also needs to die. Anthropic failed to build even something as simple as a workable C compiler, not only with a perfect spec (and reference implementations, both of which the model trained on) but even with thousands of tests painstakingly written over many person-years. Today's models are not yet capable enough to build non-trivial production software without close and careful human supervision, even with perfect specs and perfect tests. Without a perfect spec and a perfect human-written test suite the task is even harder. Maybe in 2027.
" It lacks the 16-bit x86 compiler that is necessary to boot Linux out of real mode. For this, it calls out to GCC (the x86_32 and x86_64 compilers are its own).
It does not have its own assembler and linker; these are the very last bits that Claude started automating and are still somewhat buggy. The demo video was produced with a GCC assembler and linker.
The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler. The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.
The Rust code quality is reasonable, but is nowhere near the quality of what an expert Rust programmer might produce."
For faffing about with a multi-agent system, that seems like a pretty successful experiment to me.
Source: https://www.anthropic.com/engineering/building-c-compiler
Edit: I think people don't realize that not even 7 months ago, it couldn't write this at all.
As an example, I did an exploratory attempt to build custom software on top of some genuinely awful Windows software for a scientific imaging station with a proprietary industrial camera. Five days later, Claude and I had figured out how to USB-pcap sample images; it's operationalized and has been running smoothly for months now. 100% of the code was written by Claude, and it's all clean (I reviewed it myself). Pretty much all I did was unstick it in a few places ("hey, based on the file sizes it looks like the images are being sent as a 16-bit format").
For day-to-day work, I'll often identify a bug: "hey, when I shift-click on this graphical component, it's not doing the right thing." I go tell Claude to write a RED (failing) integration test, then make it pass.
Zero lines of code manually written. Only occasionally do I have to intervene and rearchitect. Usually this involves me writing about ten lines of scaffold code, explaining the architectural concept, and telling it to just go.
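To make that red-test-first loop concrete, here is a minimal self-contained sketch in Python. The Chart component and its shift-click semantics are hypothetical stand-ins for illustration, not the commenter's actual code:

    class Chart:
        def __init__(self, n):
            self.n = n            # number of clickable elements
            self.selected = []    # currently selected indices
            self._anchor = None   # last plain-clicked index

        def click(self, index, shift=False):
            if shift and self._anchor is not None:
                lo, hi = sorted((self._anchor, index))
                self.selected = list(range(lo, hi + 1))  # range-select: the eventual fix
            else:
                self._anchor = index
                self.selected = [index]

    def test_shift_click_selects_range():
        chart = Chart(n=4)
        chart.click(0)
        chart.click(3, shift=True)
        # Written RED first: this assertion fails until the range-select branch exists.
        assert chart.selected == [0, 1, 2, 3]

The point of the workflow is that the assertion is written, and seen failing, before the fix, so the agent has an unambiguous target.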
My first thought when reading Anthropic's description of the experiment was that it is unrealistically easy. It's hard to come up with realistic jobs in the 10-50KLOC range that would be this easy for an LLM. That it failed only shows how much further we still have to go.
I get that it's "novel" creation vs porting, but given that they reported that the C compiler cost them $20k in API costs, the Bun rewrite must be at least $200k, maybe even closer to a million. Pure madness.
Anthropic can always fire the Opus/Mythos token machine gun at any problem (bugs, features, security) to ensure PR success, and there are plenty of AI-sphere startups already drinking the Kool-Aid that would count the whole vibe-coding association in Bun's favor.
I can make a C compiler in a couple of weeks just by looking up open source libraries and copying them.
I can't make any software that people will pay me money to use without months/years of development, research, experimentation, and iteration.
Just because the original people who invented compilers had to be geniuses doesn't mean anyone has to spend much time or thought copying that work now.
If I got detailed specs, I’d just be a coding robot. I push that work off onto juniors.
And yes, architecture and how to actually implement the designs are also part of the requirements.
The code is just the implementation, the actual problem that needs solving is one abstraction level higher.
When I was working, we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
This has significantly helped devs and made sure that requirements are very clear.
Honestly, with the first step, it seems the PMs are already halfway to implementing the feature, so I wonder if in the future they'll just do everything themselves and a few devs will stick around as SDETs rather than full-blown implementers.
Super glad to have gotten out when I did...
PMs turning their brains off and letting the LLMs extrapolate from a quick-and-dirty bashing of text into a template (or PMs throwing customer feedback at a Slack bot to generate a Jira ticket from it) can be better than PMs doing nothing but passing ill-defined reqs directly into the ticket, but that's a low bar. And it doesn't by itself solve the problem of the details generated for this ticket subtly conflicting with the details that were generated for (and implemented in) a different ticket 8 months ago.
I am a very AI-forward person, but hallucinations are becoming more pernicious than ever even as they get less frequent, especially when the code actually works. A human absolutely has to guide these processes at a macro level for sustainability as a SaaS evolves with business needs.
Maybe for one-and-done systems with no maintenance/updates/security patches you can reduce humans to SDETs, but systems like that are more the exception than the norm.
At least with concurrent and distributed systems stuff (which is really all I know nowadays), it is great at getting to a prototype, but the code is generally mediocre at best and pretty suboptimal. I don't know if it's because it was trained on a lot of mediocre and/or buggy code, but for concurrency-heavy stuff I've been having to rewrite a lot of it myself.
I think that AI is great for getting a rough POC, and admittedly often a rough POC is good enough for a project (and a lot of projects never get beyond a rough POC), but I think software engineers will be needed for stuff that needs to be more polished.
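As an illustration of the kind of subtle concurrency bug being described, here is a generic textbook pattern in Python (not the commenter's actual code): a non-atomic read-modify-write that passes light testing but is allowed to lose updates under contention.

    import threading

    counter = 0
    lock = threading.Lock()

    def racy_increment(n):
        global counter
        for _ in range(n):
            counter += 1      # LOAD, ADD, STORE: another thread may interleave here

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:        # the fix: serialize the read-modify-write
                counter += 1

    threads = [threading.Thread(target=racy_increment, args=(100_000,)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Can come up short of 800000: the interleaving is permitted, even if a given
    # interpreter rarely exhibits it. The locked version always sums correctly.
    print(counter)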
Still, other professions interact with the real social world, which is not necessarily the case with programming. A lawyer will always be needed because judgments are, and must be, made by humans only. Software, on the other hand, can be built and tested in its own loop, especially now with human-readable specifications. For example, I wanted to build an app and told Claude; it planned out the features, which I reviewed and accepted, then it built them, wrote tests, used MCPs including the browser for interacting with the UI and taking screenshots, finding bugs and regressions, and so on, until an hour later it came back with the full app. Such a loop is not possible in other professions.
I'm guessing they've tried (or been induced to try by upper management), but given up because they don't know how to debug any problems that arise due to the LLM working itself into a corner.
Coding-agent LLMs act a lot like junior devs. And junior devs are: eager to write code before gathering requirements; often reaching for dumb brute-force solutions that require more work from them and are more error-prone, rather than embracing laziness/automation; getting confused and then "spinning their wheels" trying things that clearly won't work instead of asking for help; not recognizing when they've created an X-Y problem, and have then solved for their Y but not actually solved for the original problem X; etc.
The way you compensate for those inexperience-driven flaws in junior devs' approach, is to have them paired with, or fast-iteration-code-reviewed by, senior devs.
Insofar as a PM has development experience, it's usually only to the level of being a "junior dev" themselves. But to compensate for LLMs-as-junior-devs, they really need senior-dev levels of experience.
The good PMs know all of this, and so they're generally wary to take responsibility for driving the actual coding-agent development process on all but the most trivial change requests. A large part of a PM's job is understanding task assignment / delegation based on comparative advantage; and from their perspective, it's obvious that wielding LLMs in solution-space (as opposed to problem-space, as they do) is something still best left to the engineers trained to navigate solution-space.
Just lol. Is this what you guys mean by productivity boost?
Comical. LLMs aren't all that great - it's more that most orgs are horribly inefficient. It's amazing how bad they are.
That's why Elon succeeded with SpaceX: he saw how horribly inefficient the industry was, used that insight to take a gamble, and it paid off.
Considering that that’s been a running complaint for like 50 years, it doesn’t seem like project management is going to get better on its own at this point. So, yes, an LLM does represent a productivity boost in that area.
When the org is misaligned, mismanaged, has poor customer feedback loops, bad product-market fit, too much bureaucracy, etc., no amount of AI slop is going to make a meaningful impact on its bottom line. In fact, it will likely do the opposite through a combination of exponentially increasing complexity, workforce deskilling, layoffs, and rising token prices. The real bottleneck is and always has been communication & alignment.
It might make the employees _happier_ in the interim though, which, I believe, is what we're predominantly seeing during this AI mania. People fed up with the bullshit jobs of rewriting the same service for the 5th time in 2 years or creating TPS reports weekly just for their manager to throw them directly in the trash are absolutely giddy that they no longer have to do this manually. I think we need to question the economic value of these jobs in the first place, though.
I've worked at big tech prior to LLMs becoming a thing, and consistently saw projects of 20-50 people carried by 2-3 individuals that actually understood what needed to be done. I don't think this ratio will be any better with genAI, and I also don't think that tokenmaxxing has any meaningful correlation with impact. Bullshit jobs (and questionable personal projects) just get done faster now. Yay, I guess.
And then someone copy pastes it into Claude and now those inaccuracies become part of the code and tests.
It's the equivalent of writer's block and is why a common advice given to writers is to put anything they can onto the page then edit it later.
The PM has historically often not had a detailed enough mental model of the implementation to spot the hard parts in advance, or a detailed enough mental model of the customer's desires to know whether it's going to be the right thing or not.
Those are the things that killed waterfall.
You can use LLM tools to help you improve both those areas. Synthesizing large amounts of text and looking for inconsistencies.
But the 80th-percentile-or-lower person who was already not working hard to try to get ahead of those things still isn't going to work any harder than the next person and so won't gain much of a real edge.
Normally waterfall works where the scope is extremely well defined and articulated in design plans, which shortens dev time because, prior to AI, code was mostly deterministic. Here we have to do waterfall-level documentation while iterating on a non-deterministic solution (codegen) against non-deterministic requirements (per usual).
It's bonkers.
I still think the technology is cool though.
And to answer the questioner: have you worked with a PM? Most of the ones I've worked with try to be simultaneously in charge yet not responsible for anything. Validating something implies skill and responsibility.
We see it with code too right? It’s harder to review code than to write it.
On top of that, the LLM works so fast that the amount of things needing validation grows!
This is where humans get lazy and the problems come in, IMO. Whether it's a PM not validating their ticket, or a dev doing a bad code review.
Add to that that the current incentives are to move fast and trust the AI.
It becomes clear to me that a lot of that review work either won’t be done at all, or won’t be nearly thorough enough.
Reviewing code is harder than reviewing text because code does something and has interdependencies, and therefore must be correct in its function; don't mix the two. This is like saying that an editor reviewing an article or novel has a harder job than the person actually writing the novel, which is blatantly incorrect.
Hahahahahaha. Sorry, I couldn't help myself; this reads like satire. The answer is "real life experience says otherwise".
If your technology relies on humans using it in ways that go against how they are inclined to use it, then that is an issue with the technology.
Are advanced calculators bad because a student could use the CAS to ace calculus homework, exams or the SAT without actually learning the material?
Is copy/paste bad because a person could use it to copy/paste code from one place to another without noticing some of the areas they need to update in the new location, adding bugs and missing a chance to learn some more subtleties of the system?
Is Git bad because a manager could use it to just measure performance by number of lines of code committed instead of doing more work to actually understand everyone's performance?
Many tools can be used lazily in ways that will directly work against a long term goal of improving knowledge and productivity.
OK, so for some of the jobs we're doing, plausible-sounding goo (PSG) is just fine. And that's kinda sad. But the "just playing around" case is fine for PSG; it isn't a serious effort, just seeing how things might work out without much effort.
Taking the remainder, where understanding and intent are important, the role of the AI is to produce PSG, and the intentional person then goes through everything and plucks out all the nonsense. This may take more or less time than simply writing it, but we should understand that it results in less real engagement by the ultimate author. Where this gets actually interesting is as a parallel to Burroughs's cut-up method, where source text and audio were randomly scrambled and sometimes really clever and novel stuff popped out.
But to say the current model of vibe coding has much to offer in the second case is really quite unclear. The extent to which coding is the production of boilerplate is really a problem with APIs and abstraction design. If we can get LLMs to mitigate some of that in the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
So for me, what's missing in your model is how LLMs are supposed to be used "properly". I don't think laziness is really the right cut here; make-work is make-work, and there's plenty of real work to be done. But in what sense does LLM usage for code actually improve our understanding of these systems and give us more agency?
> People who use AI because they are trying to avoid doing work fall into a completely different category than people using AI as a force multiplier and for skills/capabilities enhancements / quality improvement.
This statement is absolutely true. There are ways to use LLM tools to significantly improve the quality of your work instead of to avoid doing hard work. (And the result can easily become something that requires more hard thought, not less.)
Some that I frequently enjoy, usable even if you don't want the machine to generate your actual code at all:
* consistency-check passes asking it to look for issues or edge cases
* evaluation of test coverage to suggest missed tests or propose new ones
* evaluation of the feasibility of different refactoring approaches (chasing down dependencies and call trees much faster than I could by hand, etc.)
> The extent to which coding is the production of boilerplate is really a problem with APIs and abstraction design. If we can get LLMs to mitigate some of that in the short term without causing too much distraction, that's fine, but we should really be using that to inform the solution to the fundamental problem.
I generally would disagree with this, though. I don't think there's solely a problem with abstraction design, I think the inherent complexity of many systems in the business world is very high (though obviously different implementations make it different levels of painful). If that's a problem, it's a people/social one, not a technology problem.
In my future, we lean into the fact that people want features - they want complexity - for many things; everybody's ideal just-for-them workflow/tooling would look slightly different from the next person's. We use these tools to build things that do more, not less. Like the evolution of spellcheck from something you manually ran, to something that constantly ran, to something that can autocorrect generally usefully when typing on a touchscreen.
Let's get back to finding more features/customization to delight users with.
This isn't actually an argument for or against anything; I don't know why people say this. It is entirely possible that people are using this brand-new, historically unprecedented tool wrong.
Cars have been a huge success in spite of requiring people to learn a bunch of new things to use them.
The classic "you're holding it wrong" was about the iPhone 4: sure, people could learn to hold the iPhone in such a way that they didn't block the particular parts of the antenna that were (supposedly) the problem. But "holding an iPhone" is a fairly natural thing to do, and if the way that people are going to do it naturally doesn't allow its antenna to connect properly, then that's a technology problem, not a human problem.
If the selling point for AI is "you can just talk to it, and it will do stuff for you!" (which may or may not be yours, personally, but it is for a lot of people), then you have to be able to acknowledge that "describing a problem or desire using natural language" is something that humans already do naturally. Thus, if they have to learn to describe their problem in very specific ways in order to get the AI to do what they want, and most people are not doing that, then that's a failure of the technology.
For the specific case at hand, what's being described is similar to the problem of self-driving cars: you're selling the benefit as being the AI taking a lot of the work off your shoulders; all you have to do is constantly check its work just in case it makes a mistake. Which is something that we already know, empirically and with lots and lots of data, that humans are bad at.
Once again, it's a technology issue. Not a human issue.
Some people are lazy, plain and simple. If they want to blindly accept what the LLM tells them without critical analysis and review then that's on them.
Yes please, I've seen the vibecoded slop PMs put out every day because software engineering is simply not a skill they have, and I'd love to make a LOT of money fixing their crap once it dies in production <3
I can tell you right now most PMs are absolutely useless: glorified project managers who don't know how to think, get in the way, and don't know how to enable engineers to be more productive.
This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullet [1] essay, under the sections "Expert Systems" and "Automatic Programming".
In it, he lays out the core features of vibe coding and exactly the experience we are now having with it: initial success in a few carefully chosen domains, then a reasonable but not groundbreaking increase in productivity as it expands outside of those domains.
[1] https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...
The LLMs turn out fully formed clones of stuff for which there exists copious amounts of code openly searchable on the web doing the exact same thing.
LLMs require developer-like specification, task/subtask breakdown and detail where such example code already exists.
As a professional prior to LLMs, how many of the problems you worked on had existing free solutions that you neglected to use, deciding instead to spend days doing it yourself?
I’ve often reimplemented things at work that exist elsewhere. If I could just copy & paste whole solutions from GitHub and change the branding/naming slightly, I could make curl in an afternoon.
I can only think of hobby projects, like writing yet another emulator, expression parser or media processor in a new language I'm trying to master.
In a professional setting, you would always diligently explore libraries and only implement your own if there is no suitable alternative.
Only when the existing free solutions are licensed under something like the GPL. Now I can just say "write me a C webserver library similar to mongoose" and get the functionality without the license burden.
And you now own full responsibility for maintenance.
I need a Python script that
1) reads /etc/hosts
2) finds the values of specific configured hosts (read from a .conf file), e.g. server1, localhost, etc.
3) assigns a name to those configs, e.g. if the .conf has
[Env1]
192.168.0.1 production-read
192.168.0.2 production-write
192.168.0.27 amqp
[Env2]
192.168.0.101 production-read
192.168.0.201 production-write
192.168.1.127 amqp
Basically format:
[CONFIG_NAME]
<ip> <hostname>
Like a usual hosts file
4) And each of those will be stored in memory
5) if /etc/hosts matches one of those, it sets the "current env" to that config name
6) it'll create an icon in the top-right of the default Ubuntu 22 GNOME shell
7) that icon shows the text of the current config name, or the text "custom" if nothing matches
8) when the user clicks the "tray"/appindicator (or whatever GNOME is calling them now), it lists the config names in a simple GTK/GNOME menu
9) when the user clicks one config, we create a backup of /etc/hosts in ~/.config/backups/ named hosts-%UNIX_TIMESTAMP%
10) we then apply it to the hosts file (find only the lines with the hostnames to change and modify only those)
And that one-shotted a simple GNOME app-indicator env switcher. I had to fix a few lines here and there, but it mostly just worked. If you give a proper spec to the LLM, it'll do it right. You can even fake a DSL to describe what you want and it'll figure it out. This is one of the reasons I like the OpenBSD and suckless projects: there are solutions out there that are technically correct but overengineered.
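For a sense of what that spec pins down, here is a minimal sketch of the non-GUI core in pure-stdlib Python. The config filename and the exact matching rule are assumptions for illustration, not the commenter's actual generated code:

    from pathlib import Path

    def parse_env_conf(path):
        # Parse [Env] sections of "<ip> <hostname>" lines into {env: {hostname: ip}}.
        envs, current = {}, None
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("[") and line.endswith("]"):
                current = line[1:-1]
                envs[current] = {}
            elif current:
                ip, hostname = line.split(None, 1)
                envs[current][hostname.strip()] = ip
        return envs

    def current_env(envs, hosts_path="/etc/hosts"):
        # Return the env whose every ip/hostname pair appears in hosts_path, else "custom".
        entries = set()
        for line in Path(hosts_path).read_text().splitlines():
            parts = line.split("#")[0].split()
            if len(parts) >= 2:
                entries.update((parts[0], name) for name in parts[1:])
        for env, mapping in envs.items():
            if all((ip, host) in entries for host, ip in mapping.items()):
                return env
        return "custom"

    if __name__ == "__main__":
        print(current_env(parse_env_conf("envs.conf")))  # "envs.conf" is a hypothetical filename

The AppIndicator/GTK layer would then sit on top of current_env(), redrawing the label whenever /etc/hosts changes.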
That's (as shown in my sample prompt) one great thing I've been using LLMs for: making GUIs for arcane Linux-based OS/userland settings that I have no interest in doing "sudo gedit yadda yadda" or learning man pages for. It's been 30+ years, we deserve a better desktop experience.
I've used suckless packages in the past, but it feels to me too close to the GNOME/Apple way of giving zero settings and having opinionated defaults whose opinions don't ring well for me. I have zero desire to change my shortcuts/hotkeys to something random devs chose based on their own past computer experience, mostly Unix-based. Muscle memory > *.
An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.
> What data should I retrieve, and where should I get it from? Please specify at least: ...
And then it goes on to ask exactly what is necessary, being all constructive about it.
But the point still stands: in most contexts, the LLM will fill in the blanks with whatever it deems appropriate, like an overconfident intern at best and a bull in a china shop at worst.
It's the wrong thing for important things under the hood (like durability and security requirements) that are not tangible to them.
When we talk about "the" bottleneck being specs, it just isn't the case that specs are the only thing LLMs do poorly. They're really bad at a lot of stuff across the SDLC.
They're also good at producing results that are bad but look OK if you either don't look too closely or don't know what you're looking for.
LLMs just take the same vague or poor requirements and make them look believable until you dig into them.
You make it sound like writing good requirements is easy.
If it were easy we wouldn't need all these concepts around PMF, product pivots and the like. And even before that was Peter Naur's paper "Programming as Theory Building" [1].
If you truly understand the problem you're solving with software then requirements can be easy. But usually we don't, not right away, and so we have to build up our understanding of the problem first in order to solve it.
Even then, the problem we solve may not have been the problem paying users will have, so you can have "good requirements" and still have a bad business, or even the opposite where you somehow build a working business despite bad requirements, because you hit upon a customer's need quite by mistake.
Nothing about any of this precludes LLMs being helpful, though nothing guarantees LLMs will be helpful either.
[1]: https://cekrem.github.io/posts/programming-as-theory-buildin...
"Make a facebook clone" is the vague human promise to the end user. The reality is that it leads to so many assumptions which are insurmountable due to the vague interpretation so you have to change your requirements in the end to claim success.
Thus everything turns into a mediocre compromise. There is no exceptional outcome, which is what makes a marketable product. There are just corpses everywhere.
You need something better to both define requirements and implement them than this technology.
Anyone who thought that gap could be shrunk substantially lives in delululand.
Hence why we haven’t seen this explosion of ‘really great’ products come out.
Many will continue to parrot ‘bro but the models changed I swear’. I’m sure they did. But you’re missing the damn point.
That's why we write programs in programming languages and not English. Because they are much more efficient at giving precise instructions than natural language.
"what does X means? how will it work?"
while a programmer will ask, about all cases.
The dudes in Eastern-Wherever not asking what something means is the scary part. You only find out at the end how deeply confused everyone was when making the thing. You can fix it with attention and management, but then only some projects sometimes are profitably outsourced and you still need competency.
Can't good marketing teams, backed up by World Class Product people, sell anything we build, more or less?
</devil's advocate>
In several companies I have seen product managers join teams and fail to have even minor requirements ready for months during the PM's "onboarding". And then code being ready but taking months to release because DevOps is busy or QA can't find time.
The pace of software releases has been disconnected from the coding part for the longest time, and we have kept quiet about it.
The annoying thing is that giving an LLM vague instructions like "make a Facebook clone" does work... in certain limited cases. Those being mostly the exact things a not-very-creative "ideas person" would think to try first. Which gave the "ideas people" totally the wrong idea about what these things can do.
These same "ideas people" have been contracting human software developers to "make them a Facebook clone" (and other requests of similar quality) for decades now.
And every so often, the result of one of those requests would end up out there on the internet; most recently on Github. (Which is, once there's enough of them laying about, already enough to allow a coding-agent LLM trained on Github sources to spew out a gestalt reconstruction of these attempts. For better or worse.)
But for the most common of these harebrained ideas (both social-media-feed websites and e-commerce marketplace websites fit here), entire frameworks or "engines" have also been developed to make shipping one of these derivative projects as easy as shipping a Wordpress.org site. You don't rewrite the code; you just use the engine.
And so, if you ask an LLM to build you Facebook, it won't build you Facebook from scratch. It'll just pull in one of those frameworks.
And if you're an "ideas person", you'll think the LLM just did something magical. You won't necessarily understand what a library ecosystem even is; you won't realize the LLM didn't just generate all the code that powers the site itself, spitting out something perfectly functional after just a minute.
This is a big divide in HN LLM discussions. I am in the same no-specs work-background camp, so the idea that the humans who feed that input to dev teams are suddenly going to get anything out of an LLM by directly inputting the same is laughable. In most orgs in my career there has been no product person, and we just talked directly to end users.
For that kind of org, AI will accelerate some parts of the SWE's job at different multipliers, but all the non-dev work to get there - discussions, discovery, iteration, rework, etc. - remains.
If the input to your work is a 20-page specification document accompanying multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.
Probably why I haven't ended up in any.
https://web.archive.org/web/20161211074810/http://www.commit...
Ideation: throw ideas back and forth, cross-reference with knowledge bases, generate design documents.
Documentation: generate large parts of docs.
Development: clear.
Deployment: generate deployment manifests, tooling around testing, knowledge around cloud platforms.
Every single step can be done better & faster with AI. Not all of each step, but a lot.
Even development. Yes, part of your job involves understanding the problem better than anyone and crafting solutions. But some parts are purely chore. If you know you need a button doing X, then designing that button, placing it, figuring out edge cases with hover & press states, connecting it to the backend, etc. - this is chore that can be skipped. The same principle applies to almost all steps.
A typical example of trying to add a significant new capability involves many meetings (days, weeks, months, etc.) with the business to understand how their work flows between systems X, Y, and Z, as well as all of the significant exceptions (e.g. we handle subset A this way and subset B that way, but for the final step we blend those groups together, except for subset C, which requires special process 97).
Then with that understanding comes the system solutioning across multiple systems that can be a blend of internal system or vendor's system, each with different levels of ability to customize, which pushes the shape of the final solution in different directions.
There is certainly value in speeding up coding, but it's just one piece of the puzzle, and today LLMs can't help with gathering the domain information and defining a solution.
I'm not saying this is the correct thing, but companies are implementing it and it is "working". I don't think keeping our head in the sand is helping.
But the LLM is not aware of how the business works and why, so someone needs to work with the business to extract the information. Typically it's not well documented.
Are they reasonably documented/audited/put into any sort of version control like a lot of internal tooling? Or are they the kind of the thing that gets whacked together on the fly in a "move spreadsheet data from A to B", "I want a list of people's schedules with custom highlighting" kind of things.
Not doubting your productivity increase, I'm just curious how people quantify that when they say it.
Looks like orgs have to keep engineers on for optics - like having a legal staff with no lawyers, or a cybersecurity staff with no IT or certified people. Software has famously not needed state licenses or industry certification, but maybe that's a direction to consider to give utility to company optics.
In fact, these disagreements and disbeliefs create opportunities and salients in the market.
Anecdotally, I see a lot of problems/solutions content about AI that doesn't reflect at all the challenges I face. But trying to tell people that there are other ways of doing things, especially when it conflicts with token-maxxing, is a lost cause
1: When was the last time you worked on a project where you thought the average IQ was 140? I don’t even think I have worked on a project where the maximum IQ was 140.
2: Who thinks the IQ of people on the project determines its success? There’s so much more to it than just “high capability team members” (to give IQ a generous interpretation).
3: (math joke) A sequence like (AI IQ - Human IQ) can be negative and monotonically increasing and still never reach 0, e.g. a_n = -1/n.
On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.
Humanity knows how to solve starvation. Clear routes were laid out long ago. The work is in adoption.
So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.
So I know what these tools are capable of in a single person's hands. They're amazing.
But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.
I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.
These tools suck for team work or any real team software engineering work.
I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.
In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.
But for a small studio, or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third party assets or other sorts of content, or even worse - doing a really awful job of trying to improv those jobs. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole trying to rip off stuff from dribbble, but lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.
What are the chances that this is the Gell-Mann amnesia effect? Sounds like the textbook definition of it.
Personally, I find the exact opposite to be true. LLMs only help me when I already know exactly what I'm doing.
I got the opportunity to rewrite our aging login page just as a fun experiment. I sat down with one of our analysts and we just went to town in a zoom trying out stuff with claude until we made something pretty sweet. Ran it through all our systems for accessibility, performance, etc and it came out clean. Made a PR and fired up a test that day in production. I haven't written a lick of our front end framework ever in my entire life and we were able to build something that has had a marked improvement in our user engagement in a day.
Do you have any idea what caused this engagement improvement, and do you actually have metrics, or is it hearsay?
It is much easier to knock something up in a day as you have done, but often the reason manual things take longer is that they are based on actual testing and research, which takes longer than a day however you do it. The manual way gives you much more data on the hows and whys, and will inform you much more in the future when you need to change things again, instead of just "AI did it last time, let's use it again!"
This wasn't a half-assed test but a legitimate effort to improve something we had never prioritized.
We saw a legitimate 25% reduction in users giving up on logging in, in a system that has millions of users.
We ran a 50-50 AB test for several weeks to confirm the data and then turned it on completely
edit: If you haven't already read my post, I'd also like to say that the benefit AI gave us is that I got to work on something I never get to work on, the analyst got to try a hunch he'd always had, and we got to see it go live in a day. If it hadn't worked out, we'd have been out a day of work, which beats the few weeks of effort we would have spent pre-AI just to find out it didn't work.
To wit, the answer pre-AI was to hire an expert on that thing, and you would then critically assess their work product, despite being unable to build it yourself.
E.g.: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.
The other thing I expect to see is vibecoding becoming the "Excel 2.0": it allows significant self-serve building of interactive apps, engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management, etc.
But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.
I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.
Just learn something like Balsamiq. You don't need code to build out a prototype, just like you don't need actors and a camera when a few sketches can capture a scene.
A human has the cumulative experience of a career: the nuances behind every decision, and their evolved context at their given company. This context allows them to take that one-line spec and extract tons of detail from it by knowing who wrote the ticket, what the "trigger" for the ticket was, what other work is being done in tandem that might need to be incorporated, etc.
LLMs can be given this context, but it's a manual process of transcription into their prompt/memory/skills, and that content must be continually updated and refined. It just pushes lots of work into spec writing, away from the more intuitive mode of feature development a lot of us have a level of mastery over. Then you must constantly go back and forth to refine the output.
Any senior engineer knows that a lot of that communication is wasted energy. If I have a good idea of what I'm building I can develop the feature in a focused flow of output that I refine in an almost unconscious way because I don't need to translate intent into words, just code, and that process is incredibly automatic after years of developing software.
When all the effort is placed into writing specs, re-prompting and then reviewing (often over and over again), that intuitive and automatic ability to build software degrades. Think of a time when you were mostly focused on PR reviews and not contributing to a project. You may have been able to help developers build better code, but if you were to jump into that project to contribute, there would be a real and painful effort to re-familiarize yourself and reconstruct that intuitive familiarity of the project.
LLMs have many very useful qualities, but so far I fear an over-reliance on them can be more a hindrance than a benefit.
No, the code is actually almost always correct. The way it's added is probably not what you're going to like if you know your code base well enough. You know there's some ceremony about where things are added, how they are named, how many comments you'd like to add and where exactly. Stuff like that irritates people like me when the agent doesn't get it right, and it seems to fail even when it's in the AGENTS.md.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
Almost two decades in IT, and I absolutely do not believe this can ever happen. And if it does, it's so rare it's not even worth talking about.
That's not my experience, especially when the inputs are bugs or performance issues. It frequently hallucinates and misdiagnoses without a guiding hand. However, it can still RCA and analyze well, and improve efficiency, if you keep an eye on what it's doing and push it in the right direction.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
I think you run into a ceiling of how fast a person can digest and analyze the info compared to a machine.
^ I say "shouldn't" because I work in research engineering. Most of the needs of our users are pretty unique. We've had people come in and try to specify every piece of work, and end up building a CRUD app no one wanted or used.
We recently completed a highly complex product involving cameras, laser sensors, custom hardware, OS setup, infrastructure, shared memory handling, conveyor systems, and line-camera integration. From prototype to a fully deployed production system in just 3.5 months — now running in a manufacturing plant processing 20,000 scans daily.
We are entering a new era, and those who fail to adapt will be left behind. We are not “vibe coders”; we use AI with precision and intent. The human remains the architect.
If that sounds familiar, it’s because it’s what dang did over the course of several years.
It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)
A couple days ago, mostly out of curiosity, I ran Claude with "/goal make this as fast as HN." Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI-generated code starts out. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.
The real work is in the specifications. My port of HN is missing around a hundred features. Things from favorited comments, to hiding threads, to being able to unvote and re-vote.
But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.
It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.
AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.
I think projects where correct is very clearly defined can benefit from LLM acceleration, as you're describing here.
But so much of modern software development is figuring out what the right thing to build is. And in those situations, I don't think LLMs provide nearly as much benefit.
The problem for model producers is that the revenue they get from this mode of work is tiny relative to what they need.
Therein lies the paradox. And the problem is, interacting with LLMs is akin to a slot machine.
On top of that, LLM producers want you to view it that way - that's how they generate revenue and can play games.
To some extent, we tell as many lies as we can get away with. Some answers are more convenient than others.
"Why is this taking so long?", like "why did this fail?", is prone to broadly agreed lies. Sometimes this is for obvious blame-liability reasons. Often, it's because the honest answer conflicts with some "meta."
One such fallacy is the idea that software = value. Code = money, because it cost money to write. Features = revenue. Etc.
IRL, startups produce features very quickly because they actually need features. They start with zero features.
But LinkedIn, Visa, or even Facebook... what they are short on is opportunities to develop code with value, i.e. something that will increase revenue.
FB isn't resource constrained. They're demand constrained. If there were a "write code, make revenue" opportunity available, they'd have taken it already.
This totally conflicts with the experience of working somewhere, because you have wishlists, roadmaps, and deadlines... and so it always appears that demand for code is sky high.
> We are now talking about software development, but this is applicable to all processes that take longer than you would like.
Indeed, it's kind of a generalized version of Amdahl's law. Since we only speed up a portion of the work, there are upper bounds on time saved. Worse, work in progress tends to bunch up at a specific point: code review. A coworker of mine literally complained two months ago now that nobody was reviewing code (and that it was blocking his work). I'm not sure review delay has actually gotten better since.
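A back-of-napkin version of that bound (the 30% share and the 10x factor are made-up illustrative numbers, not measurements):

    # Amdahl's law: if coding is a fraction p of total delivery time and AI
    # speeds coding up by a factor s, overall speedup is bounded by 1/(1-p).
    def overall_speedup(p, s):
        return 1 / ((1 - p) + p / s)

    print(overall_speedup(0.3, 10))    # ~1.37x: 30% of the cycle made 10x faster
    print(overall_speedup(0.3, 1e9))   # ~1.43x: the hard ceiling, 1/(1-p)

However fast the coding gets, the un-sped-up parts, like review, set the ceiling.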
Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.
A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.
We need to combine reality with computers. Computers set the constraints, and we can only check whether we are within those constraints by solving the problems with computers.
Oddly enough, AI so far has nothing to offer to improve the "product people" problems.
So well said.
AI is unveiling how the bureaucracy is the slow part.
Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.
It's just that now we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!
They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.
Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.
It's happening about 10x faster than any other I've seen or read about.
Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it has taken to get robotics into car manufacturing at scale. I worked through the .com boom, and I can tell you that "webification" took 10 years or more for most businesses (and many of them have now just given up and have a Facebook page instead, etc.)
This is a little insane what's happening now. It really does change everything. People who don't work in software I don't think have any idea what's coming.
It's highly salient to management, and being forced top-down by them at 10x speed, for sure, because they see a future cost saving in reduced headcount.
For certain technical roles it's a force multiplier and already very saturated, for sure.
On the other hand there's a lot of solution-looking-for-problem going on in large orgs where layers of management have been banging the table for 2-3 years on AI KPIs without any value being delivered.
In the weekly AI-wins mail at a friend's company, multiple non-technical staff were bragging that AI has saved them 15 minutes a day by summarizing their morning inbox. This was the big game changer for them.
We have a person who wants, effectively, a formatted report generated on demand from four sources. The current interface is four different programs, all of which were written by different groups inside the corp, but they also all draw from the same or similar databases. There's a unified login, but each interface has its own permissions.
The company brings in an AI initiative and soon enough drops all security restrictions for the AI's access to the databases. The new formatted report gets generated through the use of a few tens of thousands of tokens each time, and about 5% of the time synthesizes non-existent data.
A competent DBA and application programmer could have spent a week doing the same thing, producing a program which would do the job faster, cheaper (at run-time), secure and in a way which could be extended and debugged.
But DBA and application-programmer time is expensive up-front, and the execs are gung-ho about the stock price now that they are hip and trendy.
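For contrast, the deterministic version is roughly this shape - a hedged sketch in which the table and column names are invented for illustration, not taken from the anecdote:

    import sqlite3

    def build_report(db_path="reports.db"):
        # Join the four sources once, in code, instead of re-synthesizing the
        # report with an LLM on every request.
        conn = sqlite3.connect(db_path)
        rows = conn.execute("""
            SELECT o.order_id, o.placed_at, c.name, s.status, b.amount
            FROM orders o
            JOIN customers c ON c.customer_id = o.customer_id
            JOIN shipping  s ON s.order_id    = o.order_id
            JOIN billing   b ON b.order_id    = o.order_id
            ORDER BY o.placed_at DESC
        """).fetchall()
        conn.close()
        # Same answer every run, auditable, cheap at run time, no synthesized rows.
        cols = ("order", "placed", "customer", "status", "amount")
        return [dict(zip(cols, r)) for r in rows]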
Because the "rate of improvement" is only astonishing in well understood areas and really only astonishing if you yourself are not that great at what you do. Speaking for myself here, my job is extremely safe given that my boss doesn't wanna sit there and prompt AI all day and i work in a fun little 4 person company. We already have plans for the 3 next years which involve me :-)
This is a bold, vague claim many on HN make but never put back-of-napkin numbers on. E.g. do you think agentic Opus 4.7/GPT 5.5 are 95th-percentile coders but you're 98th percentile? Or are you saying you're a middle-of-the-road 60th-percentile coder and AI is at the 20th percentile, so only the worst 20% of programmers should worry? Let's be specific about the claim being made.
Also, I have the impression that LLMs bring some gains or benefits for individuals, but nothing relevant enough at the organization level.
For a while this is not a problem: I can work with my current mental model. But every generated PR erodes my expertise a little bit. Eventually my mental model won’t fit anymore.
So how much of that model maintenance should I count into my productivity metric? Does that even matter or will the next model be able to reason well enough that my mental model doesn’t matter?
The primary issue is simply that developers are the most immediately impacted by this technology. The combination of being able to adopt, willing to adopt, and the tech actually being incredibly good at developer related concerns is unique. The rest of the business will eventually catch up. I'm watching it happen in real time. It is agonizingly slow in most places, but it is happening.
The developers being able to drain a one year long work queue in an afternoon is meaningless if the rest of the business cannot absorb the effects of that work in the same timeframe. The business will not leave your idle work queue on the table for long though. Keep pulling a vacuum on them and they will fill the space eventually.
- shift towards throughput-oriented vs latency-oriented. Can juggle more tasks, but increasingly hard to speed up individual ones.
- strong scaling is tough. Might even see slowdowns for individual tasks, so reliable benefits come from being able to juggle more and eat the per-task inefficiency
- Amdahl's law: we can't speed up tasks beyond their longest sequential (human) unit, so our work becomes identifying those bits and working on them. Related: you can buy bandwidth, but you can't buy latency.
Proper implementation and design still take time, but they're still faster in systems with a lot of resources available online.
Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.
You know, typing fast and accurately is kind of important.
The new speed skill that developers need is speed reading. LLMs produce copious amounts of output (tests, documentation, diagnostics). They also produce code so quickly that the skill of focusing on weak points is essential.
There's way too much Dunning-Kruger in software right now. "Just read fast"? wtf lol
https://podcasts.apple.com/us/podcast/the-daily/id1200361736...
Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other 2 will become less of a bottleneck.
If I were to take a gamble here, I would argue that development will at some point reach the more ideal scenario, whereas project planning and scoping will take longer. The documentation phase will also take almost as long as development, slightly longer at the edges.
The new AI-assisted era will most likely push companies to adopt Waterfall management rather than Agile.
The way AI makes your processes go faster will have little to do with cutting software development time itself, but with letting the organization run on fewer people, which in itself lowers your misalignment issues. A giant company of 200K people will still be about as messy as one today, but you might be able to do a lot more with the same number of people, just as a lone programmer today, without AI, already does quite a bit more than anyone could do by themselves in the 80s.
Maybe some of the advantages are that you don't need quite as many developers, or you can use a smaller marketing team, or you don't need to spend as much time answering questions because an LLM is doing it for you, tracking what it's been asked and turning the questions into product research. Either way, the gains come from being able to run leaner, and therefore minimizing organizational misalignment.
The broader issue is the sheer number of businesses that built massively overcomplicated stacks, bought heavily into band-aid solutions like AWS Lambda, and got on dumb tech bandwagons like big data, NoSQL, etc. This is just another one.
I think you can engineer yourself into being leaner; in some businesses AI will help, but we've had over a decade of "we can just add more complexity" and it just does not work.
I'm a Rails guy. People forget that for every unicorn there are ten 9-figure businesses just ticking away on some niche with a VPS, Rails, and like 4-10 devs.
> "faster typing won't make you faster".....
I understand a Deloitte consultant has specific incentives. But let's first try to answer a baseline question: why do some companies have thousands of software engineers? What do they all do?
And then, a follow-up: what is actually the bottleneck at most companies? What causes "requirements gathering" to take long?
Complexity.
In my experience (medium size businesses, i.e. 200 million to 2 billion annual revenue) we're trying to understand how a complex set of systems and business processes and different businesses (external partners) interact and then trying to morph all of that into a shape that now has capability X layered on top or in the middle.
Here's a concrete example: business X, which makes its own products and has retail stores as well as an ecom site, wanted to add the ability to put complementary items built by other companies on the website and have them drop-shipped from the vendors to the consumers. The final solution involved 21 different interfaces between 4 different systems (ecom system, store system, omni-channel system, external drop-ship mgmt system) as well as a new internal system to manage this activity. It takes a significant amount of time to understand and solve for all of the low-level details.
Everything is OK, but the Gantt chart should be expanded.
This is how I felt when I first started seeing people discuss things like AGENTS.md etc.
If you don't like the state of AI tooling today, just wait a few weeks. Things are still changing at quite a rapid pace, and the scope of what is possible seems to shift regularly. A lot of what I did in the last few weeks was complete science fiction even a year ago.
This article makes a few good points though. AI won't magically make processes faster; you might actually have to change the process. A lot of processes in companies are about people and how they communicate, and the more people you have, the more communication you get. It grows quadratically: n people have n(n-1)/2 potential communication channels, so 10 people have 45 and 100 people have 4,950. Using AI in that context just adds to the communication noise.
But if you restructure your processes, you might get different results. Most companies have not really gone through that process yet; it's too early to call success or failure. And non-technical people especially have mostly not experienced any agentic tooling at all. We've yet to see how that will change companies. My guess is that some companies will be better at this than others, and we'll see a bit of Darwinism play out.
However, while the engineering team successfully fast-tracked development, UAT, and production testing, largely thanks to AI, other departments only began digging deeper into the project toward the end of April. To be fair, they do use AI in their workflows to some extent, but they haven't adapted their processes to keep pace with engineering's increased productivity.
In my opinion, this lag is mostly because many employees in those departments are older and hesitant to change their routines. While I understand that resistance to change is a natural human trait, what comes to mind is the beautiful German adage "Wer nicht mit der Zeit geht, geht mit der Zeit," which loosely translates to "Who doesn't change with the times is left behind by time."
I get the most value from them when I'm asking them either to fill in the blanks of something already half implemented, or when I need some feature in a given context/language that only exists in other languages.
> ...but that doesn’t mean it’s generating the correct code.
Something I'm observing is that a lot of the pressure now moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes, iterating, finding out they built and shipped the wrong thing, and then unwinding. Before, when there was the notion that "building is expensive," product teams would think things through, do user interviews up front, and actually do discovery around the customer, the business context, and the underlying human process being facilitated with software.
This has shortened the cycle to first working prototype, but I'd guess that on a longer timescale it extends the time to final product, because more time is wasted repeatedly shifting the deliverable and the experience on users during this process of discovery, versus nailing most of the product experience in big, stable chunks through design.
At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and operate. First is the cost to the end users, who have to stop, provide feedback, and then retrain on each cycle. Second, the complexity that compounds in the underlying implementation, as the product team learns requirements and vibe-codes the solution, creates a system that becomes very challenging for humans to operationalize and maintain.
Ultimately, I think the bookends of the software development process are being neglected (as the author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as a disposable artifact instead.
No. AI is used all the way from the very start to the very end and after.
- People need to be trained to use AI in ways that don't produce what we call slop, meaning output where half is made up by the LLM
- To that end, LLMs should be trained to ask for more input before offering any kind of final output
Another option is that lower software costs would significantly reduce the cost of whatever non-software product the software supports (manufactured goods, electricity, services, telecom, etc.), but I don't know of an industry where the cost of software is a large portion of the overall product cost.
And there's another thing. A company that makes tractors can't produce food without land. A company that makes metal machining equipment can't make cars without the raw materials. But a software company that makes software that automatically makes software could just produce the resulting software itself rather than sell the software-making software. If AI ever reaches the point where it makes software at a marginal cost not much higher than the cost of the AI itself, what would be the incentive to sell that AI?
>Process blocked on human inputs
Have AI check chat, email, and the issue tracker to see who it's blocked on and what the latest status is. It may not save a huge amount of time, but it can dig through the info pretty quickly.
>Exploration
Once again, have it scour the issue tracker, chat, customer suggestions, and product documentation, then summarize the history and current status. Much quicker than setting up new meetings to try to rediscover and organize existing info.
Another use case: have an agent build a prototype, hand it to people, then have AI summarize and integrate the feedback.
Claude or ChatGPT + Slack MCP + Jira MCP + Google Docs MCP + internal knowledgebase MCP + the gh (GitHub) CLI + Datadog MCP (really, one MCP per process in the Gantt chart) has been a huge boost at work, just digging through context scattered all over the place and summarizing it.
That said, it definitely still needs supervision and hand-holding along the way.
The assumption is that there's no way to extract speed and accuracy in a way that matches the business model.
This isn't obviously false to the majority of devs/architects, because most are vibe-coding, but it is extremely obvious to the minority that has focused on accuracy first, THEN speed.
There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.
We will be able to do more than ever, and potentially faster. The issue remains that most of what these people ask us, want us, and pay us to do remains basically stupid, and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.
If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.
Feature development can take minutes to hours depending on how you iterate. These days we just think of a feature and add it within an hour using AI. We have a year-old process for fixing bugs that would have taken us hours or days; it spits out a fix in about 10-15 minutes that is 95% accurate. 5% is garbage, but 24 months ago 95% of it was garbage, so the progress is staggering. The longest pole is code review, which is all human, but that will all be automated soon.
Not everything will be much faster, but most processes will be 1-3 orders of magnitude faster. To ignore this or find excuses why LLMs/AI won't speed things up or remove the need for large swathes of humans is delusional and cope-ism.
...but yeah, most organizational processes and people aren't set up to leverage it, and rollout will be slow (same with learning where it does and doesn't work).
I'm currently working on a data migration for an enormous dataset. I'm writing the tooling in Go, a language I used to be very familiar with but hadn't touched in about 12 years when I started this. It definitely helped me get back into Go faster.
But after the initial speed-up, I found myself in the "last 10% takes the other 90% of the time" phase. And it definitely took longer for me to wrap my head around the code than it would have if I'd skipped the AI. I might have some overall speed-up, but if so it's on the order of 10-20%. Nothing revolutionary.
I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.
This is also not for lack of trying; I spent $1,000 last week during a company-wide "AI week," mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.
I'd let Claude (Opus 4.7, max effort) crank away overnight, only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn't really cheating to pass my quality tests.
What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.
Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).
For many businesses that is revolutionary.
Not sure that's enough magic to make the math work for the trillions being invested, but at ground level within companies even small wins stack up. You may have burned through $1,000 without getting much done, but from the company's perspective they've probably got an employee with better instincts about what does and doesn't work.
Where I have a problem is with the FOMO, panic, and mania that has come down from up top. There are people in my company saying that we should be spending 3x our salaries in tokens.
But if you're in a business where a 20% speed-up is revolutionary, there are so many things that have been on the table for years that you could have been focusing on. I've seen at least five advances over the last 20 years with that kind of boost.
That's probably about what you'd get from spending time really learning Vim or Emacs.
Careful who you share this information with; better to roll with the Kool-Aid drinkers when they're holding the cards.
It might be the ultimate tool of disruption.
Have you thought about pair programming together with the AI?
My LLM outputs are intentional, in my style, and tightly reviewed by myself.
I'm also emitting Rust, which I've found to be the very best language to work with AI in. The AST and language design are focused on control flow and error handling, and the borrow checker, sum types, filtering, and mapping make it such that good design is idiomatic.
There's a lot of JavaScript, Python, PHP, and Java in the world, and a lot of it isn't great; the architectures and styles are wildly varied too. Rust doesn't have that problem. The training data is really solid and idiomatic.
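As a sketch of what "good design is idiomatic" means here (the names are hypothetical, not from any real codebase): a sum type for errors plus Result means every failure path is spelled out, and the iterator chain short-circuits on the first bad value:

    #[derive(Debug)]
    enum ParseError {
        Empty,
        NotANumber(String),
    }

    // Parse a batch of readings; any single bad value aborts the whole parse
    // with a typed error instead of a silent NaN or exception.
    fn parse_readings(raw: &[&str]) -> Result<Vec<f64>, ParseError> {
        if raw.is_empty() {
            return Err(ParseError::Empty);
        }
        raw.iter()
            .map(|s| {
                s.parse::<f64>()
                    .map_err(|_| ParseError::NotANumber(s.to_string()))
            })
            .collect() // FromIterator turns this into Result<Vec<_>, _>
    }

    fn main() {
        println!("{:?}", parse_readings(&["1.5", "2.0"])); // Ok([1.5, 2.0])
        println!("{:?}", parse_readings(&["1.5", "oops"])); // Err(NotANumber("oops"))
    }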
People have to stop promoting this narrative that AI doesn't make you move faster; it's not helping anybody.
I get it. We all worked hard for our skills, and it's really difficult watching them get automated away, but it's been this way since the printing press, assembly lines, and the Industrial Revolution itself. Things change; you have to adapt to them and stop thinking about it from a self-centered point of view. The narrative people should be pushing is that you can build great things with AI.
Of course you might not have a job for a while, and yes, that's a big deal, but it doesn't mean that AI is wrong or stupid. It means you have to adapt.
LLMs are not helpful; they make everything worse. They make you worse, or reduce you to average at best. I really just don't see what y'all are seeing. I have access to every model with no limits, and it's not an issue of "holding it correctly," I can assure you; I've tried.
Yes, it can create very small programs with low complexity, but anything of any size ends up as a literal Eldritch horror, or with so many subtle bugs that it makes life miserable. I actually hate all of you who are pushing it onto people; it's such a lie.
So, for example, if someone is poor at architecture and asks for AI's help to design a new feature, they won't know when to push back on the AI's design, so the design will be overly complex and won't solve the problem optimally.
If they are a poor debugger and ask for the AI's help, they won't know when it has made a false assumption about the root cause, or misinterpreted data and come to a faulty conclusion.
If they are poor at writing optimized code and ask AI to write some, they won't push back when the code is literally 10x the size it needs to be to solve the exact same problem.
This one non-technical PM guy at work used Codex to develop a project I was expecting would fall on my plate. He asked me to do a code review on it. What it produced was riddled with SQL injection vulns and the UI was complete garbage.
On the back of that example, the key stakeholders on my project are demanding I start vibe-coding everything. I raised the security flag, and now they're saying, "well, now we have a prototype and real development can continue," but it's clearly just to mollify me and make me shut up, because no such development effort on that other project has been planned, scheduled, budgeted, etc. They're just sitting on it, hoping they can keep everyone distracted long enough to sneak it out the way it is.
"But he did it in a week!" Yeah, it would have taken me only a week to make whatever of value actually was in that project. The reason our software projects at our company take longer than a week is not because of code, it's because we have an IT department that blocks production deployment of everything unless you literally get the president of the company to make them do it. That's not a repeatable process that every project can leverage.
There was another project that a more-technical-but-not-a-developer guy (he knows how to use MS Access) did in Claude Code where, yes, Claude could read a bunch of PDFs he got from the client, pull out the salient details, build an Access database from them, and generate a static HTML website to make those documents easier to search and navigate. But again, the UI was complete, unadulterated garbage. And, the best part: he spent several weeks just getting Claude to reliably process the entire set of documents. He never could quite get it to do the entire process end-to-end; it kept missing documents and reprocessing the same ones over and over again. A for-loop over a directory of files would have taken two minutes to code by hand, and he got stuck on it for over a month.
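For scale, the loop in question is about this big (a sketch; the directory name and the processing step are placeholders, not from the actual project):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Visit every file in the directory exactly once; no re-processing.
        for entry in fs::read_dir("./documents")? {
            let path = entry?.path();
            if path.is_file() {
                println!("processing {}", path.display());
                // hand the file off to the document pipeline here
            }
        }
        Ok(())
    }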
AI will speed us up, my ass.
Look, if AI means I never have to open another PowerPoint from a client to read a "quad chart" on one particular slide to get the data I need to do my project because my client doesn't understand that PowerPoint is not a data transmission format, fine. I'll be happy with just that: AI vision as a library I can call out to from my code, just like we've been trying to do with OCR but traditional OCR sucks at the job. But there's a bigger drumbeat than that and it ends in dilettantism and laying off the junior analyst and developer staff. I will be no party to that.
I think many things that were true prior to AI are still true, or more so, today, but new workflows and processes are needed. I suspect comprehensive, detailed planning and specification documentation must be assembled before coding begins (akin to waterfall) when working with AI agents. Furthermore, I still believe customers and other key stakeholders need to be involved early and often so the product can iterate toward a better end state (i.e., agile). Unlike before AI, it's completely plausible to implement both approaches; they aren't mutually exclusive. We can do comprehensive, exhaustive, thorough planning and specification documentation before handing off to dedicated engineering and product teams, AND we can work quickly and iteratively via sprints that aim for frequent meetings and updates with the stakeholders who matter.
I also think the same validation gates that mattered before -- linting and SAST tools, but most importantly, comprehensive automated testing that gets run locally and in CI/CD and is regularly expanded to cover all expectations about the behavior and structure of newly implemented functionality -- continue to matter now, more than ever.
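To make that concrete, the kind of gate meant here can be as small as a regression test pinned to a newly implemented behavior and run on every change via cargo test; `normalize_email` below is a hypothetical stand-in, not anything from the article:

    // A minimal sketch of a behavioral gate, assuming the new functionality
    // is a (hypothetical) email normalizer.
    fn normalize_email(s: &str) -> String {
        s.trim().to_lowercase()
    }

    #[test]
    fn normalization_is_case_insensitive_and_idempotent() {
        assert_eq!(normalize_email("  Foo@Example.COM "), "foo@example.com");
        let once = normalize_email("Foo@Example.COM");
        assert_eq!(normalize_email(&once), once); // idempotent
    }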
New tools and processes must also be built to make human review, the single biggest bottleneck in software development today, simpler, more streamlined, and less taxing. Tools like CodeRabbit and Qodo can help automate and expedite code review and approval, but they would work even better on more surgical, tiny edits; bloated, verbose AI-generated changes are the core problem here. Process-management techniques to mitigate AI code overload can prohibit the submission of AI-generated PRs, require senior-engineer approval before merging, or cap the number of lines or changes per PR. More sophisticated approaches like Graphite's stacking of PRs genuinely help break massive PRs into smaller chunks.
Finally, precision-editing tools for AI coding assistants like HIC Mouse (full disclosure: my project) move beyond the options currently available to agents (whole-file replacement or exact string replacement) to perform surgical, tiny edits that don't touch any unrelated content. Giving agents specialized visibility, recovery, and next-step guidance at the editing-tool layer can materially reduce AI code slop by alleviating burdens upstream of code reviewers, both automated and human.
The bottom line: shipping secure, production-grade code was never easy and always took a long time. It isn't necessarily easier now just because certain aspects of the overall process can be generated much more rapidly. Arguably, the hardest parts, like human review and approval, are harder now, not easier. Solutions will take hard work and must be tested in the crucible of real-world enterprise usage. I'm guessing that companies that deploy successful processes will be wildly profitable; those that don't, including well-established incumbents, will fail. I do think AI absolutely can give organizations a game-changing boost in development velocity of genuinely high-quality code that might even be better than anything ever created previously. I also fully agree with the author that for many organizations, AI will not make their processes go faster and may even slow things down.