Just call it Agent-based programming or somesuch, otherwise it's really confusing!
I can spec out changes one-handed, the AI does its thing, and then I review and refine whenever my kid is asleep for 20 minutes. Or if I’m super tired, I’m able to explain changes in horrible English and still get results. At the same time, I am following a source control and code review process that I’ve used on large teams. I’ve even been leaving comments on PRs where the AI contributes, and I’m the only dev in the codebase.
I wouldn’t call this vibe coding, though vibe coding could be a subset of this type of work. I think async coding is a good description, but a bad name because of what it already means as a software concept (which is mentioned). Maybe AI-delegation?
In vibe coding, the developer specifies only functional requirements (what the software must do) and non-functional requirements (the qualities it must have, like performance, scalability, or security). The AI delivers a complete implementation, and the developer reviews it solely against those behaviors and qualities. Any corrections are given again only in terms of requirements, never code, and the cycle repeats until the software aligns.
But you're trying to coin a term for the following?
In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.
Did I understand it right?
If so, I've most seen the latter be called AI pair-programming or AI-assisted coding. And I'd agree with the other commenters, please DO NOT call it async programming (even if you add async AI it's too confusing).
Yes
> If so, I've most seen the latter be called AI pair-programming or AI-assisted coding.
I specifically considered both terms and am not a fan:

* "pair-programming" is something that involves two people paying attention while writing code, and in this case I'm not looking at the screen while the AI system writes code
* "AI-assisted coding" is generally anchored to copilot/IDE-style agents where people are actively writing code, and an AI assists them
I totally hear you on conflating async. However, I think the appropriate term would clearly indicate that this happens without actively watching the AI write code. Unfortunately I think other terms like "background" may also be confusing for similar reasons.
I feel that would be the clearest. Agentic coding implies any workflow using AI agents, which means it's always the same agentic coding loop:
1. Prompt...
2. Wait or go do something else while agents make edits...
3. Come back to review the result
4. Go to 1
> is generally anchored to copilots/IDE style agents where people are actively writing code

I don't know when you last used these, but they're all agentic now. The workflow is exactly the same: you don't write code or accept auto-complete suggestions, you prompt and they go and make multiple edits to multiple files, which can take upwards of 10 minutes. Once done they show you a diff (or you can just trust them), and you're free to review/test or not, and prompt some more.
Edit: Or what the other commenter said: "prompt driven coding", that could be a good term as well.
I’m giving the benefit of the doubt to the author here that it’s very unlikely they consider their example to be an actual representative scenario.
Vibe coding is to allow the AI to make the majority of the decisions. What the author describes is more like a highly complex autocomplete; you establish the fairly detailed outline of what is needed, often using tools/servers/etc tailored to use cases, and expect the AI to design an implementation that is in-line with the human-made decisions that preceded it, which is why I draw the comparison to autocomplete. Vibe coding is more like paying the kid next door to write your school essay…comparatively.
I used vibe coding to build a UI prototype of a workflow. I used mockup images as the basis of the layout and let the agent use Redis as the persistence layer. I know it will be thrown away and don't care how it works underneath, as long as it can demonstrate the flow I want.
I have also used prompt-driven development to let the agent code something I expect to turn into a longer-term product. I do more review of the code there, ensuring it meets all the standards of development I would expect of myself or any other developer.
There are certainly differing degrees of the two types of development.
Worked amazingly when it worked. Really stretched things out when the devs misunderstood us or got confused by our lack of clarity and we had to find time for a call... Also eventually there got to be some gnarly technical debt and things really slowed down.
This seems like a fairly rare situation in my experience.
The first step is "define the problem clearly".
This would be incredibly useful for software development, period. A 10x factor, all by itself. Yet it happens infrequently, or, at best, in significantly limited ways.
The main problem, I think, is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.
I guess maybe the context is cranking out REST endpoints or some other constrained detail of a larger thing. Then, sure.
The thing I would add: retry the prompt rather than telling it to fix a mistake. Rewind and change the prompt to tell it not to do what it did.
It is almost by definition what the average programmer would expect to find, so it's valuable as such.
But the moment you want to do something original, you need to keep high-level high-quality documentation somewhere.
That said, this article is basically describing being a product owner.
My experience is different. I find that AI-powered coding agents drop the barriers to experimentation drastically, so that... yes, if I don't know what I want, I can go try things very easily, and learn. Exploration just got soooo much cheaper. Now, that may be a different interaction than what is described in this blog post; the exploration may be a precursor to it. But once I'm done exploring, I can define the problem and ask for solutions.
If it's DOA you'd better tell everyone who is currently doing this, that they're not really doing this.
Face it, the only reason you can do a decent review is because of years of hard won lessons, not because you have years of reading code without writing any.
1. Learn how to describe what you want in an unambiguous dialect of natural language.
2. Submit it to a program that takes a long time to transform that input into a computer language.
3. Review the output for errors.
Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.
In many cases it falls on the developer to talk the PM out of the bad idea and then into a better solution. Agents aren’t equipped to do any of that.
For any non-trivial problem, a PM with the same problem and 2 different dev teams will get drastically different solutions 99 times out of 100.
The 2nd time it will likely be pretty different because they’ll use what they learned to build it better. The 3rd time will be better still, but each time after that it will essentially be the same product.
An LLM will never converge. It definitely won’t learn from each subsequent iteration.
Human devs are also a lot more resilient to slight changes in requirements and wording. A slight change in language that wouldn’t impact a human at all will cause an LLM to produce completely different output.
Humans are very non deterministic: if you ask me to solve a problem today, the solution will be different from last week, last year or 10 years ago. We’ve learnt to deal with it, and we can also control the non-determinism of LLMs.
And humans are also very prone to hallucinations: remember those 3000+ gods that we’ve created to explain the world, or those many religions that are completely incompatible? Even if some are true, most of them must be hallucinations just by being incompatible to the others.
If you are very experienced, you won’t solve the problem differently day to day. You probably would with a 10 year difference, but you won’t ever be running the next model 10 years out (even if the technology matures), so there’s no point in doing that comparison. Solving the same problem with the same constraints in radically different ways day to day comes from inexperience (unless you’re exploring and doing it on purpose).
Calling what LLMs do hallucinations and comparing it to human mythology is stretching the analogy into absurdity.
I believe the author was trying to specifically distinguish their workflow from that, in that they are prompting for changes to the code in terms of the code itself, and reviewing the code that is generated (maybe along with also mentioning the functionality and testing it).
Of course there are times when you need someone extremely skilled at a particular language. But from my experience I would MUCH prefer to see how someone builds out a problem in natural language and have guarantees to its success. I’ve been in too many interviews where candidates trip over syntax, pick the wrong language, or are just not good at memorization and don’t want to look dumb looking things up. I usually prefer paired programming interviews where I cater my assistance to expectations of the position. AI can essentially do that for us.
Unless you are writing some shitty code for a random product that will be used for a demo then trashed, code comes down to a simple thing:
Code is a way to move ideas into the real world through a keyboard
So, reading that the future is using a random machine with averaged output (by design), but that this output of average quality will be good enough because the same random machine will generate tests of the same quality: this is ridiculous.

Tests are probably the one thing you should never build randomly. You should put a lot of thought into them: do they make sense? Does your code make sense? With tests, you are forced to use your own code, sometimes as your users will.
Writing tests is a good way to force yourself to be empathic with your users
People who are coding through AI are the equivalent of the pre-2015-era system administrators who renewed TLS certificates manually. They are people who can be (and are) replacing themselves with bash scripts. I don't miss them and I won't miss this new kind.
This is the bit I am having problems with: if you are rarely looking at the code, you will never have the skills to actually debug that significant escalation event.
If it's even possible, it will be more work than writing the code manually.
I'd compare it to gym work: some exercises work best until they don't, and then you switch to a less effective exercise to get you out of your plateau. Same with code and AI. If you're already good (because of years of hard won lessons), it can push you that extra bit.
But yeah, default to the better exercise and just code yourself, at least on the project's core.
doubt intensifies
Nice article by the way. I've found my workflow to be pretty much exactly the same using Claude code.
> Hand it off. Delegate the implementation to an AI agent, a teammate, or even your future self with comprehensive notes.
The AI agent just feels like a way to create tech debt on a massive scale while not being able to identify it as tech debt.
Regarding that string search: you really have to fight Claude to get it to use tree-sitter consistently. I have to do a search through my codebase to build an audit list for this stuff.
The benefit you might gain from LLMs depends on being able to discern good output from bad.
Once that's lost, the output of these tools becomes a complete gamble.
I actually like writing code. It does get tedious, I get that, when you're making yet another component. But I don't feel joy when you just will a bunch of code into existence with words. Typing feels like actively participating in development. Which, yeah, people use libraries/frameworks/boilerplate anyway.
My dream is to not be employed in software and do it for fun (or work on something I actually care about)
Even if I wrote some piece of crap, it is my piece of crap
Unfortunately, you won't be able to get a job in software with anything but AI skills, since humans no longer write software in the industry. People will look at you the way they used to look at anyone who wrote their own HTML or Javascript without frameworks and Typescript, like you must drive your car to work with your feet.
Also, it's funny how much time was wasted because there was random code in it that was never removed (non-working old code vs. the current working new code). That's not the AI's fault, but yeah.
I have a job in the industry now; funnily enough, I work with AI, e.g. AWS Bedrock/Knowledge Bases/Agents... RAG/LLM AI.
The AI I want to work with is vision/ML (robotics) but don't have the background for that (I do it as a hobby instead).
I'm feeling the effect of vibe coding now: the 2nd leader on our team was only recently a developer but uses ChatGPT/Windsurf to code for him, which enables him to work on random topics like OpenSearch one day and Airflow the next... I get that I'm the one being left behind by not doing it too, but I also want to really learn/understand something. You can do that with an AI-assisted thing, but I don't want to, that's what I'm saying. I will get out eventually once I've saved enough money.
My learning process for a while has been watching YT crash courses/reading the docs/finding articles...
The project I mentioned above there was literally a prompt in the repo "Write me an event-driven app with this architecture..."
The 2nd leader I mentioned above is a code at work/not at home type of person which is fine but yeah. I'm not that person, I like to actually code/make stuff outside of work. It's not just about getting a task done/shipping some code for me. But I guess that's what a business is, churn out something.
Idk, there's some validity there, isn't there... "I've been a developer for 10 years, then a guy with 2 years comes in vibe coding stuff" and he's the leader. But I'm past it, I don't do office politics anymore. I've got a six-fig job, no need to climb, I'm coasting. Debt is really the only problem I have.
An executive at a large company once told me about something where a spec had been written and reviewed by all relevant stakeholders: "That may be what I asked for, but it's not what I want."
Hi everyone, thanks for the spirited debate! I think there are some great points in the discussion so far. Some thoughts:
* "This didn't work for offshoring, why will it work all of a sudden?" I think there are good lessons to draw from offshoring around problem definition and what-not but the key difference is the iteration speed. Agents allow you to review stuff much faster, and you can look at smaller pieces of incremental work.
* "I thought this would be about async primitives in python, etc" Whoops sorry, I can understand how the name is confusing/ambiguous! The use of "async" here refers to the fact that I'm not synchronously looking at an IDE while writing code all the time.
* "You can only do this because you used to handwrite code". I don't think this workflow is a replacement for handwriting code. I still love doing that. This workflow just helps me do more.
Sure, it can look good now, when there's no legacy, but if you ever move into having to maintain that code you're going to be in a tough spot.
The question is how it compares to the medium level of offshoring. Near term I think that at comparable cost ($100s of dollars per week), it'll give faster results at an acceptable tradeoff in quality for most uses. I don't think most companies want to spend thousands of dollars a month on developer tools per developer though... even though they often do.
Someone, please, try to convince me why this is a positive thing.
This is just being a TL. The agent is an assistant or a member of the team. I don't know why we need to call it "async AI programming", unless we want to shy away from or obscure the idea that the agent is actually performing the job a human used to perform.
And then, I need to explain very precisely what "Promise.all()" (and "return") really mean in the context of async/await. That is something which (I feel) could have been abstracted away when the async/await syntax was defined, making the whole magic much more natural.
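For illustration, a minimal sketch of those two points (the function names here are made up): an `async` function's `return` implicitly wraps its value in a promise, and `Promise.all()` turns an array of promises into a single promise of an array:

```javascript
// An async function always returns a promise:
// `return 42` here effectively means `return Promise.resolve(42)`.
async function fetchAnswer() {
  return 42;
}

// Promise.all() takes an iterable of promises and resolves to an
// array of their results once ALL of them have fulfilled
// (it rejects as soon as any one of them rejects).
async function fetchBoth() {
  const [a, b] = await Promise.all([fetchAnswer(), fetchAnswer()]);
  return a + b; // again implicitly wrapped in a promise
}

fetchBoth().then((sum) => console.log(sum)); // prints 84
```

None of this is visible in the syntax itself, which is exactly the kind of thing that has to be spelled out up front.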
ChatGPT explanation: https://chatgpt.com/share/68c30421-be3c-8011-8431-8f3385a654...
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
prior to any dev they plan to do in JS/TS.
PS: 10 bucks that none of them would stay.
[spoil: "when you are already an expert in the tool detailed in it"]
Update: oh my god, I read the article. And feel completely cheated!!!!
Note for my future self: continue to read only the HN comments
1- Define what task the program should perform
2- Define how the program should do it
3- Write the code that does it.
Most SWEs usually skip straight to step 3 instead of going through 1 and 2, without giving it much thought, and implement their code iteratively. I think step 3 also includes testing, review, etc.

With AI, developers are forced to think about the functionality and the specs of their code in order to pass it to the AI to do the job, and can no longer just jump to step 3. Delegating to other devs requires the same process: senior engineers usually create design docs and pass them to junior engineers.
IMO automated verification and code reviews are already part of many developers workflows, so it's nothing new.
I get the point of the article, though: there are new requirements for programming, and things are different in terms of how folks approach it. But I do not agree that the method is new or that it should be called "async"; it's the same method with brand-new tools.
[1] https://www.youtube.com/watch?v=-4Yp3j_jk8Q
[2] https://www.youtube.com/watch?v=uyLy7Fu4FB4
See also that movie with Johnny Depp where AI takes over the world.
What I do is tell the computer to do something and wait until it is done.
Not that catchy (even in fewer words).

Why would I choose to slow myself down in the short term and allow my skills to atrophy in the long term (which will also slow me down)?
I can live without these things, but they're nice to have without expending the effort to figure out all the boilerplate necessary for solving very simple problems at their core. Sometimes AI can't get all the way to a solution, but usually it sets up enough of the boilerplate that only the fun part remains, and that's easy enough to do.
Managing a team of interns isn't fun, and I have no idea why someone who is a competent developer would choose to do that to themselves.
But among people I’ve worked with whose capabilities I can judge, the competent programmers are not building things like this. Among my own sample size there is a near perfect negative correlation between AI use and competency.
> This version of "async programming" is different from the classic definition. It's about how developers approach building software.
Oh async=you wait until it is done. How interesting.
The thing I like least about software engineering will now become the primary task. It's a sad future for me, but maybe a great one for some different personality type.