I keep being told this, and the tools keep falling at the first hurdle. This morning I asked Claude to use a library to load a TOML file in .NET and print a value. It immediately explained that TOML was an easy format to parse and didn’t need a library. I undid that, went back to plan mode, and it picked a library, added it, and claimed it was done. Except the code didn’t compile.
After three iterations of trying to get Claude to make it compile (it kept changing random lines around the clearly problematic one), I fixed it myself by following the example in the readme, and told Claude.
I then asked Claude to parse the rest of the TOML file, whereupon it blew away the compile fix I had made.
This isn’t an isolated experience. I hit these fundamental blocking issues with pretty much every attempt to use these tools that isn’t “implement a web page”, and even on that task it’s not long before they get tangled up in something or other…
The real magic of LLMs comes when they iterate to completion, until the code compiles and the tests pass, and you don't even bother looking at the output until then.
Each step is pretty stupid, but the ability to very quickly doggedly keep at it until success quite often produces great work.
If you don't have linters checking for valid syntax and approved coding style, if you don't have tests to ensure the LLM doesn't screw up the code, and if you don't have good CI, you're going to have a bad time.
LLMs are just like extremely bright but sloppy junior devs. If you put the same guardrails in place for your project that you would in that case, things tend to work very well: you're giving the LLM a chance to check its work and self-correct.
It's the agentic loop that makes it work, not the single-shot output of an LLM.
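A minimal sketch of what that loop amounts to, with a hypothetical AskModel standing in for the actual LLM call (everything here is illustrative, not any particular tool's API):

    // Illustrative agentic loop: build, feed failures back to the model, repeat.
    using System.Diagnostics;

    for (var attempt = 0; attempt < 10; attempt++)
    {
        var build = Run("dotnet", "build");
        var test = build.Ok ? Run("dotnet", "test") : build;
        if (build.Ok && test.Ok) break;    // only now does a human look at it
        AskModel($"The build or tests failed:\n{test.Output}\nPlease fix the code.");
    }

    // Hypothetical stand-in for the LLM call; in reality the model edits files here.
    static void AskModel(string prompt) { /* call your model of choice */ }

    // Run a command, returning whether it succeeded plus its combined output.
    static (bool Ok, string Output) Run(string cmd, string args)
    {
        var psi = new ProcessStartInfo(cmd, args)
        {
            RedirectStandardOutput = true,
            RedirectStandardError = true,
        };
        using var p = Process.Start(psi)!;
        var output = p.StandardOutput.ReadToEnd() + p.StandardError.ReadToEnd();
        p.WaitForExit();
        return (p.ExitCode == 0, output);
    }

The point isn't the code, it's the shape: the model only ever sees concrete failure output, and nothing ships until the checks pass.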
There are techniques that can help deal with this but none of them work perfectly, and most of the time some direct oversight from me is required. And this really clips the potential productivity gains, because in order to effectively provide oversight you need to page in all the context of what's going on and how it ought to work, which is most of what the LLMs are in theory helping you with.
LLMs are still very useful for certain tasks (bootstrapping in new unfamiliar domains, tedious plumbing or test fixture code), but the massive productivity gains people are claiming or alluding to still feel out of reach.
For instance, if you are working on a compiler and have a huge database of sample code to compile that all has tests itself, the rule becomes "all sample code must compile and pass its tests, which ensures your new optimizer code gets adequate branch coverage in the process". The underlying task can be very difficult, but you have a large amount of test coverage with a very good chance of catching errors.
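A hypothetical harness for that gate might look like the sketch below (the "test-corpus" layout and the dotnet build/test steps are assumptions for illustration, not anyone's actual setup):

    // Hypothetical regression gate: every sample in the corpus must still
    // build and pass its own tests after the optimizer change.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;

    var failures = new List<string>();
    foreach (var sample in Directory.EnumerateDirectories("test-corpus"))
    {
        foreach (var step in new[] { "build", "test" })
        {
            using var p = Process.Start(new ProcessStartInfo("dotnet", step)
            {
                WorkingDirectory = sample,
            })!;
            p.WaitForExit();
            if (p.ExitCode != 0) { failures.Add($"{sample}: {step} failed"); break; }
        }
    }
    Console.WriteLine(failures.Count == 0
        ? "corpus clean"
        : $"{failures.Count} regressions:\n" + string.Join("\n", failures));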
At the very least, "LLM code compiles, and is formatted and documented according to lint rules" is pretty basic. If people are saying LLM code doesn't compile, then yes, you are using it very incorrectly; you're not even beginning to engage the agentic loop, and compiling is the simplest step.
Sure, a lot of more complex cases require oversight or don't work.
But "the code didn't compile" is definitely in "you're holding it wrong" territority, and it's not even subtle.
But honestly I think sane code organization is the bigger hurdle, which is a lot harder to get right without manual oversight. Which of course leads to the temptation to give up on reviewing the code and just trusting whatever the LLM outputs. But I'm skeptical this is a viable approach. LLMs, like human devs, seem to need reasonably well-organized code to be able to work in a codebase, but I think the code they output often falls short of this standard.
(But yes agree that getting the LLM to iterate until CI passes is table-stakes.)
I think getting good code organization out of an LLM is one of the subtler things - I've learned quite a bit about what sort of things need to be specified, realizing that the LLM isn't actively learning my preferences particularly well, so there are some things about code organization I just have to be explicit about.
Which is more work, but less work than just writing the code myself to begin with.
I don't know anything about .NET, but I just fired up Claude Code in an empty directory and asked it to create an example dotnet program using the Tomlyn library. It chugged away, and ~5 minutes later I did a "grep Deserialize *" in the project and it came up with exactly the line you (in your comment here) wanted it to produce: var model = TomlSerializer.Deserialize<TomlTable>(tomlContent)!;
The full results of what it produced are at https://github.com/linsomniac/tomlynexample
That includes the prompt I used, which is:
Please create a dotnet sample program that uses the library at https://github.com/xoofx/Tomlyn to parse the TOML file given on the command line. Please only use the Tomlyn library for parsing the TOML file. I don't have any dotnet tooling installed on my system, please let me know what is needed to compile this example when we get there. Please use an agent team consisting of a dotnet expert, a qa expert, a TOML expert a devils advocate and a dotnet on Linux expert.
I can't really comment on the code it produced (as I said, I don't use dotnet; I had to install it on my system to try this), so I can't comment on the approach. 346 lines in Program.cs seems like a lot for an example TOML program, but I know Claude Code tends to do full error checking, etc., and it seems to have a lot of "pretty printing" code.
Like when I was trying to find a physical store again with ChatGPT Pro 5.4 and asked it to prepare a list of candidates, but the shop just wasn't in the list, despite GPT claiming it to be exhaustive. When I then found it manually and asked GPT for advice on how I could improve my prompting in the future, it went full "aggressively agreeable" on me with "Excellent question! Now I can see exactly why my searches missed XY - this is a perfect learning opportunity. Here's what went wrong and what was missing: ..." and then 4 sections with 4 subsections each.
It's great to see the AI reflect on how it failed. But it's also kind of painful if you know that it'll forget all of this the moment the text is sent to me and that it will never ever learn from this mistake and do better in the future.
If 80% of the time they 10x my output, and the other 20% I can say "well they failed, I guess this one I have to do manually" - that's still an absolutely massive productivity boost.
I wonder if it was getting blocked on searches or something, and just didn't tell you.
Legit, this morning Claude was essentially unusable for me.
I could explicitly state things it should adjust and it wouldn't do it.
Not even after specifying again, eventually reverting everything and reprompting from the beginning, etc. Even super trivial frontend things like "extract [code] into a separate component".
After 30 minutes of that I relented and went to read a book. After lunch I tried again and its intelligence was back to normal.
It's so uncanny to experience how much its performance changes. I strongly suspect Anthropic is doing something whenever its intelligence drops so much, especially because it's always temporary, but repeatable across sessions when it occurs... until it's normal again.
But ultimately just speculation, I'm just a user after all
Honestly, this is my experience. Every now and again it just completely self-implodes and gives up, and I’m left to pick up the pieces. Look at the other replies making sure I’m using the agentic loop/correct model/specific-enough prompt. I don’t know what they’re doing, but I would love to try the tools they’re using.
I try to give the model as little freedom as possible. That usually means it's not being used for novel work.
But that's the hard part! You can only eke out moderate productivity gains by automating the tedium of actually writing out the code, because it's a small fraction of software engineering.
Then it crawls around for a while, does some web searches, fetches docs from here and there, whatever. Sometimes it'll ask me some questions. And then it'll finally spit out a plan. I'll read through it and just give it a massive dump of issues, big and small, more questions I have, whatever. (I'll also often be spinning off new planning sessions for pre-work or ancillary tasks that I thought of while reviewing that plan). No structure or anything, just brain dump. Maybe two rounds of that, but usually just one. And then I'll either have it start building, or I'll have it stash in the linear agent so I can kick it off later.
I remember having to write code on paper for my CS exams, and they expected it to compile! It was hard, but I mostly got there. Definitely made a few small mistakes though.
Friday afternoon I made a new directory and told Claude Code I wanted to make a Go proxy so I could have a request/callback HTTP API for a 3rd party service whose official API is only persistent websocket connections. I had it read the service’s API docs, engage in some back and forth to establish the architecture and library choices, and save out a phased implementation plan in plan mode. It implemented it in four phases with passing tests for each, then did live tests against the service in which it debugged its protocol mistakes using curl. Finally I had it do two rounds of code review with fresh context, and it fixed a race condition and made a few things cleaner. Total time, two hours.
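The core correlation trick in that kind of proxy is small enough to sketch. Theirs was in Go; here is the same idea in C# for illustration, with every name made up rather than taken from their code:

    // Each HTTP request gets an ID, is written to the one persistent
    // websocket, and awaits the matching inbound message from the read loop.
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    var pending = new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    // Called from the HTTP handler: send a frame, await the correlated reply.
    async Task<string> CallAsync(Func<string, Task> sendOverSocket, string payload)
    {
        var id = Guid.NewGuid().ToString("N");
        var tcs = new TaskCompletionSource<string>(
            TaskCreationOptions.RunContinuationsAsynchronously);
        pending[id] = tcs;
        await sendOverSocket($"{{\"id\":\"{id}\",\"payload\":{payload}}}");
        return await tcs.Task.WaitAsync(TimeSpan.FromSeconds(30)); // time out, don't hang
    }

    // Called from the single websocket read loop for every inbound message.
    void Dispatch(string id, string body)
    {
        if (pending.TryRemove(id, out var tcs)) tcs.TrySetResult(body);
    }

The pending-map handoff is exactly the kind of place a race can hide, which matches the race condition their code-review pass caught.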
I have noticed some people I work with have more trouble, and my vague intuition is it happens when they give Claude too much autonomy. It works better when you tell it what to do, rather than letting it decide. That can be at a pretty high level, though. Basically reduce the problem to a set of well-established subproblems that it’s familiar with. Same as you’d do with a junior developer, really.
Equating "junior developers" and "coding LLMs" is pretty lame. You handhold a junior developers so, eventually, you don't have to handhold anymore. The junior developer is expected to learn enough, and be trusted enough, to operate more autonomously. "Junior developers" don't exist solely to do your bidding. It may be valuable to recognize similarities between a first junior developer interaction and a first LLM interaction, but when every LLM interaction requires it to be handheld, the value of the iterative nature of having a junior developer work along side you is not at all equivalent.
I simply said the description of the problem should be broken down similar to the way you’d do it for a junior developer. As opposed to the way you’d express the problem to a more senior developer who can be trusted to figure out the right way to do it at a higher level.
What’s giving too much autonomy about
“Please load settings.toml using a library and print out the name key from the application table”? Even if it’s underspecified, surely it should at least leave it _compiling_?
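(For concreteness: going by the Tomlyn readme, my best guess at a minimal version of that exact task is just a few lines; the settings.toml shape is the one assumed in the prompt above.)

    // Load settings.toml with Tomlyn and print the "name" key
    // from the [application] table.
    using System;
    using System.IO;
    using Tomlyn;
    using Tomlyn.Model;

    var toml = File.ReadAllText("settings.toml");
    var model = Toml.ToModel(toml);                  // a dictionary-like TomlTable
    var application = (TomlTable)model["application"];
    Console.WriteLine(application["name"]);
    // Tomlyn can also deserialize into a typed class via Toml.ToModel<T>(toml).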
I’ve been posting comments like this monthly here, my experience has been consistently this with Claude, opencode, antigravity, cursor, and using gpt/opus/sonnet/gemini models (latest at time of testing). This morning was opus 4.6
Are you using Claude Code? Do you have it configured so that you are not allowing it to run the build? I've observed that Claude Code is extremely good at making sure the code compiles, because it runs a compile and addresses any compile errors as part of the work.
I just asked it to build a TOML example program in DotNet using Tomlyn, and when it was done I was able to run "./bin/Debug/net8.0/dotnettoml example.toml", it had already built it for me (I watched it run the build step as part of its work, as I mentioned it would do above).
> I’ve observed Claude code is extremely good at making sure the code compiles
My observation is that it’s fine until it’s absolutely not, and the agentic loop fails.
We’ve gone from “I’m baffled at your experience” to “well, yeah, it often fails” in two sentences here…
The code it needed to write was:
var model = TomlSerializer.Deserialize<TomlTable>(toml)!;
Which is in the readme of the repo. It could also have generated a class and deserialised into that. Instead it did something else (afraid I don’t have it handy, sorry).

If your dev group is spending 90% of their time on these... well, you'd probably be right to fire someone. Not most of the developers, but whoever put in place a system where so much time is spent on overhead/retrograde activities.
Something that's getting lost in the new, low cost of generating code is that code is a burden, not an asset. There's an ongoing maintenance and complexity cost. LLMs lower the maintenance cost, but if you're generating 10x the code you aren't getting ahead. Meanwhile, the cost of unmanaged complexity grows exponentially. LLMs or no, you hit a wall if you don't manage it well.
My company has 20 years of accumulated tech debt, and the LLMs have been pretty amazing at helping us dig out from under a lot of that.
You make valid points, but I'm just going to chime in with adding code is not the only thing that these tools are good at.
At my company, we are maintaining our hiring plan (I'm the decision maker). We have never been more excited at our permission to win against the incumbents in our market. At the same time, I've never been more concerned about other startups giving us a real run. I think we will see a bit of an arms race for the best talent as a result.
Productivity without clear vision, strategy and user feedback loops is meaningless. But those startups that are able to harness the productivity gains to deliver more complete and polished solutions that solve real problems for their users will be unstoppable.
We've always seen big gains by taking a team of say 8 and splitting it into 2 teams of 4. I think the major difference is that now we will probably split teams of 4 into 2 teams of 2 with clearer remits. I don't want them to necessarily deliver more features. But I do want them to deliver features with far fewer caveats at a higher quality and then iterate more on those.
Humans that consume the software will become the bottlenecks of change!
Ruby on Rails and its imitators blew away tons of boilerplate. Despite some hype at the time about a productivity revolution, it didn’t _really_ change that much.
> , libraries, build-tools,
Unsure what you mean by this; what bearing do our friends the magic robots have on these?
> and refactoring
Again, IntelliJ did not really cause a productivity revolution by making refactoring trivial about 20 years ago. Also, refactoring is kind of a solved problem thanks to IntelliJ et al.; what’s an LLM getting you there that decent deterministic tooling doesn’t?
the ones i've used come with defaults that you can then customize. here are some of the better ones:
- https://guides.rubyonrails.org/command_line.html#generating-...
- https://hexdocs.pm/phoenix/Mix.Tasks.Phx.Gen.html
- https://laravel.com/docs/13.x/artisan#stub-customization
- https://learn.microsoft.com/en-us/aspnet/core/fundamentals/t...
> my experience has been these get left behind as the service implementations change
yeah i've definitely seen this, ultimately it comes down to your culture / ensuring time is invested in devex. an approach that helps avoid drift is generating directly from an _actual_ project instead of using something like yeoman, but that's quite involved
> it comes down to ensuring time is invested in devex
That’s actually my point: the orgs haven’t invested in devex, but that didn’t matter because Copilot could figure out what to do!
And of course, we didn't see a massive layoff after the introduction of say, StackOverflow, or DreamWeaver, or jQuery vs raw JS, Twitter Bootstrap, etc.
If it is writing both the code and the tests, then you're going to find that its tests are remarkable: they just work. At least until you deploy to a live state and start testing for yourself. Then you'll notice that it's mostly only testing the exact code that it wrote; it's not confrontational or trying to find errors, and it already assumes that it's going to work. It won't ever come up with the majority of breaking cases that a developer will by itself; you will need to guide it. Also, while fixing those, the odds of introducing other breaking changes are decent, and after enough prompts you are going to lose coherency no matter what you do.
It definitely makes a lot of boilerplate code easier, but what you don't notice is that it's just moving the difficult-to-find problems into hidden new areas. That fancy code it wrote maybe doesn't take the building blocks and lower levels, such as database optimization, into account. Even for a simple application, a half-decent developer can create something that will run quite a bit faster. If you start bringing these problems to it, then it might be able to optimize them, but the amount of time that's going to take is non-negligible.
It takes developers time to sit on code, learn it along with the problem space, and figure out how to tie them together effectively. If you take that away there is no learning; you're just the monkey copy-pasting the produced output from the black box and hoping that you get a result that works. Even worse, every step you take doesn't bring you any closer to the solution; it's pretty much random.
So what is it good for? It can read, "understand", translate, write, and explain things to a sufficient degree much faster than us humans. But if you are (at the moment) trusting it at anything past the method level for code, then you're shooting yourself in the foot; you're just not feeling the pain until later. In a day you can have it generate, for example, a whole website, backend, DB, etc. for your new business idea, but that's not a "product"; it might as well be a promotional video that you throw away once you've used it to impress the investors. For now that might still work, but people are already catching on and beginning to wise up.
I feel like most folks here commenting uncritically about the second coming of Jesus must work in code sweatshops, churning out eshops or whatever is in vogue today, quickly moving on, never looking back, never maintaining and working on their codebases for a decade+. Where I existed my whole career, speed of delivery was never the primary concern; quality of delivery (which everybody unanimously complains about, one way or another, with LLMs) was much more critical, and that's where the money went. Maybe I just picked the right businesses, but then again I worked for an energy company, insurance, telco, government, army, municipality, 2 banks and so on across 3 European states. Also, full working-code delivery to production is a rather small part of any serious project; even a 5x speed increase would move the needle just a tiny bit overall.
If I were more junior, I would feel massive FOMO from reading all this (since I use it so far just as a quicker google/stackoverflow and for some simpler refactoring, but oh boy is it sometimes hilariously and dangerously wrong). I am not, thus I couldn't care less. The Node.js craze, 10x stronger; not seeing the forest for the trees.
For greenfield development you don't need as many software engineers. Some developers (the top 10%) are still needed to guide AI and make architectural decisions, but the remaining 90% will work on the lifecycle management task mentioned above.
The productivity gains can be used to produce more software, and if you are able to sell the software you produce should result in a revenue boost. But if you produce more than you can sell then some people will be laid off.
Another way of increasing profit is to simply reduce your headcount by 90% while keeping the same output.*
Hence, I think some companies will keep downsizing. Some companies will hire. It depends a lot.
*Assuming 90% productivity increase.
Is it the same with tech? Facebook has 3 billion monthly active users. No amount of tech will bring that up to 6 billion. If you were to double the amount of time someone spends on Facebook, or double the ads they see or double the click through rate, what does that really mean?
Take the example of Facebook. They are in social media, messaging, AI, VR/AR hardware and software, a few other things, and the meta-universe, whatever that was; now they're left with just the name. Facebook isn't delivering or successful in all its ventures; it knows that, and it keeps investing in other segments.
More productivity would mean at least diversifying; they have some of the best engineers, and it would make no sense not to attempt to hit the jackpot by playing more machines.
What fewer people talk about is that the entire tech industry is tertiary services. Ads, entertainment, communication, etc. If/when hard industries take a hit, tertiary takes a hit. If it isn't clear to you that the overall economy has already started to take some irreversible dents, and that those will accelerate, know that capital is well aware.
Or we can continue the wishful thinking and seek comfort that monetary tightening is just temporary, that investments will flow more into adjacent ventures and growth is around the corner, and that the U.S. still is and will remain the world's strongest economy.
I think most companies are making the right call by downsizing instead of staying same size. Let people go to where there is more potential for growth.
Companies without the same constraints are well equipped to keep who they've got, pivot them into managing/overseeing agents to scale, and build better products from the outset.
So this'll be a good opportunity for smaller companies (or not-for-profits like co-ops and credit unions) to eat the lunches of bigger companies that'll be slow to adapt.
After the combine harvester, we produced the same food with fewer people.
At the moment, it seems like hardware is the constraint. Companies don't have access to enough machines or tokens to keep all their devs occupied, so they let some go. Maybe that changes, maybe we already have too much software?
Personally I think we already had too much software before LLMs and even without them many devs would have found themselves jobless as startups selling to startups selling to startups failed and we realized (again) that food, shelter, security, education etc are 'real' industries, but software isn't one if it's not actively helping one of those.
Unfortunately this kind of software needs specialised domain knowledge to produce that AI doesn't have yet, but when (if) it arrives I hope we see strides forwards in hardware engineering productivity.
Smart organizations will not just deliver better products but likely start products that they were hesitant to start before because the cost of starting is a lot closer to zero. Smart engineering leadership will encourage developers into delivering value and not self-serving, endless iterations of tooling enhancements, etc.
If I was a CTO and my competitor Y fired 90% of their devs, I'd try to secure funding to hire their top talent and retain them. The vitriol alone could fuel some interesting creations and when competitor Y realizes things later, their top talent will have moved on.
>> Smart organizations will not just deliver better products but likely start products [...]
This is not the 90s anymore when low hanging fruit was everywhere ready to be picked. We have everything under the sun now and more.
The problem with bullshit apps is not that they took you 5 months to build. What you build now in 5 minutes is still bullshit. Most of the remaining work is bullshit jobs: spinning up useless "features" and frameworks that nobody needs and shoving them down the throats of customers who never asked for them. Now it's possible to dig holes and fill them back in (do pointless work) at a much improved pace thanks to AI.
Situation a/ LLMs increase developers' productivity: you hire more developers as you cash in the profit. If you don't, your competitor will.
b/ LLMs don't increase productivity: you keep cruising, and rejoice seeing some competitors lay off.
Reality shows dissonance with these supposedly only-possible scenarios. Absurd decision making, a mistake? No mistake. Many tech companies are facing difficulties; they need to lose weight to remain profitable and appease shareholders' demands for bigger margins.
How do you do this without a backlash? AI is replacing developers; Anthropic's CEO said engineers don't write code anymore, the role obsolete in 6 months. It naturally makes sense that we have to let some of them go. And if the prophecy doesn't come true, well, nobody ever got fired for buying IBM.
The more grounded reality is that AI coding can be a productivity multiplier in the right hands, and a significant hindrance in the wrong hands.
Somewhere there exists a happy medium between vibe coding without ever looking at the code, and hand-writing every single line.
Seniors can adjust, but e.g. junior frontend-only devs might be doomed in both situations, as they might not be able to contribute enough to business-critical features to justify their cost, and most frontend-related tasks will be taken over by the "10x" seniors.
If it is a big company the answer is and will always be: whatever makes the stock price rise the most.
This is a crazy take. Even if said people are matching or exceeding the outcome of those using the technology?
I’m not in this group. But the closest analog to what you are saying is firing people for not using a specific IDE.
Remember sometimes the most productive thing to have is not money or people but time with your ideas.
My own experience...
I've tried approaching vibe coding in at least 3 different ways. At first I wrote a system that had specs (markdown files) with a 1-to-1 mapping between each spec and a matching python module. I only ever edited the spec, treating the code itself as an opaque thing that I ignore (though I defined the interfaces for it). It kind of worked, though I realized how distinct the difference between a spec that communicates intent and a spec that specifies detail really is.
From this, I felt that maybe I need to stay closer to the code, but just use the LLM as a bicycle of the mind. So I tried "write the code itself, and integrate an LLM into emacs so that you can have a discussion with the LLM about individual code, but you use it for criticism and guidance, not to actually generate code". It also worked (though I never wrote anything more than small snippets of Elisp with it). I learned more doing things this way, though I have the nagging suspicion that I was actually moving slower than I theoretically could have. I think this is another valid way.
I'm currently experimenting with a 100% vibe coded project (https://boltread.com). I mostly just drive it through interaction on the terminal, with "specs" that kind of just act as intent (not specifications). I find the temptation to get out of the outside critic mode and into just looking at the code is quite strong. I have resisted it to date (I want to experiment with what it feels like to be a vibe coder who cannot program), to judge if I realistically need to be concerned about it. Just like LLM generated things in general, the project seems to get closer and closer to what I want, but it is like shaping mud, you can put detail into something, but it won't stay that way over time; its sharp detail will be reduced to smooth curves as you then switch to putting detail elsewhere. I am not 100% sure on how to deal with that issue.
My current thought is that we have failed to find a good way of switching between the "macro" (vibed) and the "micro" (hand-coded) view of LLM development. It's almost like we need modules (blast chambers?) for different parts of any software project, where we can switch to doing things by hand (or at least with more intent) when necessary, and doing things by vibe when not. Striking the balance between those things that nets the greater output is quite challenging, and it may not even be that there is an optimal intersection; it may simply be that you are exchanging immediate change for future flexibility of the software.
I think that it's more along the lines of "do you fire people" instead of just "do you fire devs". Fewer devs means less of a need for PMs, so they can be let go as well, and maybe with the rise of AI assisted design tools, you don't need as many UX people, so you let some of them go as well.
As for building better products, I feel like that's a completely different topic than using AI for productivity gains, but only because at the end of the day you need buy in from upper management in order to build the features/redo existing features/both that will make the product better. I should also mention I'm viewing this from the position of someone who works at an established company and not a startup, so it may differ.
Amazon has demonstrated that it takes just as long, or longer, to have senior devs review LLM output as it would to just have the senior devs do the programming in the first place. But now your senior devs are wasted on reviewing instead of developing or engineering. Amazon, Microsoft, Google, Salesforce, and Palantir have all suffered multiple losses in the tens of millions (or more) due to AI output issues. Now that Microsoft has finally realized how bad LLMs really are at generating useful output, they've begun removing AI functionality from Windows.
Product quality matters more than time to market. Especially in tech, the first-to-market is almost never the company that dominates, so it's truly bizarre that VCs are always so focused on their investments trying to be first to market instead of best to market.
If Competitor Y just fired 90% of their developers, I would have a toast with my entire human team. And a few months later, we'd own the market with our superior product.
I'm not sure what your circumstances are but even if it's not true for you, it's true for many other people.
People online with identical views to theirs all assure me that they're all highly skilled, though.
Meanwhile I've been experimenting with using AI for shopping, and all of them so far are horrendous. Can't handle basic queries without tripping over themselves.
But you can understand why all the 1700-and-below chess players say it is good and that it is making them better when they use it for eval?
Don't worry, AI will replace you one day, you are just smarter than most of us so you don't see it yet.
This kind of thinking is actually a big reason why execs are being misinformed into overestimating LLM abilities.
LLM coding agents alone are not good enough to replace any single developer. They only make a developer x% faster. Devs who are now x% faster may then allow you to lay off another dev (e.g., ten devs each 10% faster free up roughly one dev's worth of capacity). That is a subtle yet critical difference.
For me the main difference is that now some people can explain what their code does, while others can only explain what it is meant to achieve.
This is an interesting choice for a first experiment. I wouldn't personally base AI's utility for all other things on its utility for shopping.
Most people don't really understand coding, but shopping is a far simpler task, so it's easier to see how and where it fails (i.e. with even mildly complex instructions).
On the tech side I see it saving some time with stuff like mock data creation, writing boiler plate, etc. You still have to review it like it's a junior. You still have to think about the requirements and design to provide a detailed understanding to them (AI or junior).
I don't think either of these will provide 90% productivity gains. Maybe 25-50% depending on the job.
Sure, it is not as fast to understand as code I wrote myself. But at least I mostly need to confirm how it implemented what I asked, not figure out WHAT it even decided to implement in the first place.
And in my org, people move around projects quite a bit. It hasn't been uncommon for me to jump into projects with 50k+ lines of code a few times a year to help implement a tricky feature, or to help optimize things when they run too slow. Lots of code to understand then. Depending on who wrote it, sometimes it is simple: one or two files to understand, clean code. Sometimes it is an interconnected mess, and IMHO often way less organized than AI-generated code.
And the same goes for the review process: lots of having to understand new code. At least with AI you are fed the changes at a slower pace.
Because it does.
> I still don't see ANY proof that it doesn't generate a total unmaintainable unsecure mess, that since you didn't develop, you don't know how to fix.
I wouldn't know since it's been years since I've tried but I'd imagine that Claude Code would indeed generate a half-baked Next.js monstrosity if one-shot and left to its own devices. Being the learned software engineer I am, however, I provide it plenty of context about architecture and conventions in a bootstrapped codebase and it (mostly) obeys them. It still makes mistakes frequently but it's not an exaggeration to say that I can give it a list of fields with validation rules and query patterns and it'll build me CRUD pages in a fraction of the time it'd take me to do so.
I can also give it a list of sundry small improvements to make and it'll do the same, e.g. I can iterate on domain stuff while it fixes a bunch of tiny UX bugs. It's great.
not talking about toys or vibecoded crap no one uses.
Weirdly, people who have actually created functional one-man products don't seem to have the same problem, as they welcome the business.
Nobody is.
Perhaps nobody cares to “convince you” and “win you over”, because… why? Why do we all have to spoon-feed this one to you while you kick and scream every step of the way?
If you don’t believe it, so be it.
We are very much in need of an actual way to measure real economic impact of AI-assisted coding, over both shorter and longer time horizons.
There's been an absolute rash of vibecoded startups. Are we seeing better success rates or sales across the industry?
That's the same false argument that the religious have offered for their beliefs and was debunked by Bertrand Russell's teapot argument: https://en.wikipedia.org/wiki/Russell%27s_teapot
If you use it correctly, you can get better quality, more maintainable code than 75% of devs will turn in on a PR. The “one weird trick” seems to be to specify, specify, specify. First you use the LLM to help you write a spec (or document one, if it’s pre-existing). Make sure the spec is correct and matches the user story and edge cases. The LLM is good at helping here too. Then break down separations of concerns, APIs, and interfaces. Have it build a dependency graph. After each step, have it reevaluate the entire stack to make sure it is clear, clean, and self-consistent.
Every step of this is basically the AI doing the whole thing, just with guidance and feedback.
Once you’ve got the documentation needed to build an actual plan for implementation, have it do that. Each step, you go back as far as relevant to reevaluate. Compare the spec to the implementation plan, close the circle. Then have it write the bones, all the files and interfaces, without actual implementations. Then have it reevaluate the dependency graph and the plan and the file structure together. Then start implementing the plan, building testing jigs along the way.
You just build software the way you used to, but you use the LLM to do most of the work along the way. Every so often, you’ll run into something that doesn’t pass the smell test and you’ll give it a nudge in the right direction.
Think of it as a junior dev that graduated top of every class ever, and types 1000wpm.
Even after all of that, I’m turning out better code, better documentation, and better products, and doing what used to take 2 devs a month, in 3 or 4 days on my own.
On the app development side of our business, the productivity gain is also strong. I can’t really speak to code quality there, but I can say we get updates in hours instead of days, and there are fewer bugs in the implementations. They say the code is better documented and easier to follow, because they’re not under pressure to ship hacky prototype code as if it were production.
On the current project, our team is half the size it would have been last year, and we are moving about 4x as fast. What doesn’t seem to scale for us is size: if we doubled our team size, I think the gains would be very small compared to the costs. Velocity seems to be throttled more by external factors.
I really don’t understand where people are coming from saying it doesn’t work. I’m not sure if it’s because they haven’t tried a real workflow, or maybe tried it at all, or they are definitely “holding it wrong.” It works. But you still need seasoned engineers to manage it and catch the occasional bad judgment or deviation from the intention.
If you just let it, it will definitely go off the rails and you’ll end up with a twisted mess that no one can debug. But if you use a system of writing the code incrementally through a specification/evaluation loop as you descend the abstraction from idea to implementation, you’ll end up winning.
As a side note, and this is a little strange and I might be wrong because it’s hard to quantify and all vibes, but:
I have the AI keep a journal of its observations and general impressions, sort of the “meta” without the technical details. I frame this to it as a continuation of “awareness” for new sessions.
I have a short set of “onboarding“ documents that describe the vision, ethos, and goals of the project. I have it read the journal and the onboarding docs at the beginning of each session.
I frame the AI as a “collaborator” rather than a tool. At the end of the day, I remind it to update its journal of reflections about the day’s work. It’s total anthropomorphism, obviously, but it seems to inspire “trust” in the relationship, and it really seems to up-level the effort that the AI puts in. It kinda makes sense, LLMs being modelled on human activity.
FWIW, I’m not asserting anything here about the nature of machine intelligence, I’m targeting what seems to create the best result. Eventually we will have to grapple with this I imagine, but that’s not today.
When I have forgotten to warm-start the session, I find that I am rejecting much more of the work. I think this would be worth someone doing an actual study to see if it is real or some kind of irresistible cognitive bias.
I find that the work produced is much less prone to going off the rails or taking shortcuts when I have this in the context, and by reading the journal I get ideas on where and how to do a better job of steering and nudging to get better results. It’s like a review system for my prompting. The onboarding docs seem to help keep the model working towards the big picture? Idk.
This “system” with the journal and onboarding only seems to work with some models. GPT5 for example doesn’t seem to benefit from the journal and sometimes gets into a very creepy vibe. I think it might be optimized for creating some kind of “relationship” with the user.
I suspect you either already were or would’ve been great at leading real human developers, not just AI agents. Directing an AI towards good results is shockingly similar to directing people. I think that’s a big thing separating those getting great results with AI from those claiming it simply does not work. Not everyone is good at high-level planning, architecture, and directing others. But those who already had those skills basically hit the ground running with AI.
There are many people working as software engineers who are just really great at writing code, but may be lacking in the other skills needed to effectively use AI. They’re the angry ones lamenting the loss of craft, and rightfully so, but their experience with AI doesn’t change the shift that’s happening.
Terafab is suddenly making so much sense!
You hire more if you are in growth mode and have new ideas you just never had the chance to implement, because they were not practical or feasible at the previous level of tech (non-assisted humans typing out code and taking sick leave).
The CTO is rewriting the company platform (by himself, with AI) and is convinced it's a 100x productivity gain. But when you step back and look at the broader picture, he's rewriting what something like Rails, .NET, or Spring gave us 15-20 years ago, just in languages and code styles that he (alone) is familiar with. That's not 100x for the business, sorry...