The latest "meta" in AI programming appears to be agent teams (or swarms or clusters or whatever) that are designed to run for long periods of time autonomously.
Through that lens, these changes make more sense. They're not designing UX for a human sitting there watching the agent work. They're designing for horizontally scaling agents that work in uninterrupted stretches where the only thing that matters is the final output, not the steps it took to get there.
That said, I agree with you in the sense that the "going off the rails" problem is very much not solved even on the latest models. It's not clear to me how we can trust a team of AI agents working autonomously to actually build the right thing.
As soon as you start to work with a codebase that you care about and need to seriously maintain, you'll see what a mess these agents make.
E.g. I use these tools to clean up or reorganize old tests (with coverage and diff viewers catching things I might miss), update documentation with cross-links (with documentation linters catching errors I miss), convert tests into benchmarks running as part of CI, build log file visualizers, and much more.
These tools are amazing for dealing with the long tail of boring issues that you never get to, and used this way they noticeably raise the quality of the codebase.
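To make the "convert tests into benchmarks" part concrete, a converted test ends up looking roughly like this (a sketch assuming pytest-benchmark; the parser and sample line are made-up stand-ins):

    # hypothetical module under test: a tiny log-line parser
    from collections import namedtuple

    Record = namedtuple("Record", ["level", "message"])
    SAMPLE_LINE = "ERROR disk quota exceeded"

    def parse_log_line(line: str) -> Record:
        level, _, message = line.partition(" ")
        return Record(level, message)

    # before: a plain correctness test
    def test_parse_log_line():
        assert parse_log_line(SAMPLE_LINE).level == "ERROR"

    # after: the same call timed through pytest-benchmark's `benchmark` fixture,
    # so CI tracks its timing as well as its correctness
    def test_parse_log_line_speed(benchmark):
        record = benchmark(parse_log_line, SAMPLE_LINE)
        assert record.level == "ERROR"

CI can then run these separately (pytest-benchmark's only/compare options) against a stored baseline, so a performance regression fails a job instead of being a vague feeling.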
We use agents very aggressively, combined with beads, tons of tests, etc.
You treat them like any developer, and review the code in PRs, provide feedback, have the agents act, and merge when it's good.
We have gained tremendous velocity and have been able to tackle far more of the backlog, including work we'd been forced to keep in the icebox before.
This idea of setting the bar at "agents work without code reviews" is nuts.
I know people have emotional responses to this, but if you think people aren’t effectively using agents to ship code in lots of domains, including existing legacy code bases, you are incorrect.
Do we know exactly how to do that well? Of course not; we still fruitlessly argue about how humans should write software. But there is a growing body of techniques for agent-first development, and a lot of those techniques are naturally converging because they work.
This is not to suggest that AI tools have no value, but that "I just have agents writing code and it works great!" has yet to be put to the test.
I get it; I do. It's rapidly challenging the paradigm that we've set up over the years in a way that is incredibly jarring, but in MOST industries this is going to be our new reality, and you'll either adapt or be left behind; highly regulated industries are a different beast.
So, instead of dismissing this out of hand, figure out the best ways to integrate agents into your and your teams'/companies' workstreams. It will accelerate the work and change your role from what it is today into something different; something that takes time and experience to work with.
But that's not the argument. The argument is that these tools produce lower-quality output, and checking that output often takes more time than doing the work oneself. It's not that "we're conservative and afraid of change"; heck, you're talking to a crowd that used to celebrate a new JS framework every week!
There is a push to accept lower quality and to treat it as a new normal, and people who appreciate high-quality architecture and code express their concern.
> It will accelerate the work and change your role from what it is today to something different;
We have yet to see if different is good. My short experience with an LLM reviewing my code is that its output is overly explanatory, and it slows me down.
> something that takes time and experience to work with.
So you're inviting us to participate in the sunk-cost fallacy. I'm available for consulting when you need something done correctly.
I've been using LLMs to augment development since early December 2023. I've expanded the scope and complexity of the changes made since then as the models grew. Before beads existed, I used a folder of markdown files for externalized memory.
Just because you were late to the party doesn't mean all of us were.
This may be a result of me using the tools poorly, or more likely of me weighing merits that matter less than I think. But I don't think we can tell yet: people only just invented these agent workflows, and we haven't seen the results.
Note that the situation was not that different before LLMs. I've seen PMs with all the tickets set up, engineers making PRs with reviews, etc., and still no progress on the product. The process can be emulated without substantive work.
Not to say that there's no value in AI written code in these codebases, because there is plenty. But this whole thing where 6 agents run overnight and "tada" in the morning with production ready code is...not real.
Similarly, a lot of the AGI-hype comments exist to expand the scope of the space. It's not real, but it helps to position products and win arguments based on hypotheticals.
You can get extremely good results assuming your spec is actually correct (and you're willing to chew through massive quantities of tokens / wait long enough).
EDIT: fixed typo
The Bing AI summary tells me that AI companies invested $202.3 billion in AI last year. Users are going to have to pay that back at some point. This is going to be even worse as a cost control situation than AWS.
That’s not how VC investments work. Just because something costs a lot to build doesn’t mean that anyone will pay for it. I’m pretty sure I haven’t worked for any startup that ever returned a profit to its investors.
I suspect you are right in that inference costs currently seem underpriced, so users will get nickel-and-dimed for a while until the providers can extract a better margin per user.
Some of the players are aiming for AGI. If they hit that goal, the cost is easily worth it. The remaining players are trying to capture market share and build a moat where none currently exists.
Yes, currency is occasionally exchanged at a loss for power, but rarely without the expectation of more currency down the road.
But in any case, we're definitely coming up on the need for that.
Converging on an answer eventually is less interesting when you pay by the token and they start by (a real Opus 4.6 example from yesterday) adding a column to a database by editing an existing migration file.[1]
We're beyond a million monkeys with typewriters, but they can still be made out very clearly in the rear-view mirror.
[1] What would have happened here if I hadn't aborted this? Would it have cleaned it up? Who knows.
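For what it's worth, the non-destructive version of that change is a new migration file on top of the chain, not an edit to one that's already applied; roughly this, as an Alembic-style sketch with made-up revision ids and a made-up users table:

    """add nickname column to users (illustrative only)"""
    from alembic import op
    import sqlalchemy as sa

    # hypothetical revision ids; the already-applied migration stays untouched
    revision = "9f3c2a1b7d44"
    down_revision = "4e5d6c7b8a90"

    def upgrade():
        # the new column gets its own migration on top of the chain
        op.add_column("users", sa.Column("nickname", sa.String(64), nullable=True))

    def downgrade():
        op.drop_column("users", "nickname")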
All the more reason to catch them early; otherwise we have to wait even longer. In fact, hiding would be more defensible if the AI were less autonomous, right?
What fills the holes are best practices; what can ruin the result are wrong assumptions.
I don't see how full autonomy can work either, without checkpoints along the way.
And at the end of the day, it's not the agents who are accountable for the code running in production. It's the human engineers.
Still makes this change from Anthropic stupid.
If a single agent has a 1% chance of making a given incorrect assumption, the odds that a majority of 10 independent agents make that same incorrect assumption are far below 1%.
I can attest that it works well in practice, and my organization is already deploying this technique internally.
This is NOT the same as asking “are you sure?” The sycophantic nature of LLMs would make them biased on that. But fresh agents with unbiased, detached framing in the prompt will show behavior that is probabilistically consistent with the underlying truth. Consistent enough for teasing out signal from noise with agent orchestration.
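Back-of-the-envelope, and assuming the fresh agents' errors really are independent (which is the whole point of the detached framing), a quick sketch of why the aggregate signal is so much stronger than a single check:

    from math import comb

    def p_majority_wrong(p_err: float, n_agents: int) -> float:
        """Chance that a strict majority of n independent agents
        land on the same wrong assumption, each erring with p_err."""
        need = n_agents // 2 + 1
        return sum(
            comb(n_agents, k) * p_err**k * (1 - p_err) ** (n_agents - k)
            for k in range(need, n_agents + 1)
        )

    print(p_majority_wrong(0.01, 1))   # 0.01    -- a single agent
    print(p_majority_wrong(0.01, 10))  # ~2e-10  -- majority of 10 fresh agents

How much of that holds in practice depends on how independent the agents actually are; correlated blind spots eat into the gain.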
Even in that case they should still be logging what they're doing for later investigation/auditing if something goes wrong. Regardless of whether a human or an AI ends up doing the auditing.
As tedious as it is a lot of the time (and I wish there were an in-between "allow for this session", not just "allow once" or "allow all"), it's invaluable for catching when the model has tried to fix the problem in entirely the wrong project.
Working on a monolithic code-base with several hundred library projects, it's essential that it doesn't start digging in the wrong place.
It's better than it used to be, but the failure mode when it goes wrong can be extreme: I've come back to 20+ minutes of it going around in circles, frustrating itself because it ascribed the wrong meaning to an instruction.
How to comply with a demand to show more information by showing less information.
It's just a whole new world where words suddenly mean something completely different, and you can no longer understand programs by just reading what labels they use for various things; you also have to look up whether what they think "verbose" means matches the meaning you've built up an understanding of first.
EDIT: Ah, looks like verbose mode might show less than it used to, and you need to use a new mode (^o) to show very verbose.
I didn't know about the ^o mode though, so good that the verbose information is at least still available somewhere. Even though now it seems like an enormously complicated maneuver with no purpose.
I still think it’d be nice to allow an output mode for you folks who are married to the previous approach since it clearly means a lot to you.
Curious what plans you're using? Running 5 agents 24/7 would eat up several $200 subscriptions pretty fast.
What I think they are forgetting in this silly stubbornness is that competition is really fierce, and just as they have gained appreciation from developers, they might very quickly lose it because of this sort of stupidity (for no good reason).
You have to go into /models then use the left/right arrow keys to change it. It’s a horrible UI design and I had no idea mine was set to high. You can only tell by the dim text at the bottom and the 3 potentially highlighted bars.
On high, it would think for 30+ minutes and make a plan; then when I started the plan, it would either compact and reread all my files, or start fresh and read my files, then compact after 2-3 changes and reread the files.
High reasoning is unusable with Opus 4.6 in my opinion. They need at least 1M context for this to work.
Anthropic doesn't want you to be able to easily jump off Claude Code onto open code + an open-weight LLM.
If you rely on monitoring the behaviors of an individual coding agent to produce the output you want, you won't scale
I love the terminal more than the next guy but at some point it feels like you're looking at production nginx logs, just a useless stream of info that is very difficult to parse.
I vibe-coded my own ADE for this called OpenADE (https://github.com/bearlyai/openade). It uses the native harnesses, has nice UIs, and even comes with things like letting Claude and Codex work together on plans. It's still very beta but has been my daily driver for a few weeks now.
Hiding filenames turns the workflow into a black box. It's like removing the speedometer from a car because "it distracts the driver". Sure, it looks clean, but it's deadly for both my wallet and my context window.
Seems like this is the most probable outcome: the LLM gets to fix the issues uninterrupted while keeping the operator happy.
I guess that fell on deaf ears.
Yeah, I used to sit and read all of these (at one of the largest video game publishers - does that count?). 95% of them were "your game sucks", but we fixed many bugs thanks to the detailed descriptions people provided through that box.
" A GitHub issue on the subject drew a response from Boris Cherny, creator and head of Claude Code at Anthropic, that "this isn't a vibe coding feature, it's a way to simplify the UI so you can focus on what matters, diffs and bash/mcp outputs." He suggested that developers "try it out for a few days" and said that Anthropic's own developers "appreciated the reduced noise.""
Seriously man, whatever happened to configs that you can set once? They obviously realise people want it, given the control-o shortcut, so why make them do this over and over without a way to just configure it, or whatever the CLI does, like maybe:
./clod-code -v
or something. Man, I dislike these AI bros so much. They're always about "your personal preferences are wrong", but you know they're lying through their smirking teeth; they want you to burn tokens so the earth's habitability can die a few minutes earlier.
Two months ago, Claude was great for "here is a specific task I want you to do to this file". Today, they seem to be pivoting towards "I don't know how to code but want this feature" usage. Which might be a good product decision, but makes it worse as a substitute for writing the code myself.
Ultimately, the problem is the tool turning against the user. Maybe it is time to get a new tool.
https://news.ycombinator.com/item?id=9224
Or this pulls the exchange under the famous HN post itself:
If they were trying to create a better product, I'd expect them to just add the awesome option, not hide something that saves thousands of tokens and context when the model goes the wrong way.
The answer in both cases is: You don't. If it happens, it's because you sometimes make bad decisions, because it's hard to make good decisions.