I shared my LLM predictions last week, and one of them was that by 2029 "Someone will build a new browser using mainly AI-assisted coding and it won’t even be a surprise" https://simonwillison.net/2026/Jan/8/llm-predictions-for-202... and https://www.youtube.com/watch?v=lVDhQMiAbR8&t=3913s
This project from Cursor is the second attempt I've seen at this now! The other is this one: https://www.reddit.com/r/Anthropic/comments/1q4xfm0/over_chr...
> There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.
Firstly, the CI is completely broken: the tests have failed on every commit. And looking closely at the code, it is exactly what you'd expect: unmaintainable slop.
Having more lines of code is not a good measure of robust software, especially if it does not work.
I'm not sure the approach of "completely autonomous coding" is the right way to go. I suspect we'll use these tools more effectively if we think of them as something a human uses to accomplish a task, and lean into letting the human drive, because otherwise quality spirals out of control so quickly.
Even their README is kind of crappy. Ideally you want installation instructions right near the top, but it's broken into multiple files. The README link that says "running + architecture" (pointing to a file actually called browser_ui.md???) is hard to follow. There is no explicit list of dependencies, and again no explanation of how JavaScript execution works, or how rendering works, really.
It's impressive that they got such a big project to be built by agents and to compile, but this codebase... feels like AI slop, and you couldn't pay me to maintain it. You could try to get AI agents to maintain it, but my prediction is that past some scale they would have a hard time figuring out their own mess. You would just be left with permanent bugs you can't easily fix.
I can't shake the feeling that simply being shameless about copy-paste (i.e. copyright infringement) would let existing tools do much the same faster and more efficiently. Download Chromium, search-replace 'Google' with 'ME!', run Make… if I put that in a small app, someone would explain that it's actually solvable as a bash one-liner.
There's a lot of utility in better search and natural language interactions. The siren call of feedback loops plays with our sense of time and might be clouding our sense of progress and utility.
> Today's agents work well for focused tasks, but are slow for complex projects.
What does "slow" mean? Slower than humans? In need of faster GPUs? What does it even imply? Too slow to produce the next token? Too slow over repeated attempts to be usable? In need of human intervention?
This piece was written to keep the bubble inflating further.
The implied future here is _unreal cool_. Swarms of coding agents that can build anything, with little oversight. Long-running projects that converge on high-quality, complex projects.
But the examples feel thin. Web browsers, Excel, and Windows 7 exist, and they specifically exist in the LLM's training sets. The closest to real code is what they've done with Cursor's codebase... but it's not merged yet.
I don't want to say "call me when it's merged." But I'm not worried about agents' ability to produce millions of lines of code. I'm worried about their ability to intersect with the humans in the real world, both as users of that code and as developers who want to build on top of it.
In my experience agents don't converge on anything. They diverge into low-quality monstrosities which at some point become entirely unusable.
This is how I think about it. I care about asymptotics. Which initial conditions (model(s) × workflow/harness × input text artefacts) cause convergence to the best steady state? The number of lines of code doesn't have to grow; it could also shrink. It's about the best output.
Because it is absolutely impossible to review that code, and there are a gazillion issues in it.
The only way it can get merged is to YOLO it and then fix issues in prod for months, which kinda defeats the purpose and brings the gains close to zero.
But what would be the point of re-creating existing applications? It would be useful if you could produce a better version of those applications. But the point of this experiment was to produce something "from scratch", I think. Impressive, yes, but is it useful?
A more practically useful task would be for Mozilla Foundation and others to ask AI to fix all bugs in their application(s). And perhaps they are trying to do that, let's wait and see.
I started looking in the Cargo.toml to at least get an idea of how the project was constructed. I saw there that, rather than being built from scratch as the post seemed to imply, almost every core component was simply pulled in from an open source library: the quickjs engine, wgpu graphics, winit windowing & input, egui for UI, HTML parsing, the list goes on. On Twitter their CEO explicitly stated that it uses a "custom js vm", which seemed particularly misleading / untrue to me.
Integrating all of these existing components is still super impressive for these models to do autonomously, so I'm just at a loss how to feel when they do something impressive but then feel the need to misrepresent so much. I guess I just have a lot less respect and trust for the Cursor leadership, but maybe a little relief knowing that soon I may just generate my own custom Cursor!
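For a sense of what that kind of manifest looks like, here is a hypothetical sketch of a dependency section assembled from existing crates. This is illustrative only, not the actual fastrender Cargo.toml; the crate names are plausible picks for the components listed above:
```
# Hypothetical sketch -- NOT the actual fastrender Cargo.toml.
[dependencies]
rquickjs = "*"   # QuickJS JavaScript engine bindings (the "custom js vm"?)
wgpu = "*"       # GPU graphics
winit = "*"      # windowing & input
egui = "*"       # immediate-mode UI
html5ever = "*"  # HTML parsing
```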
https://github.com/dioxuslabs/blitz
Maybe we ended up in the training data!
Take a screenshot, take it to your manager / investor, and make a presentation: “Imagine what is now possible for our business”.
Get promoted / exit, move to other pastures and let them figure it out.
It's hard to avoid the impression that this is an unverified pile of slop that may have actually never worked.
The CI process certainly hasn't succeeded for the vast majority of commits.
Baffling, really.
I think any large piece of software with well-established standards and test suites can be quickly rewritten and optimized by coding agents.
[1] https://github.com/sberan/tjs
[2] /spawn-perf-agents claude command: https://github.com/sberan/tjs/blob/main/.claude/commands/spa...
> Another experiment was doing an in-place migration of Solid to React in the Cursor codebase. It took over 3 weeks with +266K/-193K edits. As we've started to test the changes, we do believe it's possible to merge this change.
In my view, this post does not go into sufficient detail or nuance to warrant any serious discussion, and the sparseness of info mostly implies failure, especially in the browser case.
It _is_ impressive that the browser repo can do _anything at all_, but if there was anything more noteworthy than that, I feel they'd go into more detail than volume metrics like 30K commits, 1M LoC. For instance, the entire capability on display could be constrained to a handful of lines that delegate to other libs.
And, it "is possible" to merge any change that avoids regressions, but the majority of our craft asks the question "Is it possible to merge _the next_ change? And the next, and the 100th?"
If they merge the MR they're walking the walk.
If they present more analysis of the browser it's worth the talk (not that useful a test if they didn't scrutinize it beyond "it renders")
Until then, it's a mountain of inscrutable agent output that manages to compile, and that contains an execution pathway which can screenshot apple.com by some undiscovered mechanism.
By far the most useful metric would be a live system running for a year with widespread usage that produces fewer bugs than a comparable codebase created by humans.
Until that happens, my skeptic hat will remain firmly on my head.
But is this actually true? They don't say that as far as I can tell, and it also doesn't compile for me, nor in their own CI it seems.
If you can't reproduce or compile the experiment then it really doesn't work at all, and it's nothing but a hype piece.
I guess something probably compiled at some point, but cba to try to find that commit. They should've left it in a better state before doing that blog post.
I do use AI heavily, so I resorted to actually turning on warnings-as-errors in the Rust codebases I work in.
It is also close to impossible to run anything in the Node ecosystem without getting a wall of warnings.
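For reference, two common ways to get the warnings-as-errors behavior mentioned above in a Rust codebase (illustrative; the commenter doesn't say which mechanism they use):
```
// 1. A crate-level attribute at the top of src/main.rs or src/lib.rs
//    promotes every warning to a hard compile error:
#![deny(warnings)]

fn main() {
    // With the attribute above, unused variables, dead code, deprecated
    // APIs, etc. all fail the build instead of printing a warning.
    println!("warnings are errors in this crate");
}

// 2. Or, without touching the source, for a whole build / CI run:
//    RUSTFLAGS="-D warnings" cargo build
```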
You are an extreme outlier for putting in the work to fix all warnings.
What is `FrameState::render_placeholder`?
```
pub fn render_placeholder(&self, frame_id: FrameId) -> Result<FrameBuffer, String> {
    let (width, height) = self.viewport_css;
    let len = (width as usize)
        .checked_mul(height as usize)
        .and_then(|px| px.checked_mul(4))
        .ok_or_else(|| "viewport size overflow".to_string())?;

    if len > MAX_FRAME_BYTES {
        return Err(format!(
            "requested frame buffer too large: {width}x{height} => {len} bytes"
        ));
    }

    // Deterministic per-frame fill color to help catch cross-talk in tests/debugging.
    let id = frame_id.0;
    let url_hash = match self.navigation.as_ref() {
        Some(IframeNavigation::Url(url)) => Self::url_hash(url),
        Some(IframeNavigation::AboutBlank) => Self::url_hash("about:blank"),
        Some(IframeNavigation::Srcdoc { content_hash }) => {
            let folded = (*content_hash as u32) ^ ((*content_hash >> 32) as u32);
            Self::url_hash("about:srcdoc") ^ folded
        }
        None => 0,
    };
    let r = (id as u8) ^ (url_hash as u8);
    let g = ((id >> 8) as u8) ^ ((url_hash >> 8) as u8);
    let b = ((id >> 16) as u8) ^ ((url_hash >> 16) as u8);
    let a = 0xFF;
    let mut rgba8 = vec![0u8; len];
    for px in rgba8.chunks_exact_mut(4) {
        px[0] = r;
        px[1] = g;
        px[2] = b;
        px[3] = a;
    }
    Ok(FrameBuffer {
        width,
        height,
        rgba8,
    })
}
```
What is it doing in these diffs?
https://github.com/wilsonzlin/fastrender/commit/f4a0974594e3...
I'd be really curious to see the amount of work/rework over time, and the token/time cost for each additional actual completed test case.
I'm very bullish on LLMs building software, but this doesn't mean the death of software products any more than 3D printers meant the death of factories.
The boys are trying to single-shot a browser when even a moderately complex task can derail a repo. There's not much info, which might be deliberate, but from what I can pick up, their value-add was "distributed computing and organisational design", and they simplified even that. I agree that simplicity is always the first option, but a flat filesystem structure without standards will not work. Period.
I can create a web browser in under a minute in Copilot if I ask it to build a WinForms project that embeds the WebView2 "Edge" component and just adds an address bar and a back button.
If one vulnerability exists in those crates, well, that's that.
If AI could reach the point where we actually trusted the output, then we might stop checking it.
AI coding agents are still a huge force-multiplier if you take this approach, though.
It would be walking the motorcycle.
All code interactions happen through agents.
I suppose the question is whether agents only produce Swiss-cheese solutions at scale, with no way to fill in those gaps (at scale). If so, then yeah, fully agentic coding is probably a pipe dream.
On the other hand, if you can stand up a code generation machine where it's watts + GPUs + time => software products, then well... it's only a matter of time until app stores entirely disappear or get really weird. It's hard to fathom the change that's coming to our profession in that world.
It turns out to matter a whole lot less than you would expect. Coding agents are really good at using grep and writing out plans to files, which means they can operate successfully against way more code than fits in their context window at any one time.
Interestingly, recently it seems to me like codex is actually compressing early and often so that it stays in the smarter-feeling reasoning zone of the first 1/3rd of the window, which is a neat solution for this, albeit with the caveat of post-compression behavior differences cropping up more often.
Presumably the security and validation of code still needs work, I haven't read anything that indicates those are solved yet, so people still need to read and understand the code, but we're at the "can do massive projects that work" stage.
Division of labor and planning and hierarchy are all rapidly advancing, the orchestration and coordination capabilities are going to explode in '26.
Who created those agents, and who gives them the tasks to work on? Who created the tests? AI, or humans?
Of course it creates bottlenecks, since code quality takes time and people don’t get it right on the first try when the changes are complex. I could also be faster if I pushed directly to prod!
Don’t get me wrong. I use these tools, and I can see the productivity gains. But I also believe the only way to achieve the results they show is to sacrifice quality, because no software engineer can review the changes at the same speed the agent generates code. They may solve that problem, or maybe the industry will change so only output and LOC matter, but until then I will keep cursing the agent until I get the result I want.
Things like integration creating bottlenecks, or a lack of consistent top-down direction leading to small risk-averse changes instead of bold redesigns. All things I’ve seen before.
(Or are they?)
Sometimes workers will task other workers and act as a planner if the task is more complex.
It’s a good setup but it’s nothing like Claude Code.
> Our mission is to automate coding
The tokens were “expensive” from the minds of humans …
I think this situation emphasizes the importance of (something like) Agile. Producing something useful can only happen via experimentation, getting feedback from actual users, and iterating relentlessly.
https://taonexus.com/publicfiles/jan2026/172toy-browser.py.t... (turn the sound down, it's a bit loud if you interact with the built-in Tetris clone.)
You can run it after installing the packages, "pip install requests pillow urllib3 numpy simpleaudio"
I livestreamed the latest version here 2 weeks ago, it's a ten minute video:
https://www.youtube.com/watch?v=4xdIMmrLMLo&t=45s
I'm posting from that web browser. As an easter egg, mine has a cool Tetris clone (called Pentrix) based on pieces with 5 segments; the button for this is at the upper right.
If you have any feature suggestions for what you want in a browser, please make them here: