I don't buy this. The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Compilation and bundling made a lot more sense before browsers got ES modules and HTTP/2. Today you can get a long way without a bundler... and in a world where LLMs are generating code that's actually a more productive way to work.
Telling any LLM "use Vanilla JS" is enough to break them out of the React cycle, and the resulting code works well and, crucially, doesn't require a round-trip through some node.js build mechanism just to start using it.
Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way. The problems React solve are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
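To illustrate, the kind of state-syncing routine I mean fits in a few lines of plain JS. This is a hypothetical minimal store sketch (names like `createStore` are made up here, not any real library's API):

```javascript
// Minimal observable store: holds state, notifies subscribers on change.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    get: () => state,
    // Merge a partial update into state and notify every subscriber.
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    // Registers a listener; returns an unsubscribe function.
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

// Usage: in a real page a subscriber would update a DOM node;
// here it just records what it saw.
const store = createStore({ count: 0 });
const seen = [];
store.subscribe((s) => seen.push(s.count));
store.set({ count: 1 });
store.set({ count: 2 });
```

That's the whole "framework": an LLM (or a patient human) can regenerate this in seconds, and the rest is wiring subscribers to DOM updates.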
Having a build step more than pays for itself just in terms of detecting errors without having to execute that code path. And the friction keeps shrinking: the compilation step is increasingly built into your project/dependency management tool and keeps getting faster (helped by the trend toward Rust and Go now that the required functionality is relatively well understood).
> The problems React solve are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split-second.
An LLM can probably generate the ad hoc, informally-specified, bug-ridden, slow implementation of half of React that every non-React application needs very quickly, sure. But can the LLM help you comprehend it (or fix bugs in it) any faster? That's always been the biggest cost, not the initial write.
To see how fast a properly engineered app can be when it avoids shitty JS frameworks, just look at Fastmail. The comparison with Gmail is almost comical: every UI element responds immediately, whereas Gmail renders at 5 fps.
> [literally unusable]
> [one of the most successful web apps]
> [look at how bad it is]
Your standards might be poorly calibrated to reality.
I use Gmail every day and it's fine, apart from when they push AI features I don't want, but I can't blame that on the framework.
We're all used to it and that's fine. But it's still bad. We're still wasting, like, 10,000x more resources than we should to do basic things, and stuff still only works, like, 50% of the time.
And IT IS SLOW, despite your experience, which is highly dependent on how much hardware you can throw at it.
> [literally unusable]
It's gotten a lot of critique over the complexity it has accumulated over the years, the same way Next.js has. I've also seen a frickload of render loops, and in some cases I think Vue just does hooks better (Composition API) and state management better (Pinia, closer to MobX than Redux). Meanwhile, Vue's SFC compiler doesn't seem to support TypeScript types properly, so if you use extends and need to create wrapper components around non-trivial libraries (e.g. PrimeVue), you're in for a bunch of pain.
I don't think any mainstream options are literally unusable, but they all kinda suck in subtly different ways. Then again, so did jQuery for anything non-trivial. Most back-end options also kind of suck, just in different ways (e.g. Spring Boot version upgrades across major versions and how verbose the configuration is, or Python's performance and its dependency management, at least before uv). The same could be said for DBs (PostgreSQL is pretty decent, MariaDB/MySQL has its hard edges) and pretty much everything else.
Doesn't mean you can't critique what's bad in hopes of things maybe improving a bit (that Spring Boot config is still better than Spring XML config). Gmail is mostly okay as is; then again, the standards for GUI software are so low they're on the floor - which also extends to Electron apps.
The past couple of weeks I've been seeing loading times of up to a minute to open Gmail.
No idea what they are up to. Loading "Google Workspace" or something like that takes eons.
The problem is that you've (and we all have) learned to accept absolute garbage. It's clearly possible to do better: smaller companies have managed to build well-functioning software that exceeds the performance of Google's slop by a factor of 50.
I'm not saying RETVRN to plain JS, but clearly the horrid performance of modern web apps has /something/ to do with the two frameworks they're all built on.
Tried a cleared-cache load: open and usable in 3 seconds, loading my work inbox, which is fairly busy and not clean.
I'm not sure what FPS has to do with this? Have you some sort of fancy Windows 11 animations extension installed that star-wipes from inbox to email view and stutters?
I click an email and it shows instantly; the only thing close to "low FPS" is that it loads in some styles for a calendar notification and there's a minor layout shift on the email.
What, and how, are you using it that you apparently get such piss-poor performance?
Nonsense. Apps from all frameworks and none show the same performance issues, and you can find exceptionally snappy examples from almost all frameworks too. Modern webapps are slow because the business incentives are to make them slow, the technology choices are incidental.
There was also a time where once a website or application loaded, scrolling never lagged. Now when something scrolls smoothly it's unusual, and I appreciate it. Discord has done a really good job improving their laggy scroll, but it's still unbelievably laggy for literal text and images, and they use animation tricks to cover up some of the lag.
Anyone doing serious HTML rendering with WebAssembly today A) has a build step, B) still has a bunch of JS to do memory-buffer FFI/IPC and decoding/encoding, and C) is usually using some form of virtual DOM on the Wasm side, while the JS side is some version of JSON-driven React/Preact-lite. Today it is no more efficient than React's build process, nor than React's runtime experience.
Anyone shipping production code will, one way or another, have some kind of build step, whether that's bundling, minification, typechecking, linting, fingerprinting files, etc. At that point it makes little difference if you add a build step for compilation.
I'm sympathetic to not wanting to deal with build processes; I try to avoid them where I can in my side projects. The main web project I've been working on for the last year has no build step and uses vanilla JS and web components. But it's also not a consumer-facing product.
I think there's friction for sure, but I just can't see this being an issue for most cases where a build step is already in place for other concerns. And developers are fairly familiar with build steps, especially if you do anything outside the web in C/C++, Java/C#, Rust, or whatever.
If you've got a huge project, even very quick bundlers will end up slowing down considerably (although hot reload should still be pretty quick because it still just affects individual files). But in general, bundlers are pretty damn quick these days, and getting even quicker. And of course, they're still fully optional, even for a framework like React.
As a recovering C++ programmer the idea that a basically instant compilation step is a source of friction is hysterical to me.
Try waiting overnight for a build to finish. Frontend devs don't know they're born. It takes like 5 minutes to set up vite.
React and TS people are making sure that is not the case anymore, allegedly for our own benefit.
Similarly with TypeScript, having worked with and without it, I get so much from it that is a no-brainer for me. But maybe I'm just in the pocket of Big TypeScript and this is more of that gaslighting you were worried about... ;)
I'd note that people learn and accumulate knowledge as new languages and frameworks develop, despite there being established practices. There is a momentum for sure, but it doesn't preclude development of new things.
E.g., if most developers are telling their LLMs “build me a react app” or “I want to build a website with the most popular framework,” they were going to end up with a react app with or without LLMs existing.
I’m sure a lot of vibecoders are letting Jesus take the wheel, but in my vibecoding sessions I definitely tend to have some kind of discussion about my needs and requirements before choosing a framework. I’m also seeing more developers talking about using LLMs with instructions files and project requirement documents that they write and store in their repo before getting started with prompting, and once you discover that paradigm you don’t tend to go back.
I think that while it may be easier to develop with LLMs in languages and frameworks the LLM may “know” best, in theory, models could be trained to code well in any language and could even promote languages that either the sponsoring company or LLM “prefers”.
(For the AI-sceptics, you can read this as models are equally bad at all code)
My actual long term hope is that in the future we won't need to think about frameworks at all: https://paul.kinlan.me/will-we-care-about-frameworks-in-the-...
Yes! That's exactly what I was trying to get at.
Is there something about the web - with its eternal backwards compatibility, crazy array of implementations, and three programming languages - that makes it the ideal platform for a framework-free existence?
Maybe if we bake all of the ideas into JavaScript itself, but then where does it stop? Is PHP done evolving? Does Java, by itself, do everything as well as you want out of Spring?
I sincerely doubt that either JSX's syntax or its semantics under React's transforms would make it into a W3C or WHATWG spec as they exist today.
Not exclusively. SolidJS, for example, transforms the syntax into string templates with holes in them. The "each element is a function call" approach works really well if those calls are cheap (i.e. with a VDOM), but if you're generating DOM nodes, you typically want to group all your calls together and pass the result to the browser as a string and let it figure out how to parse it.
For example, if you've got some JSX like:

    <div>
      <div>
        <span>{text}</span>
      </div>
    </div>

You don't want that to become nested calls to some wrapper around `document.createElement`, because that's slow. What you want instead is to do something like:

    const template = parseHtml(`
      <div>
        <div>
          <span></span>
        </div>
      </div>
    `);
    template.children[0].children[0].innerText = text;
This lets the browser do more of the hard parsing and DOM-construction work in native code, and makes everything a lot more efficient. And it isn't possible if JSX is defined to only have the semantics that it has in React.

It's really not slow. It might seem slow if you're using React's behavior, which re-invokes the "render function" any time anything changes. But eventually everything gets reconciled into the DOM, which creates the elements anyway. And most other codebases are not based on this reconciliation concept. So I don't think that's a given.
Also, there's no reconciliation happening here. In SolidJS, as well as in Vue in the new Vapor mode, and Svelte, the element that is returned from a block of JSX (or a template in Svelte) is the DOM element that you work with. That's why you don't need to keep rerendering these components - there's no diffing or reconciliation happening, instead changes to data are directly translated into updates to a given DOM node.
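The fine-grained model can be sketched without any DOM at all. This is a toy signal/effect pair, loosely in the spirit of SolidJS but not its actual API; all names here are made up:

```javascript
// Minimal fine-grained reactivity: an effect re-runs whenever a
// signal it read during its last run is written to.
let activeEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (activeEffect) subscribers.add(activeEffect); // track the dependency
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // push the update to dependents
  };
  return [read, write];
}

function createEffect(fn) {
  activeEffect = fn;
  fn(); // initial run registers dependencies
  activeEffect = null;
}

// Usage: in a real framework the effect body would be a direct DOM
// write like node.textContent = count(); here it just records runs.
const [count, setCount] = createSignal(0);
const log = [];
createEffect(() => log.push(count()));
setCount(1);
setCount(2);
// log is [0, 1, 2]: each write updated the dependent directly, no diffing.
```

There's no virtual DOM and no reconciliation pass anywhere in that loop, which is the whole point: writes map straight to the nodes that depend on them.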
But even if you don't need to worry about subsequent re-renders like with VDOM-based frameworks, you still need to worry about that initial render. And that goes a lot quicker if you can treat a JSX block as a holistic unit rather than as individual function calls.
The difference shrinks even further with `<template>`/`HTMLTemplateElement`: its secondary content document lets you build with `document.createElement`, and `document.importNode` is faster at cloning and adopting `template.content` into the main document than string parsing is.
I've got a work-in-progress branch in a library of mine that uses JSX to build HTMLTemplateElements directly with `document.createElement`, and right now `document.createElement` is the least of my performance concerns; there is no reason to build strings instead of elements.
(ETA: There are of course reasons to serialize elements to strings for SSR, but that's handy enough to do with a DOM emulator like JSDOM rather than need both an elements path and a string path.)
The library [0] I wrote that uses JSX converts expression attributes into parameterless lambdas before providing them as function parameters or object properties. This is different behavior from React's build tools or any of TypeScript's JSX options, but it's not inconsistent with the spec.
The space that the Babel/TypeScript JSX options describe is a constructive space for more than just React.
Unfortunately there isn't any one preferred alternative convention. But if you ignore this and roll your own, it will almost certainly be better. Not great for reading other people's code, but you can make your own files pretty clear.
If you're thinking of _Redux_, are you referring to the early conventions of "folder-by-type" file structures? ie `actions/todos.js`, `reducers/todos.js`, `constants/todos.js`? If so, there's perfectly understandable reasons why we ended up there:
- as programmers we try to "keep code of different kinds in different files", so you'd separate action creator definitions from reducer logic
- but we want to have consistency and avoid accidental typos, especially in untyped plain JS, so you'd extract the string constants like `const ADD_TODO = "ADD_TODO"` into their own file for reuse in both places
To be clear that was never a requirement for using Redux, although the docs did show that pattern. We eventually concluded that the "folder-by-feature" approach was better:
- https://redux.js.org/style-guide/#structure-files-as-feature...
and in fact the original "Redux Ducks" approach for single-file logic was created by the community just a couple months after Redux was created:
- https://github.com/erikras/ducks-modular-redux
which is what we later turned into "Redux slices", a single file with a `createSlice` call that has your reducer logic and generates the action creators for you:
- https://redux.js.org/tutorials/essentials/part-2-app-structu...
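For context, a "duck" co-locates the action type constant, the action creator, and the reducer in one file. Roughly, as a plain-JS sketch with no library involved (the hypothetical contents of a `todos.js` module):

```javascript
// A single "duck" module: action type, action creator, and reducer together.
const ADD_TODO = "todos/ADD_TODO";

// Action creator: call sites never touch the string constant directly,
// which avoids the typo problems of untyped plain JS.
function addTodo(text) {
  return { type: ADD_TODO, payload: text };
}

// Reducer: owns the state shape for this one feature.
function todosReducer(state = [], action) {
  switch (action.type) {
    case ADD_TODO:
      return [...state, { text: action.payload, done: false }];
    default:
      return state;
  }
}

// Usage without any store library: fold a list of actions over the reducer.
const actions = [addTodo("write docs"), addTodo("ship it")];
const state = actions.reduce(todosReducer, undefined);
```

`createSlice` automates exactly this: you write the reducer logic and it generates the type strings and action creators for you.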
I can't find the author making that argument. Can you point to where they're declaring that React has permanently won?
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
This is orthogonal to what the article is addressing.
> Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way
If you didn't read the article, I think you should, because this is generally the conclusion the author comes to: that in order to break out of React's grip, LLMs can be trained to use other frameworks.
So I guess I'm in agreement with the author: let's actively work to make that not happen.
Like, if you really believe that in the future 95% of code will be written by LLMs, then there can never be a Python 4, because there would be no humans to create new training data.
To me, this is evidence that LLMs won't be writing 95% of code, unless we really do get to some sort of mythical "AGI" where the AI can learn entirely from its own output and improve itself exponentially. (In which case there still wouldn't be a Python 4; it would be some indecipherable LLM-speak.) I'll believe that when I see it.
Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
For many languages you can fit all of that, including illustrative examples, in a few thousand tokens of text.
So ship your new programming language with a Claude Skills style document and give your early adopters the ability to write it with LLMs. The LLMs should handle that very well, especially if they get to run an agentic loop against a compiler or even a linter that you provide.
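Such a document can be tiny. A sketch of what the skills file for a made-up language might contain (everything below - the language, the commands, the syntax - is hypothetical, purely to show the shape):

```
# Using the Foo language

## Core syntax
let x = 1              # immutable binding
var y = 2              # mutable binding
fn add(a, b) { a + b } # last expression is the return value
for item in list { }   # iteration

## Memory management
Reference counted; no manual free, no borrow checker.

## Concurrency
Message-passing actors; `spawn fn` returns a handle you can `send` to.

## Toolchain loop
After every edit, run `fooc check FILE` and fix any reported errors
before moving on.
```

The last section is the important one for the agentic loop: it tells the model how to verify its own output against your compiler.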
I think that’s correct in terms of the surface-level details but less true for the more abstract concepts.
If you’ve tried any of the popular AI builders that use Supabase/PostgREST as a backend, for instance Lovable, you’ll see that they are constantly failing because of how unusual PostgREST is. I’m sure these platforms have “AI cheat sheets” to try to solve this, but you still see constant problems with things like RLS, for instance.
Is it better to specify the parameters and metrics (aka non-functional requirements) that matter for the application, and let LLMs decide? For that matter, why even provide that? Aren't the non-functional requirements generally understood?
It is the specifics that would change: scale to 100K monthly users, keep infrastructure costs below $800K, or integrate with existing Stripe APIs.
OK, it wasn't a Claude Skill, but it was done using Claude.
That said, part of what he did with Cursed was get LLMs to read its own documentation and use that to test and demonstrate the language.
Also, React was extremely popular before any LLMs were out there. I would not ascribe much of the growth to vibe coding.
It's fine. I've been using codex on some code bases with this with pretty alright results. I also use codex to generate typescript/react code. I'm getting similar results. I had a little wow moment when I asked it to add some buttons and then afterwards realized that it had figured out the localization framework (one of my creations) and added translations for the button labels. All unprompted. It clearly understood the code base and how I like things done. So it just went ahead and did them. The syntax is not a problem. The obscurity of the library is not a problem as long as you give it enough to work with. It does less well coding something from scratch than working on existing code.
IMHO, things like react are optimized for humans. They aren't actually that optimal for LLMs to work with. It's actually impressive that they can. Too much expressiveness and ambiguity. LLMs like things spelled out. Humans don't. We're still doing things manually so it helps if we can read and edit what the LLMs do. But that won't stay like that.
I think in a few years, we'll start seeing languages and frameworks that are more optimal for Agentic coding tools as they will be the main users. So, stronger typing. More verbosity and less ambiguity.
In front-end as well: I've been able to go much farther for simple projects using Alpine than with more complex frameworks. For big products I use Elm, which isn't exactly the most common front-end choice, but it provides a declarative programming style that forces the LLM to write more correct code faster.
In general, I think introspectible frameworks have a better case, and whether they're present in training data or not becomes more irrelevant as the introspectibility increases. Wiring the Elm compiler to a post-write hook means I basically have not written front-end code in 4 or 5 months. Using web standards and micro frameworks with no build step means the LLM can inspect the behaviour using the chrome dev tools MCP and check its work much more effectively than having to deal with the React loop. The ecosystem is so fragmented there, I'm not sure about the "quality because of quantity of training data" argument.
What I was trying to get at in the post is that net-new experiences are where I see a massive delta.
The 'LSP' that would allow new frameworks or languages to shine with coding agents is already mostly here, and it's things like hooks, MCPs, ACP, etc. They keep the code generation aligned with the final intent, and syntactically correct from the get go, with the help of very advanced compilers/linters that explain to the LLM the context it's missing.
That's without hypothesising about future model upgrades where fine-tuning becomes simple and cheap, and local, framework-specific models become the norm. Then React's advantage (its presence in the training data) becomes a toll (conflicting versions, fragmented ecosystem).
I also have a huge bias against the javascript/typescript ecosystem, it gives me headaches. So I could be wrong.
And LLMs can create idiomatic CRUD pages using it. I just needed to include one example in AGENTS.md
TypeScript, however, does scale pretty well. But now you've added a compiler and a bundler, and might as well use some framework.
I've written some pretty complicated vanilla JS and it works fine. I'm not dealing with other people's crappy code, however, so YMMV.
At this point there are several large Rust UI libraries that try to replicate this pattern in WebAssembly, and they all had enough time to appear and mature without the underlying JSX+hooks model becoming outdated. To me it's a clear sign that the JS world has slowed down.
Server-side components became a thing, as well as the React compiler. And libraries in the React (and JS at large) ecosystem are pretty liberal with breaking changes; a few months is enough to end up with multiple libraries that are out of date and whose upgrades require handling different breaking changes.
React Native is its own pit of hell.
It did slow down a little since a few years ago, but it's still not great.
React had just updated and the documentation hadn't.
I then discovered that Meta owns React so I got frustrated as hell with their obfuscation and ripped out all of the React and turned what was left into vanilla html+js.
I also don’t ‘KTH-Trust’ Meta of all corporations to have a compile step for a web technology.
I try to filter out such people in hiring nowadays but sometimes you miss, or come into an existing team with these issues
You don't buy what, exactly?
> As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
I think you and others in this thread have either just skimmed the article or only read the headline. The point isn't that you can't use LLMs for other languages; it's that the creators of these tools AREN'T using other languages for them. Yes, LLMs can write Angular. But if there's less data to train on, the results won't be as good. And because of this, it's creating a snowball effect.
To me, they don't buy the argument that the snowball effect is significant enough to overcome technical merits of different frontend frameworks.
And I'll add that older libraries like React have at least one disadvantage: there's a lot of outdated React code out there that AI is being trained on.
> there's a lot of outdated React code out there that AI is being trained on.
Yea, but that's better than no code as far as an LLM is concerned, which is what this article is about.
And specifically Svelte has their own MCP to help LLMs https://svelte.dev/docs/mcp/overview
I wonder if React has something to keep AI on their toes about best practices.
Ahh, I wouldn't hold my breath.
And to your point, I guess another thing Svelte has is its compatibility with just vanilla JS, meaning (I think) it doesn't necessarily have to be "Svelte" code to still work with Svelte.
But if fewer people are exposed to those frameworks, then surely that means they will be less popular? I'm struggling to understand your argument.
> The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game,
Of course it is. If I'm using React to build a site, I'm not using Svelte to build it. If fewer people are using a framework, there will be less funding. If more people use it, more money.
> I don't think it's sensible to extrapolate from current trends about LLM coding anyway.
The actual tools themselves are using React. Bolt, a UI-design LLM, uses React by default; I don't even think there's an option to use a different framework right now. These tools have taken over the industry and have absolutely exploded in popularity in the few years they've been available. This is going to create a snowball effect.
> This stuff is barely a few years old and we want to make confident predictions about it?
I don't think you read the article as closely as you think you did. Saying "React has probably spiked in popularity because LLMs use it by default" isn't that controversial. And it's true. And I don't think it's a long shot to say "if there's less data associated with a framework, it'll be less likely to be used by these tools, and then less likely to be used at all." In fact, it feels like a pretty obvious conclusion.
We can ignore what is clearly happening (which even as a React dev I don't want, because it WILL limit my future options) or work to make sure those tools offer other defaults.
I think LLMs, despite already being trained massively on React, can easily adapt their output to suit a new framework's specific API surface with a simple adjustment to the prompt. Maybe include an abbreviated list of type/function signatures specific to your new framework and just tell the LLM to use JSX for the views?
What I think will definitely be a challenge for new library authors in the age of LLMs is state management. There are already tons of libraries that basically achieve the same thing but have vastly different APIs. In this case, new library authors may be forced to just write pluggable re-implementations of existing libraries just to enable LLMs to emit compilable/runnable code. Though I don't know of any state management library that dominates the web the way React dominates the view layer.
That's what I did. https://mutraction.dev/
My framework has approximately zero users and this is not a plug, but the idea is sound and it works.
At the moment I still consider it a tool alongside all other tool, or else a business strategy next to e.g. outsourcing. My job didn't go away because there's 1000x more of them overseas. But likewise, it also didn't go away because there's individuals 1000x better (educated, developed, paid, connected) than me in the US.
This too shall pass.
Sure, it has a lot of staying power because of network effects (and qualities like backwards compatibility and gaming). But it's not a terminal, self-reinforcing snowball or force of nature like the article implies React is.
Probing client requirements is such a high-value skill. Most freelancers will just build what the client asks ("ok, here's a React frontend with a Postgres DB and a CRUD app for your ecommerce website") instead of asking what the functional requirements are: maybe it can be a Shopify thing, or just a listing on Amazon, or maybe a plain HTML app (optionally with JS).
It can be valid to ask for a brick house if you know what the other ways to build a house are. But if you just asked ChatGPT for a house plan and it said "bricks" because it's the most housey thing, and you said ok because it rings a bell and sounds housey, then having a dev who asks and tells you about wooden houses or steel beams or concrete is the best thing that can happen.
I appreciate when it happens the other way around, I go to a lawyer and tell them I want a corp, they start off assuming I know my shit, and after 5 minutes we are like, oh I don't want a corp
CoffeeScript helped JavaScript evolve the right way, so in retrospect it was absolutely a good thing. It's like people here don't remember the days of ES3 or ES5...
And these days? Look at TypeScript right now: TypeScript is not JavaScript.
TL;DR: sometimes you need to make an alternative to get the original to move.
It will be interesting to see how durable these biases are as labs work towards developing more capable small models that are less reliant on memorized information. My naive instinct is that these biases will be less salient over time as context windows improve and models become increasingly capable of processing documentation as a part of their code writing loop, but also that, in the absence of instruction to the contrary, the models will favor working with these tools as a default for quite some time.
Worse, with LLMs easily generating boilerplate, there's less pressure to make old-framework code concise or clear, and the superior usability of a new framework won't be as big a draw.
But coding is a primary application/profit center, and you can be sure they'll reduce the latency between release and model support, and they'll start to emphasize/suggest new frameworks/paradigms as a distinguishing feature.
My concern is about gaming the system, like SEO. If LLM coding is the gatekeeper, they'll be corrupted by companies seeking access/exposure. Developer uptake used to be a reasonable measure of quality, but in this new world it might only reflect exposure.
More broadly, obviously there is some pressure to use a framework/library/programming language/editor that has better LLM training. But even before LLMs, you'd want to choose the one with more SO questions and more blog posts and books published about it - the one where you can hire experienced programmers.
New players have a certain activation energy they need to overcome - which is probably good, because it slows down the churn of shiny new things with only incremental improvements. I think a paradigm shift is sufficient, though. Programmers like shiny new things - especially the good ones who are passionate about their craft.
I absolutely wouldn't be swapping because the output 'isn't good enough'.
Now, what about the incentives? Probably lower inference costs for LLMs, which probably means the code is more legible for humans than the current state of the art as well.
Fewer API changes than, say, React also means the generated code has less branching, although LLMs can adapt anyway. Cheaper.
It will probably be closer to the platform too (vanilla JS).
The common factor is the reader, taking what the search engine, the SO commenter, or the AI says as gospel. A good software developer can judge multiple inputs on their own.
And if someone doesn't care what an AI does it really isn't important what they are having it build or what tool it uses, clearly.
Should have made graphs testing LLMs with different frameworks.
I had to ditch the whole thing and rewrite it in Vue when it got big enough that I couldn’t debug it without learning React.
Vibe-coding something in a stack you know or want to know means you can get off your high horse and peek into the engine.
I still agree with the sentiment that React is winning, if the competition is one of volume. But other frameworks won't stop existing unless you believe that people exclusively choose what is dominant. There will always be artisans, even after all the old people who learned the alternatives are flushed out.
In the meantime real engineers still use the proper tools.