Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively. It’s also a nice way to share code so we’re guaranteed that features across web and desktop have the same look and feel. Finally, Claude is great at it.
That said, engineering is all about tradeoffs and this may change in the future!
I tried the desktop app and was shocked at the performance. Conversations would take a full second to load, making rapid switching intolerable. Kicking off a new task seems to hang for multiple seconds while, I'm assuming, the process spins up.
I wanted to try a disposable-conversation-per-feature workflow with git worktree integration for an hour to see how it contrasted, but couldn't even make it ten minutes without bailing back to the terminal.
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful, and involved UIs most of the people commenting in this thread are going to use (are ever going to use, likely) are web stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
From the person you're responding to:
> I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite.
Pretty clearly they're not saying that this is a necessary property of Electron.
Maybe you can name competing native apps of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
It's over, sorry.
Which is still quite the statement, and damn the video is intolerable. But the full quote still feels a little different than how you put it here.
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
This is an important lesson to watch what people do, not what they say.
Surely, it would be a flex to show that your AI agents are so good they make electron redundant.
But they don’t. So it’s reasonable to ask why that is.
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
As for others, Microsoft is saying they’re porting all C/C++ code to Rust with a goal of 1m LOC per engineer per month. This would largely be done with AI.
https://www.thurrott.com/dev/330980/microsoft-to-replace-all...
If coding is a solved problem and there is no need to write code, does the language really matter at that point?
If 1 engineer can handle 1m LOC per month, how big would these desktop apps be where maintaining native code becomes a problem?
if that's the case, why don't you just ask it to "make it not shit"?
With your context and understanding of the coding agent's capabilities and limitations, especially Opus 4.6, how do you see that going?
The sheer speedup for all users will show everyone why vibe coding is the future. After all, coding is a solved problem.
Migrating the system would be the easier part in that regard, but they'll still need a JS UI unless they develop multiple teams to spearhead various native GUIs (which is always an option).
Almost every AI chat framework/SDK I've seen is some React or JS stuff. Or even agent stuff like llamaindex.ts. I have a feeling AI is going to reinforce React more than ever.
Could you visualize the user's usage? For example, like a glass of water that is getting emptier the more tokens are used, and gets refilled slowly.
Because right now I have no clue when I will run out of credits.
Thanks!
You guys just added it too, so yeah!
- Using a stack your team is familiar with still has value
- Migrating the codebase to another stack still isn’t free
- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.
- Coding agents are better at certain stacks than others.
Like you said any of these can change.
It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind-bending progress in a short span, and 2) code is in many ways still not free.
It's the fastest way to iterate because Electron is the best cross platform option and because LLMs are likely trained on a lot of HTML/Javascript.
Which is why Claude is great at it.
So the model is not a generalised AI then? It is just a JS stack autocomplete?
I'm glad to see this coming from a company that is so popular these days.
Thanks!
> more performant
I found the problem.
I can see it in my team. We've all been using Claude a lot for the last 6 months. It's hard to measure the impact, but I can tell our systems are as buggy as ever. AI isn't a silver bullet.
When devs outsource their thinking to AI, they lose the mental map, and without it, control over the entire system.
But I don’t get how they code at Anthropic when they say that almost all their new code is written by LLMs.
Do they have some internal much smarter model that they keep in secret and don’t sell it to customers? :)
When is the last time you had an on call blow up that was actually your code?
Not that I’m some savant of code writing — but for me, pretty much never. It’s always something I’ve never touched that blows up on my Saturday night when I’m on call. Turns out it doesn’t really change much if it’s Sam who wrote it … or Claude.
It means Sam is 7 beers deep on Saturday night since you’re the one on call. He’s not responding to your slack messages.
Claude actually is there though, so that’s kind of nice.
Claude is there as long as you're paying, and I hope he doesn't hallucinate an answer.
Emphasis mine.
> Claude is there as long as you're paying
If you’re at a company that doesn’t pay for AI in the year 2026, you should find a new company.
> and I hope he doesn't hallucinate an answer.
Unlike human coworkers with a 100% success rate, naturally.
There is a difference between a lector and an author
In sufficiently complicated systems, the 10xer who knows nothing about the edge cases of state could do a lot more damage than an okay developer who knows all the gotchas. That's why someone departing a project is such a huge blow.
Reading code is different when you're also a writer of it rather than purely a reader.
It’s like only reading/listening to foreign language without ever writing/speaking it.
Use AI as a sanity check on your thinking. Use it to search for bugs. Use it to fill in the holes in your knowledge. Use it to automate grunt work, free your mind and increase your focus.
There are so many ways that AI can be beneficial while staying in full control.
I went through an experimental period of using Claude for everything. It's fun but ultimately the code it generates is garbage. I'm back to hand writing 90% of code (not including autocomplete).
You can still find effective ways to use this technology while keeping in mind its limitations.
It’s easy to see the immediate speed boost, it’s much harder to see how much worse maintaining this code will be over time.
What happens when everyone in a meeting about implementing a feature has to say “I don’t know we need to consult CC”. That has a negative impact on planning and coordination.
An engineer should be code reviewing every line written by an LLM, in the same way that every line is normally code reviewed when written by a human.
Maybe this changes the original argument from software being “free”, but we could just change that to mean “super cheap”.
I disagree.
Instead, a human should be reviewing the LLM generated unit tests to ensure that they test for the right thing. Beyond that, YOLO.
If your architecture makes testing hard, build a better one. If your tests aren't good enough, make the AI write better ones.
If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.
If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.
As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.
You just hit the nail on the head.
LLMs are stochastic. We want deterministic code. The way you get that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.
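As a concrete sketch of one such deterministic gate (a hypothetical helper; a real pipeline would also run a linter and the project's test suite), assuming Python: parse the generated source and reject AST patterns you never want to accept, regardless of what the model says.

```python
import ast

# Call names we refuse to accept in generated code, checked
# deterministically on the AST instead of by re-prompting the model.
BANNED_CALLS = {"eval", "exec"}

def validate_generated(source: str) -> list[str]:
    """Return a list of deterministic violations found in generated source."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"does not parse: {err.msg}"]
    problems = []
    for node in ast.walk(tree):
        # Reject dynamic-execution calls outright.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            problems.append(f"banned call: {node.func.id}")
        # Reject bare `except:` blocks that silently swallow everything.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append("bare except")
    return problems

print(validate_generated("x = eval(input())"))            # flags the eval call
print(validate_generated("def f(n):\n    return n + 1"))  # no violations
```

The check is the same on every run for the same input, which is exactly the property the model itself can't give you.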
You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.
You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.
Just read the code.
But once you figure that out, it's pretty effective.
And now the comments are "If it is so great why isn't everything already written from scratch with it?"
For my own work I've focused on using the agents to help clean up our CI/CD and make it more robust, specifically because the rest of the company is using agents more broadly. Seems like a way to leverage the technology in a non-slop-oriented way.
Of course the answer is all the things that aren't free, refinement, testing, bug fixes, etc, like the parent post and the article suggested.
https://imgur.com/gallery/i-m-stupid-faster-u8crXcq
(sorry for Imgur link, but Shen's web presence is a mess and it's hard to find a canonical source)
I'm not saying this is completely the case for AI coding agents, whose capabilities and trustworthiness have seen a meteoric rise in the past year.
They just have a lot of users doing QA too, and ignore any of their issues like true champs.
Not to say that you don't review your own work, but it's good practice for others (or at least one other person) to review it/QA it as well.
But ignoring that, if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate. Our understanding of ourselves is so limited that we might as well be magic.
Well, ignoring the whole literal replication thing humans do.
When you merge them into one it's usually a cost saving measure accepting that quality control will take a hit.
I've been coding an app with the help of AI. At first it created some pretty awful unit tests and then over time, as more tests were created, it got better and better at creating tests. What I noticed was that AI would use the context from the tests to create valid output. When I'd find bugs it created, and have AI fix the bugs (with more tests), it would then do it the right way. So it actually was validating the invalid output because it could rely on other behaviors in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
[0] https://github.com/OrangeJuiceExtension/OrangeJuice/actions/...
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
If you're in tech leadership it is your responsibility to make it extremely clear to execs that there is a trade-off being made here. If everyone is going in that direction with eyes wide open then the trade-offs are great.
Edit: The title of the post originally started with "If code is free,"
it just means that it might be free for my owner to adopt me, but it sure as hell aint free for them to spoil me
- AI bad
- JavaScript bad
- Developers not understanding why Electron has utility because they don't understand the browser as a fourth OS platform
- Electron eats my RAM, oh no (posted from my 2GB ThinkPad)
You mean incongruent styles? As in, incongruent to the host OS.
There is no doubt electron apps allow the style to be consistent across platforms.
Compare to other software on Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use); those are a delight to work with and give me the computing experience I was looking for when buying a Mac.
Xcode is usually the first example that comes to mind of a terrible native app in comparison to the much nicer VSCode.
Code is not the cost. Engineers are. Bugs come from hindsight not foresight. Let’s divide resources between OSs. Let all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it’s just web!) don’t incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. Often, rarely. What apps? I’d say rarely, often. People code bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64GB of RAM on my Mac Studio, Claude desktop is still slow! Good Electron apps exist; it’s just an interesting note given recent spec-driven development discussion.
>There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
A few hundred megabytes to a few gb sounds like an end user problem. They can either make room or not use your application.
You can easily buy a laptop for around 400 USD that will run Claude code just fine, along with several other electron apps.
Don't get me wrong, native everything ( which would probably mean sacrificing Linux support) would be a bit better, but it's not a deal breaker.
Should they have re-written Chromium too?
Projects with much smaller budgets than Anthropic's have achieved much better cross-platform UI without relying on Electron [1]. There are more sensible options like Qt and whatnot for rendering UIs.
You can even engineer your app to have a single core with all the business logic as a single shared library, then write UI wrappers using SwiftUI, GTK, and whatever Microsoft feels like putting out as its current UI library (I think currently it's WinUI 2), consuming the core to do the interesting bits.
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
[1] - https://musescore.org/en
[2] - https://www.gpui.rs
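A minimal sketch of that shared-core shape (all names hypothetical; Python standing in for what would really be a compiled shared library): the business logic lives in one place, and each platform ships only a thin presenter over it.

```python
# Hypothetical sketch: one platform-neutral core owns all business logic.
class ChatCore:
    def __init__(self):
        self._messages = []

    def send(self, text: str) -> str:
        self._messages.append(("user", text))
        reply = f"echo: {text}"  # stand-in for the real model call
        self._messages.append(("assistant", reply))
        return reply

    def transcript(self):
        return list(self._messages)


# Each platform wrapper (SwiftUI, GTK, WinUI, ...) stays this thin: it owns
# widgets and event wiring, and delegates everything else to the core.
class PlainTextUI:
    def __init__(self, core: ChatCore):
        self.core = core

    def on_submit(self, text: str) -> str:
        # A native wrapper would render this; here we just format it.
        return f"> {text}\n{self.core.send(text)}"


ui = PlainTextUI(ChatCore())
print(ui.on_submit("hello"))  # "> hello" followed by the core's reply
```

In practice the core would be compiled to a C-ABI shared library (e.g. from Rust) so each native UI can bind to the same binary, which is the expensive part the commenter acknowledges: one core, N thin frontends.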
What really am I to conclude by the mere fact that they used electron? The AI was not so magical that it overcame sense?
Am I to imagine that the fact that they advertise AI coding means I therefore have a window into their development process and their design choices?
I just think the notion is much sillier than all of us seem to be treating it.
Maybe their dog food isn't as tasty as they want you to believe.
And therefore, what?
That’s what’s missing and I think we should just be clear on: it is a design choice to choose electron over writing a native app.
If it really is a design choice then it's a bad decision imo.
We can all talk about how this or that app should be different, but the idea is "electron sux => ????? "
Why should I care that they didn't rebuild the desktop app I don't use. Their TUI is really nice.
It's pretty easy to argue your point if you pick a strawman as your opponent.
They have said that you can be significantly more productive (which seems to be the case for many) and that most of their company primarily uses LLMs to write code and no longer writes it by hand. They also seem to be doing well w.r.t. the competition.
There are legitimate complaints to be made against LLMs, pick one of them - but don't make up things to argue against.
You can use those expensive engineers to build more stuff, not rewrite old stuff
Why create Linux when UNIX exists?
Why create Firefox when Internet Explorer exists?
Why Create a Pontiac when Ford exists?
Why do anything you think can be done better when someone else has done it worse?
All technology choices are about trade-offs, and while our desktop app does actually include a decent amount of Rust, Swift, and Go, I understand the question - it comes up a lot. Why use web technologies at all? And why ship your own engine? I've written a long-form version of answers to those questions here: https://www.electronjs.org/docs/latest/why-electron
To us, Electron is just a tool. We co-maintain it with a bunch of excellent other people but we're not precious about it - we might choose something different in the future.
If as your CEO says “coding is largely solved”, why is this the case?
Or is your CEO wrong and coding is not largely solved?
Given how much they pay their developers, the Claude app probably cost at least 2, and likely 3, orders of magnitude more to build.
If their AI could do the same for $2m, they'd definitely do that any day.
Also if you haven't heard, disk space is no longer as cheap, and RAM is becoming astoundingly expensive.
Tauri's story with regards to the webview engine on Linux is not great.
I've been building a native macOS/iOS app that lets me manage my agents. Both the ability to actually control/chat fully from the app and to just monitor your existing CLI sessions (and/or take 'em over in the app).
Terrible little demo as I work on it right now w/claude: https://i.imgur.com/ght1g3t.mp4
iOS app w/codex: https://i.imgur.com/YNhlu4q.mp4
Also has a Rust server that backs it so I can throw it anywhere (container, Pi, etc.) and then connect to it. If anyone wants to see it (though I have seen at least 4 other people doing something similar): https://github.com/Robdel12/OrbitDock
Computers have gotten orders of magnitude faster since 2016, but using mainstream apps certainly doesn't feel any faster. Electron and similar frameworks do offer appealing engineering tradeoffs, but they are a main culprit of this problem.
Sure, the magnitude of RAM/compute "waste" may have grown from kB to MB, but inefficiency is still inefficiency - no matter how powerful the machine it's running on is.
Claude is going to help mostly with code, much less with design. It might help to accelerate integration, if the application is simple enough and the environment is good enough. The fact is, going cross-platform native trebles effort in areas that Claude does not yet have a useful impact.
A native app is the wrong abstraction for many desktop apps. The complexity of maintaining several separate codebases likely isn't worth the value add, especially for a company hemorrhaging money the way Anthropic does.
I hope the prevalence of AI coding agents might lead to a bit of a revival of RAD tools like Lazarus, which seem to me to have a good model for creating cross-platform apps.
If only AI had more Liquid Glass, lol
Also AI is better at beaten path coding. Spend more tokens on native or spend them on marketing?
Then what?
Not saying I'm not using AI - because I am. I'm using it in the IDE so I can stay close to every update and understand why it's there, and disagree with it if it shouldn't be there. I'm scared to be distanced from the code I'm supposed to be familiar with. So I use the AI to give me superpowers but not to completely do my job for me.
We'll see, I guess...
- Unlike Qt, it's free for commercial use.
- I don't know of any other userland GUI toolkit/compositor that isn't a game engine (Unity/Unreal/etc.).
clearly the code isn’t free and writing for raw win32 is painful.
We should refuse to accept coding agents until they have fully replaced chromium. By that point, the world will see that our reticence was wisdom.
I guess I don't understand how people don't see something like 20k + an engineer-month producing CCC as the actual flare being shot into the night that it is. Enough to make this penny ante shit about "hurr hurr they could've written a native app" asinine.
They took a solid crack at GCC, one of the most complex things *made by man*, armed with a bunch of compute, some engineers guiding a swarm, and some engineers writing tests. Does it fail at key parts? Yes. Is it a MIRACLE and a WARNING that it exists at all? YES. Do you know what you would have gotten with an engineer-month and 20k in compute trying to write GCC from scratch in 2 weeks in 2024? A whole heck of a lot less than they got.
This notion that everything is the same just didn't make contact with 2025, and we're in 2026 now. All of software is already changing, and HN is full of wanking about all the wrong stuff.
I've been building a native macOS AI client in Swift — it's 15MB, provider-agnostic, and open source: https://github.com/dinoki-ai/osaurus
Committing to one platform well beats a mediocre Electron wrapper on all three.
You just have to be really careful because the agent can easily slip into JS hell; it has no shortage of that in its training.
It is easy to crank out a one-off, flashy tool using Claude (to demo its capabilities), which may tick 80% of the development work.
If you have to maintain it, improve it, and grow it for the long haul, good luck with it. That's the hard 20%.
They took the safe bet!
A few years ago maybe. Tauri makes better sense for this use case today - like Electron but with system webviews, so at least doesn't bloat your system with extra copies of Chrome. And strongly encourages Rust for the application core over JS/Node.
You would think with programming becoming completely automated by the end of 2026, there'd be a vibe coded native port for every platform, but they must be holding back to keep us from all getting jealous.
It's a nodejs app, and there is no reason to have a problem with that. Nodejs can wait for inference as fast as any native app can.
Also I refuse to download and run Node.js programs due to the security risk. Unfortunately that keeps me away from opencode as well, but thankfully Codex and Vibe are not Node.js, and neither is Zed or Jetbrains products.
Node apps typically have serious software supply chain problems. Their dependency trees are typically unauditable in practice.
Most users are forced to use the software that they use. That doesn't mean they don't care, just that they're stuck.
BTW, this going to matter MORE now that RAM prices are skyrocketing..
https://www.techradar.com/computing/windows/microsoft-has-fi...
It seems like enough people do care to make Microsoft move.
We just don't know how bad it will get with AI coding, though. Do you think the average consumer won't care about software quality when their bank software "loses" a big transaction they make? Or when their TV literally stops turning on? People will tolerate shitty software if they have to, when it's minor annoyances, but it makes them unhappy and they won't tolerate big problems for long.
I use Opus 4.6 (for complex refactoring), Gemini 3.1 Pro (for html/css/web stuff) and GPT Codex 5.3 (workhorse, replaced Sonnet for me because in Copilot it has larger context) mostly.
For small tools. But also for large projects.
Current projects are:
1) .NET C#, Angular, Oracle database. Around 300k LoC.
2) Full stack TypeScript with Hono on backend, React on frontend glued by trpc, kysely and PostgreSQL. Around 120k LoC.
Works well in both. I'm using plan mode and agent mode.
What helps a ton are e2e Playwright tests, which are executed by the agent after each code change.
My only complaint is that it tends to stutter after many sessions/hours. A restart fixes it.
$39/mo plan.
Yes, feel free to downvote me.
The fact that claude code is a still buggy mess is a testament to the quality of the dream they're trying to sell.
What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not sure really how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
And despite what Anthropic and OpenAI want you to think, these LLMs are not AGI. They cannot invent something new. They are only as good as the training data.
https://www.businessinsider.com/anthropic-claude-code-founde...
At most, VS Code might say that it has disabled lexing, syntax coloring, etc. due to the file size. But I don't care about that for log files...
It still might be true that Visual Studio Code uses more memory for the same file than Sublime Text would. But for me, it's more important that the editor runs at all.
The answer of course is that it can’t do it and maintain compatibility between all three well enough as it’s high effort and each has its own idiosyncrasies.
In Python it was very nearly a 1-shot; there was an issue with one watermark not showing up on one API endpoint that I had to give it a couple kicks at the can to fix. Go it was able to get, but it needed 5+ attempts at rework. Rust took ~10+, and Zig took maybe 15+.
They were all given the same prompt, though they all likely would have done much better if I'd had them build a test suite, or at least a manual testing recipe to follow.
That is why everyone jumped to building in Electron: it is based on web standards that are free, and it runs on Chromium, which kind of is tied to Google, but you are not tied to Google and don't have to pay them a fee. You can also easily provide roughly the same experience on mobile, skipping Android shenanigans.
It's LGPL, all you have to do is link GTK dynamically instead of statically to comply.
> to build win32 you have to pay developer fee to Microsoft.
You don't.
Not really; you can self-sign, but your native application will be met with a system prompt trying to scare users away. This is maddening of course, and I wish MS, Apple, and whatever others would die just for this thing alone. You fuckers leveraged huge support from developers writing for your platform, but no, it is of course not enough for you vultures; now let's rip money from the hands that fed you.
I only see these complaints on HN. Real users don't have this complaint. What kind of low-end machines are you running, that Chromium engine is too heavy for you?
> They are often laggy or unresponsive.
That's not due to Electron.
> They don’t integrate well with OS features.
If it is good enough for Microsoft Teams it is probably good enough for most apps. Teams can integrate with microphone, camera, clipboard, file system and so on. What else do you want to integrate with?
Not everyone is running the latest and greatest hardware, very few actually have the money for that. If you're running hardware from before this decade, or especially the early 2010s, the difference between an Electron app and a native app is unbelievably stark. Electron will often bring the device to its knees.
This is particularly pertinent on bulk-purchased corporate and education machines which are loaded down with mandated spyware and antivirus garbage and often ship with CPUs that lag many years behind, and in the case of laptops might even have dog slow eMMC storage which makes the inevitable virtual memory paging miserable.
These workers complain about performance on the machines we can afford. 16GB RAM and 256GB SSDs are the standard, as is 500MB/sec. internet for offices with 40 people, and my plans to upgrade RAM this year were axed by the insane AI chip boondoggle.
People on HN need to understand that not everyone works for a well-funded startup, or big tech company that is in the process of destroying democracy and the environment in the name of this quarter's profits!
BTW Teams has moved away from Electron, before it did I had to advise people to use the browser app instead of the desktop for performance reasons.
Real users complain differently: "My machine is slow". Electron itself is not very heavyweight (though not featherweight), but JS and DOM can cost a lot of resources. Right now my GMail tab has allocated 529 MB.
> That's not due to Electron.
Of course, but it takes some careful thought. BTW e.g. Qt apps can be pretty memory-hungry, too.
> good enough for Microsoft Teams
It's not easy to pick a more "beloved" application.
What an Electron app usually would miss is things like global shortcuts managed by macOS control panel, programmability via Automation, and the L&F of native controls. I personally don't usually miss any of these, but users who actually like macOS would usually complain.
I personally prefer to run Electron-ish apps, like Slack, in their Web versions, in a browser.
The free ride of ever increasing RAM on consumer devices is over because of the AI hyperscalers buying all fab capacity, leading to a real RAM shortage. I expect many new laptops to come with 8GB as standard and mid-range phones to have 4GB.
Software engineers need to start thinking about efficiency again.