Quoting Knuth without the entire context is also contributing to bloat.
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Is this actually a problem, though? The blog post dedicates an entire section to engineering tradeoffs, and perceived performance is one of them.
You complain about the UI not keeping up with keystrokes. As a counterexample I point out Visual Studio Code. Its UI is not as snappy as native GUI frameworks, but we get a top-notch user experience that's consistent across operating systems and desktop environments. That's a win, isn't it? How many projects can make that claim?
The blog post also has a section on how a significant part of the bloat is bad.
Is it a win? Why? Consistency across platforms is a branding and business goal, not an engineering one. Consistency itself doesn't specify a direction; it just makes things more familiar and easier to adopt without effort. It's also easier to sit all day and never exercise.
"It's what everybody does" or "it's what everybody uses" has never translated into it being good.
Notably, the engineers I respect the most, the ones making things that I enjoy using, none of them use vscode. I'm sure most will read this as an attack against their editor of choice, SHUN THE NON BELIEVER! But hopefully enough will realize that it's not actually an attack on them or their editor; I'm advocating for the best possible option, not the easiest to use. Could they use vscode? Obviously yes, they could. They don't, because the more experience you have, the easier it is to see that bloat gets in the way.
Curious what they use?
That's fine. They could very well be using the tool they always used. Support for vi bindings is not the best everywhere, and vim also works over terminal connections, which is great if you need to ssh somewhere to edit a few files.
If you have to work with anything related to TypeScript or even JavaScript, and your first option isn't vscode, you need to think long and hard about what you're doing.
There's the engineering maxim, which I completely and unequivocally support: perfection isn't achieved when there's nothing left to add, but when there's nothing left to take away.
But that's not enough to explain why it's the preferred editor for elite tier eng.
The thing it offers, in contrast to everything else, is simplicity. Everyone loves to pretend that vi is so difficult that it is impossible to quit. But if you can forgive its steep learning curve, it does provide an accelerated experience. And then, critically, it's already out of your way.
All experts advocate the idea behind the quote, "if you give me 6 hours to cut down a tree, I'd spend the first 4 sharpening my axe." Learning the keys of vim is that same sharpening.
I used to use Sublime Text; my primary reason was that it was fast. That means it got out of my way.
Today, I use neovim. And I've never bothered to set up tab complete, or anything else like it. It does take me about 2 extra seconds per meaningful code block to type the extra characters needed. But in trade for those seconds, I'm granted the intuition for the name of the exact stdlib function I want to call. It lives not just in my head; I've also developed the habit of understanding the context behind the call.
The feature neovim gives its users is the intuition and the confidence to reason about the code they've written.
There's a lot of anxiety going around about the future of software development, related to AI. The people who have invested the mental energy of learning vim aren't worried, because it's exceptionally obvious that LLMs are pathetic when compared to the quality they've learned to emit naturally.
Or, more simply: if you're the type of person who's willing to invest the mental effort to become good at something, Vim is engineered to make you even better. Contrast that with vscode, which has been engineered to make it easier to type... but then all that time spent has only made you good at the things AI can already do.
tldr; vscode improves the typing experience, vim improves the thinking experience. AI isn't coming for the jobs of the thinkers...
Homebrew's got you covered though!
Nothing about a cross-platform UI requires that it not be snappy. Or that Electron is the best option possible.
Did VSCode do a good job with the options available? Maybe, maybe not. But the options are where I think we should focus.
Having to trade off between two bad options means you’ve already lost.
Perceived performance should never be a tradeoff, only the measured performance impact can be one.
My iPhone SE from 2020 has input delays of up to 2s after the update to iOS 26, and that's just really disappointing. I wouldn't mind if it were in the 0.3s range, even though that would still be terrible from a measured-performance POV.
I have definitely run into issues with the UI not visually keeping up with keystrokes in VSCode (occasionally), and also other Electron apps (more often - perhaps they haven't had as much attention to optimization as VSCode has). For this reason alone, I dislike the Electron ecosystem and I am extremely interested in projects to develop alternative cross platform renderers.
Ultimately I would like to see Electron become a dead project so I never have to run into some interesting or useful or mandatory piece of software I need to use that was written using Electron because it was the most obvious choice for the developer.
In physical disciplines, like mechanical engineering, civil engineering, or even industrial design, there is a natural push towards simplicity. Each new revision is slimmer & more unified–more beautiful because it gets closer to being a perfect object that does exactly what it needs to do, and nothing extra. But in software, possibly because it's difficult to see into a computer, we don't have the drive for simplicity. Each new LLVM binary is bigger than the last, each new HTML spec longer, each new JavaScript framework more abstract, each new Windows revision more bloated.
The result is that it's hard to do basic things. It's hard to draw to the screen manually because the graphics standards have grown so complicated & splintered. So you build a web app, but it's hard to do that from scratch because the pure JS DOM APIs aren't designed for app design. So you adopt a framework, which itself is buried under years of cruft and legacy decisions. This is the situation in many areas of computer science–abstractions on top of abstractions and within abstractions, like some complexity fractal from hell. Yes, each layer fixes a problem. But all together, they create a new problem. Some software bloat is OK, but all software bloat is bad.
Security, accessibility, and robustness are great goals, but if we want to build great software, we can't just tack these features on. We need to solve the difficult problem of fitting in these requirements without making the software much more complex. As engineers, we need to build a culture around being disciplined about simplicity. As humans, we need to support engineering efforts that aren't bogged down by corporate politics.
One example is skirt length. You have fashion and the only thing about it is change. If everybody's wearing short skirts, then longer skirts will need to be launched in fashion magazines and manufactured and sent to shops in order to sell more. The actual products have not functionally changed in centuries.
But clothes still have to look nice. Fashion designers have a motivation to make clothes that serve their purpose elegantly. Inelegance would be adding metal rails to a skirt so that you could extend its length at will. Sure, the new object has a new function, and its designer might feel clever, but it is uglier. But ugly software and beautiful software often look the same. So software trends end up being ugly, because no one involved had an eye for beauty.
Yes. I’ve been working for years on building a GPU-based scientific visualization library entirely in C, [1] carefully minimizing heap allocations, optimizing tight loops and data structures, shaving off bytes of memory and microseconds of runtime wherever possible. Meanwhile, everyone else seems content with Electron-style bloat weighing hundreds of megabytes, with multi-second lags and 5-FPS interfaces. Sometimes I wonder if I’m just a relic from another era. But comments like this remind me that I’m simply working in a niche where these optimizations still matter.
The library you built looks fucking awesome, by the way. However, I think even you acknowledged on the page that Matplotlib may well be good enough for many use cases. If someone knows an existing tool extremely well, any replacement needs to be a major step change to solve a problem that couldn't be solved in existing, inefficient, tools.
Too many people have the "Premature optimization is the root of all evil" quote internalized to a degree they won't even think about any criticisms or suggestions.
And while they might be right concerning the small stuff, it often piles up; in the end, because you chose several times not to optimize, your technology choices and architecture decisions add up to a bloated mess that can't be salvaged anyway.
Like, when you choose a web framework for a desktop app, the install size, memory footprint, slower performance, etc. might not matter looked at individually, but in the end it can all easily add up and your solution might just suck without much benefit to you. Pragmatism seems to be the hardest thing for most developers to learn, and so many solutions get blown out of proportion instantly.
Yeah I find it frustrating how many people interpret that quote as "don't bother optimizing your software". Here's the quote in context from the paper it comes from:
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
> Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth isn't saying "don't bother optimizing", he's saying "don't bother optimizing before you profile your code". These are two very different points.
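As a minimal illustration of that point (the phase names here are invented, and this just uses Node's built-in timer rather than a real profiler), measuring first tells you which 3% is even worth touching:

```typescript
// Hypothetical two-phase program; the only point is to measure before optimizing.
import { performance } from "node:perf_hooks";

function parseInput(raw: string): string[] {
  return raw.split(",");                              // stand-in "noncritical" phase
}

function computeStats(items: string[]): number {
  return items.reduce((acc, s) => acc + s.length, 0); // stand-in "critical" phase
}

function timed<T>(label: string, fn: () => T): T {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(2)} ms`);
  return result;
}

const raw = Array.from({ length: 200_000 }, (_, i) => `item${i}`).join(",");
const items = timed("parse", () => parseInput(raw));
timed("stats", () => computeStats(items));
// Only the phase that dominates the measured time deserves optimization effort.
```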
Reading the section you are quoting from (as well as the section of the conclusion dealing with efficiency), I think it should be clear that in the context of this paper, “optimization” means performance enhancements that render the program incomprehensible and unmaintainable. This is so far removed from what anyone in the last 30+ years thinks of when they read the word “optimization” that we are probably better off pretending that this paper was never written. And smacking anyone that quotes it.
I've posted a better scan here: https://shreevatsa.net/tmp/2025-06/DEK-P67-Structured.progra...
My boss (and mentor) from 25 years ago told me to think of the problems I was solving with a 3-step path:
1. Get a solution working
2. Make the solution correct
3. Make the solution efficient
Most importantly, he emphasized that the work must be done in that order. I've taken that everywhere with me.
I think one of the problems is that quite often, due to business pressure to ship, step 3 is simply skipped. Often, software is shipped half-way through step 2 -- software that is at best partially correct.
This pushes the problem down to the user, who might be building a system around the shipped code. That compounds the problem of software bloat, as all the gaps have to be bridged.
Any different interpretation in my opinion leads to slow, overbloated software.
Databases in particular, since that’s my job. “This query runs in 2 msec, it’s fast enough.” OK, but it gets called 10x per flow because the ORM is absurdly stupid; if you cut it down by 500 microseconds, you’d save 5 msec. Or if you’d make the ORM behave, you could save 18 msec, plus the RTT for each query you neglected to account for.
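For the shape of the problem (this is a sketch with a stubbed-out query function, not any particular ORM's API), the difference is simply N round trips versus one:

```typescript
// Stub standing in for a database round trip; each call represents ~2 ms plus RTT.
let roundTrips = 0;

function runQuery(sql: string, params: (number | string)[]): Array<{ id: number; name: string }> {
  roundTrips++;
  return params.map((p) => ({ id: Number(p), name: `user-${p}` })); // stubbed rows
}

// What a careless ORM often generates: one query per entity touched in the flow.
function loadNamesNaively(ids: number[]): string[] {
  return ids.map((id) => runQuery("SELECT id, name FROM users WHERE id = ?", [id])[0].name);
}

// What the flow actually needs: one query for the whole batch.
function loadNamesBatched(ids: number[]): string[] {
  const placeholders = ids.map(() => "?").join(", ");
  return runQuery(`SELECT id, name FROM users WHERE id IN (${placeholders})`, ids)
    .map((row) => row.name);
}

const ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
loadNamesNaively(ids);
console.log(roundTrips);   // 10 round trips at ~2 ms each, plus 10 RTTs
roundTrips = 0;
loadNamesBatched(ids);
console.log(roundTrips);   // 1 round trip for the same data
```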
Do we need a dozen components of half a million lines each maintained by a separate team for the hotdesk reservation page? I'm not sure, but I'm definitely not willing to endure the conversation that would follow from asking.
> Don't write stupid slow code
The context was that they wrote a double-lookup in a dictionary, and I was encouraging them to get into the habit of only doing a single lookup.
Naively, one could argue that I was proposing a premature optimization; but the point was that we should develop habits where we choose the more efficient route when it adds no cost to our workflow and keeps code just as readable.
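A toy version of what I mean, here with a TypeScript Map (the same shape applies to dictionaries in most languages):

```typescript
const prices = new Map<string, number>([["apple", 3], ["pear", 5]]);

// Double lookup: `has` probes the table, then `get` probes it again.
function priceOfDouble(item: string): number {
  if (prices.has(item)) {
    return prices.get(item)!;
  }
  return 0;
}

// Single lookup: one probe, just as readable.
function priceOfSingle(item: string): number {
  return prices.get(item) ?? 0;
}

console.log(priceOfDouble("apple"), priceOfSingle("apple")); // 3 3
```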
The time needed from the moment you launched the game (clicked on the .exe) to the moment you entered the server (to the map view) with all assets 100% loaded was about 1 second. Literally! You click the icon on your desktop and BAM! you're already on the server and you can start shooting. But that was written by John Carmack in C :-)
From other examples - I have a "ModRetro Chromatic" at home which is simply an FPGA version of the Nintendo Game Boy. On this device, you don't see the falling "Nintendo" text with the iconic sound known from normal Game Boys. When I insert a cartridge and flip the Power switch, I'm in the game INSTANTLY. There's simply absolute zero delay here. You turn it on and you're in the game, literally just like with that Quake.
For comparison - I also have a Steam Deck, whose boot time is so long that I sometimes finish my business on the toilet before it even starts up. The difference is simply colossal between what I remember from the old days and what we have today. On old Windows 2000, everything seemed lighter than on modern machines. I really miss that.
Windows 2000 boots up fast on modern hardware. You're looking through rose-colored-ass glasses if you think it booted up that quick on the hardware available at the time of release. Windows NT was a pig in its day, but at least it was a clean pig, free of spyware and other unnecessary crapware (unless you were like a client site I visited, and just let Bonzi Buddy, Comet Cursor, and such run rampant across your sensitive corporate workstations).
Stack enough layers - framework on library on abstraction on dependency - and nobody understands what the system does anymore. Can't hold it in your head. Debugging becomes archaeology through 17 layers of indirection. Features work. Nobody knows why. Nobody dares touch them.
TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems. Unknowable systems are unmaintainable by definition.
The "developer time is more valuable than CPU cycles" argument falls apart here. You're not saving time. You're moving the cost. The hours you "saved" pulling in that framework? You pay them back with interest every time someone debugs a problem spanning six layers of abstraction they don't understand
It often feels to me like we’ve gone far down the framework road, and frameworks create leaky abstractions. I think frameworks are often understood as saving time, simplifying, and offloading complexity. But they come with a commitment to align your program to the framework’s abstractions. That is a complicated commitment to make, with deep implications, that is hard to unwind.
Many frameworks can be made to solve any problem, which makes things worse. It invites the “when all you’ve got is a hammer, everything looks like a nail” mentality. The quickest route to a solution is no longer the straight path, but to make the appropriate incantations to direct the framework toward that solution, which necessarily becomes more abstract, more complex, and less efficient.
This is specious reasoning, as "optimized" implementations typically resort to performance hacks that make code completely unreadable.
> TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems.
I think you're confusing things. Bloat and lack of a clear software architecture are not the same thing. Your run-of-the-mill app developed around a low-level GUI framework like the win32 API tends to be far more convoluted and worse to maintain than equivalent apps built around high-level frameworks, including Electron apps. If you develop an app into a big ball of mud, you will have a bad time figuring it out regardless of what framework you're using (or not using).
I'm saying: those same layers create a different maintainability problem that TFA ignores. When you stack framework on library on abstraction, you create systems nobody can hold in their head. That's a real cost.
You can have clean architecture and still hit this problem. A well-designed 17-layer system is still 17 layers of indirection between "user clicks button" and "database updates".
That really depends on context, and you're generalizing based on assumptions that don't hold true:
Replacing bloated ORM code with hand-written SQL can be significantly more readable if it boils down to a simple query that returns rows that neatly map to objects. It could also boil down to a very complicated, hard to follow query that requires gymnastics to populate an object graph.
The same can be said for optimizing CPU usage. It might be a case of removing unneeded complexity, or it could be a case of microoptimizations that require unrolling loops and copy & paste code.
---
I should point out that I've lived the ORM issue: I removed an ORM from a product and it became industry-leading for performance, and the code was so clean that newcomers would compliment me on how easy it was to understand data access. In contrast, the current product that I work on is a clear example of when an ORM is justified.
I've also lived the CPU usage issue: I had to refactor code that was putting numeric timestamps into strings, and then had complicated code that would parse the strings to perform math on the timestamps. The refactor involved replacing the strings with a defined type. Not only was it faster, the code was easier to follow because the timestamps were well encapsulated.
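Something along these lines, as a rough sketch of the shape of that refactor (the names are invented for illustration, and the original code wasn't necessarily TypeScript):

```typescript
// Before: numbers smuggled through strings, re-parsed for every bit of math.
function elapsedMsFromStrings(startField: string, endField: string): number {
  return parseInt(endField, 10) - parseInt(startField, 10);
}

// After: a small branded type keeps timestamps encapsulated, no parsing needed.
type TimestampMs = number & { readonly __brand: "TimestampMs" };

function timestamp(ms: number): TimestampMs {
  return ms as TimestampMs;
}

function elapsedMs(start: TimestampMs, end: TimestampMs): number {
  return end - start;
}

const t0 = timestamp(Date.now());
const t1 = timestamp(Date.now() + 1500);
console.log(elapsedMsFromStrings(String(t0), String(t1)), elapsedMs(t0, t1)); // both 1500
```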
Bloat (do you mean code duplication here?) can be both a cause and a symptom of a maintainability problem. It's a vicious cycle. A spaghetti-code mess (not the same thing as bloat) will be prone to future bloat, because developers don't know what they are doing, in the bad sense. You can be unfamiliar with the entire system, but if the code is well organized, reusable, modular, and testable, you can still work with it relatively comfortably and have little worry of introducing horrible regressions (as you would in spaghetti code). You can also do refactors much more easily. Meanwhile, badly managed spaghetti code is much less testable and reusable; when developers work with such code, they often don't want to reuse the existing code, because it is already fragile and not reusable. For each feature they prefer to create a new function or duplicate one.
It's a vicious cycle: the code starts to rot, becoming more and more unmaintainable, duplicated, fragile, and, very likely, inefficient. This is what I meant.
If we worked hard to keep OS requirements to a minimum- could we be looking at unimaginably improved battery life? Hyper reliable technology that lasts many years? Significantly more affordable hardware?
We know that software bloat wastes RAM and CPU, but we can't know what alternatives we could have had if we hadn't spent our metaphorical budget on bloat already.
Volunteer-supported UNIX-like OS, e.g., NetBSD, represents the closest to this ideal for me
I am able to use an "old" operating system with new hardware. No forced "upgrades" or remotely-installed "updates". I decide when I want to upgrade software. No new software is pre-installed
This allows me to observe and enjoy the speed gains from upgrading hardware in a way I cannot with a corporate operating system. The later will usurp new hardware resources in large part for its own commercial purposes. It has business goals that may conflict with the non-commercial interests of the computer owner
It would be nice if software did not always grow in size. It happens to even the simplest of programs. Look at the growth of NetBSD's init over time for example
Why not shrink programs instead of growing them
Programmers who remove code may be the "heros", as McIlroy once suggested ("The hero is the negative coder")
If, with a reasonable battery, standby mode can only last a few weeks and active use is at best a few days, then you might as well add a fairly beefy CPU, and with a beefy CPU, OS optimizations only go so far. This is why eInk devices can end up with such a noticeably longer lifespan: they have a reason to put in a weak CPU and do some optimization, because the possibility of a long lifespan is a huge potential selling point.
Turns out modern ubuntu will only install Firefox as a snap. And snap will then automatically grow to fill your entire hard drive for no good reason.
I'm not quite sure how people decided this was an approach to package management that made sense.
A 500MB Electron app can be easily a 20MB Tauri app.
In either case you end up with a fresh instance of the browser (unless things have changed in Tauri since I last looked), distinct from the one serving you generally as an actual browser, so both do have the same memory footprint in that respect. So you are right, that is an issue for both options, but IME people away from development seem more troubled by the package size than by interactive RAM use. Tauri apps are likely to start faster from cold, as Electron is loading a complete new browser for which every last byte used needs to be read from disk; I think the average non-dev user will be more concerned about that than memory use.
There have been a couple of projects trying to be Electron, complete with NodeJS, but using the user's currently installed & default browser like Tauri does, and some others that replace the back-end with something lighter-weight, even more like Tauri, but most of them are currently unmaintained, still officially alpha, or otherwise incomplete/unstable/both. Electron has the properties of being here, being stable and maintained, and being good enough until it isn't (and once it isn't, those moving off it tend to go for something else completely rather than another system very like it). It is difficult for a newer similar project to compete with the momentum Electron has when the “escape route” from it is generally to something more completely different.
I've never seen a real world Electron app with a large userbase that actually has that many dependencies or performance issues that would be resolved by writing it as a native app. It's baffling to me how many developers don't realize how much latency is added and memory is used by requiring many concurrent HTTP requests. If you have a counterexample I'd love to see it.
If you build towards everyone, you end up with a large standard like Unicode or IEEE 754. You don't need everything those standards have for your own messages or computations, sometimes you find them counter to your goal in fact, and they end up wasting transistors, but they are convenient enough to be useful defaults, convenient enough to store data that is going to be reused for something else later, and therefore they are ubiquitous in modern computing machines.
And when you have the specific computation in mind - an application like plotting pixels or ballistic trajectories - you can optimize the heck out of it and use exactly the format and features needed and get tight code and tight hardware.
But when you're in the "muddled middle" of trying to model information and maybe it uses some standard stuff but your system is doing something else with it and the business requirements are changing and the standards are changing too and you want it to scale, then you end up with bloat. Trying to be flexible and break up the system into modular bits doesn't really stave this off so much as it creates a Whack-a-Mole of displaced complexity. Trying to use the latest tools and languages and frameworks doesn't solve this either, except where they drag you into a standard that can successfully accommodate the problem. Many languages find their industry adoption case when a "really good library" comes out for it, and that's a kind of informal standardizing.
When you have a bloat problem, try to make a gigantic table of possibilities and accept that it's gonna take a while to fill it in. Sometimes along the way you can discover what you don't need and make it smaller, but it's a code/docs maturity thing. You don't know without the experience.
Indeed, if a language and framework has slow code execution, but facilitates efficient querying, then it can still perform relatively well.
Or actually not, and the list doesn't go beyond "users have more resources, so it's just easier to waste more resources"
> Layers & frameworks
There are a million of these, with performance difference of orders of magnitude. So an empty reference explains nothing re bloat
But also
> localization, input, vector icons, theming, high-DPI
It's not bloat if it allows users to read text in an app! Or read one that's not blurry! Or one that doesn't "burn his eyes"
> Robustness & error handling / reporting.
Same thing: are you talking about a washing machine sending gigabytes of data per day for no improvement whatsoever "in robustness"? Or are you talking about some virtualized development environment with perfect time travel/reproduction, where whatever hardware "bloat" is needed wouldn't even affect the user? What is the actual difference from error handling in the past, besides easy sending of your crash dumps?
> Engineering trade-offs. We accept a larger baseline to ship faster, safer code across many devices.
But we do not do that! The code is too often slower precisely because people have a ready list of empty statements like this
> Hardware grew ~three orders of magnitude. Developer time is often more valuable than RAM or CPU cycles
What about the value of your users' time and resources? Why ignore the reality outside of this simplistic dichotomy? Or will the devs not even see the suffering, because the "robust error handling and reporting" is nothing of the sort; it mostly /dev/nulls a lot of user experience?
As a former CS major (30 years ago) that went into IT for my first career, I've wondered about bloat and this article gave me the layman explanation.
I am still blown away by the comparison pointing out that the WebP image of Super Mario is larger than the Super Mario game itself!
But anyway, I think it's still very demonstrative when an entire game is smaller than its picture. Also consider that even your tiny PNG example (3.37 KiB) still cannot fit into the RAM / VRAM of a NES console, which shows the contrast between these eras in terms of memory.
That image has a similar problem to yours. It has been scaled up using some kind of interpolation which introduces a load of extra colours to smooth the edges. This is not a great fit for PNG, which is why it is 64KB.
The article claims that it is only 5.9KB. I guess it was that small originally and it's been mangled by the publishing process.
Anyway, I don't think we can have a 100% apples-to-apples comparison, because the game used a different compression and rendering technique. Also consider the actual amount of RAM / VRAM the images occupy: in RAM / VRAM they are probably in a decompressed form, which is much more memory.
FWIW, if you convert the 3KB image to 16 colours in GIMP (Image | Mode | Indexed... and choose "Generate Optimum Palette") it looks exactly the same. I'm pretty sure there are only 16 colours in the image. The resulting PNG is 1,991 bytes.
It's good enough, and React Native, for example, is spending years and millions on further optimizations to make their "good enough" faster; the work they do is well beyond my pay grade. (https://reactnative.dev/blog/2025/10/08/react-native-0.82#ex...)
For customer facing stuff, I think it's worth looking into frameworks that do backend templating and then doing light DOM manipulation to add dynamism on the client side. Frameworks like Phoenix make this very ergonomic.
It's a useful tool to have in the belt.
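A minimal sketch of what that looks like on the client side (the /fragments/cart endpoint and the element IDs are made up; the server side could be Phoenix, Django, Rails, or anything that renders HTML):

```typescript
// Ask the server for a ready-rendered HTML fragment and swap it into the page.
async function refreshCart(): Promise<void> {
  const response = await fetch("/fragments/cart");
  const html = await response.text();
  const target = document.querySelector<HTMLElement>("#cart");
  if (target) {
    target.innerHTML = html;   // the server did the templating; we only splice it in
  }
}

// The "few lines of JavaScript here and there": wire the button, nothing more.
document.querySelector("#refresh-cart")?.addEventListener("click", () => {
  void refreshCart();
});
```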
And the answer is almost always "nothing" because "good enough" is fine.
People like to shit on development tools like Electron, but the reality is that if the app is shitty on Electron, it'd probably be just as shitty on native code, because it is possible to write good Electron apps.
Right off the bat it'll save hundreds of MB in app size, with a noticeable startup-time drop, so no, it won't be just as shitty.
> because it is possible to write good Electron apps.
The relevant issue is the difficulty in doing that, not the mere possibility.
But it's still bloated compared to the editor I use, Emacs.
And it's still bloated compared to a Java-based IDE of equivalent functionality. (Eclipse and IntelliJ can do much more OOTB than VS Code can.)
That said, all of engineering is a tradeoff, and tradeoffs mean accepting some amount of bad in exchange for some amount of good.
In these times, though, companies seem to be very willing to accept bloat for marginal or nonexistent returns, and this is one of the reasons why, in my opinion, so much of the software being released these days is poor.
I had to write an Android app recently. I don't like bloat, so I disabled all libraries. Well, I did it, but I was jumping through many hoops. Android development presumes that you're using the appcompat libraries and some others. In the end my APK was 30 KB and worked on every smartphone I was interested in (from Android 8 to Android 16). The Android Studio Hello World APK is about 2 MB, if I remember correctly. This is truly madness.
- Layers & frameworks: We always had some layers and frameworks, the big one being the operating system. The problem is that now, instead of having these layers shared between applications (shared libraries, OS calls, etc...), every app wants to do their own thing and they all ship their own framework. For the calculator example, with the fonts, common controls, rendering code, etc... the global footprint was probably several MBs even in the early days, but the vast majority of it was shared with other applications and the OS shell itself, resulting in only a few extra kB for the calculator.
- Security & isolation: That's mostly the job of the OS and even the hardware. But the one reason why we need security so much is that the more bloated your code is, the more room there is for vulnerabilities. We don't need to isolate components that don't exist in the first place.
- Robustness & error handling / reporting: The less there is, the less can go wrong, so it's more robust and there are fewer errors to handle.
- Globalization & accessibility: It's true that this adds some bloat; however, it's something the OS should take care of. If everyone uses the same shared GUI toolkit, only that toolkit has to deal with these issues. Note that many of these problems were already addressed in the Windows 9x era.
- Containers & virtualization: Containerization is a solution to dependency hell and non-portable code; you carry your entire environment with you so you don't have to adjust to a new environment. The more dependencies you have, i.e. the more bloat, the more you need it. And going back to security and accessibility: since you are now shipping your environment, you don't benefit from system-wide updates that address these issues.
- Engineering trade-offs: that is, computers are cheap and developers are expensive. We are effectively trading the time to hand-craft lightweight, optimized software for keeping up with the bloat.
I get the author's point, but I believe that most of it is self-inflicted. I remember the early days of Android. I had a Nexus One, 512MB RAM / 1GHz single core CPU / 512MB Flash + 4GB MicroSD and it could do most of what I am doing now with a phone that is >10x more powerful in every aspect. And that's with internationalization, process isolation, the JVM, etc... Not only that but Google was much smaller back then, so much for lowering development costs.
I think this is a strawman?
But it's pretty ridiculous if anyone believes this.
FOSS code can't be "stolen." The whole point of the GPL and the free software movement is that software should be free to use and modify.
P.S. Does someone know anyone who tested this?
Putting React with those two is a wild take.
> 99% percent of websites would work a lot better with SSR and a few lines of JavaScript here and there and there is zero reason to bring anything like React to the table.
Probably, but as soon as you have a modicum of logic in your page, the primitives of the web are a pain to use.
Also, I must be building stuff in the 1% space. I actually did it before: I built an app that's entirely client-side, with Vue, and "serverless" in the sense that it's distributed as one single HTML file. Although we changed that in the last few months to host it on a proper server.
The level of psychological trauma that some back-end devs seem to endure is hilarious though. Like I get it, software sucks and it's sad but no need to be dramatic about it.
And btw, re forbidding stuff: no library, no process, no method can ever substitute for actually knowing what you're doing.
Can you elaborate more on how this works? Do you mean JS loading server generated HTML into the DOM?
Note that with this approach you don't need to "render" anything; the browser has already done it for you. You're merely attaching functionality to DOM elements in the form of Component instances.
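Roughly like this (the data-component attribute and the Counter class are made-up conventions for the sketch, not any particular framework's API):

```typescript
// The HTML arrives fully rendered from the server; we only wire behavior onto it.
class Counter {
  private count = 0;

  constructor(el: HTMLElement) {
    el.addEventListener("click", () => {
      this.count++;
      el.textContent = `Clicked ${this.count} times`;
    });
  }
}

// Attach a Component instance to every element the server marked for it.
document.querySelectorAll<HTMLElement>('[data-component="counter"]')
  .forEach((el) => new Counter(el));
```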
I entirely agree. It is what I do when I have to, although I mostly do simple JS, as I am really a backend developer, and if I do any front end it's "HTML plus a bit of JS" and I just write JS loading stuff into divs by ID.
When I have worked with front-end developers doing stuff in React, it has been a horrible experience. In the very worst case they used Next.js to write a second backend that sat between my existing Django backend (which had been done earlier) and the front end. Great for latency! It was an extreme example, but it really soured my attitude to complex front ends. The project died.
That's hilarious.
Casey Muratori truly is right when he says to "non-pessimize" software (= make it do what it should do and not more), before optimizing it.
The problem was that the front end developers involved decided to use Next.js to replace the front end of a mostly complete Django site. I think it was very much a case of someone just wanting to use what they knew regardless of whether it was a good fit - the "when all you have is a hammer, everything looks like a nail" effect.
I did a search and a lot of people are promoting the concept. Maybe it makes sense if you have a strong reason to use micro services, but for the vast majority of systems it seems crazy!
We could go further and have a language written to run in a sandboxed VM especially for that, with a GUI library designed for the task instead of being derived from a document format.