I attempted to validate this: you'd need >75 TFlop/s to get into the top 50 of the TOP500[0] rankings in 2009. An M4 Max review says 18.4 TFlop/s at FP32, but TOP500 uses LINPACK, which runs at FP64 precision.
An M2 benchmark gives a 1:4 ratio for double precision, so you'd get maybe 4-5 TFlop/s at FP64? That wouldn't make it into the TOP500 in 2009.
> The package managers we benchmarked weren't built wrong, they were solutions designed for the constraints of their time.
> Bun's approach wasn't revolutionary, it was just willing to look at what actually slows things down today.
> Installing packages 25x faster isn't "magic": it's what happens when tools are built for the hardware we actually have.
Well, no. The particular thread of execution might have been spending 95% of time waiting for I/O, but a server (the machine serving the thousands of connections) would easily run at 70%-80% of CPU utilization (because above that, tail latency starts to suffer badly). If your server had 5% CPU utilization under full load, you were not running enough parallel processes, or did not install enough RAM to do so.
Well, it's a technicality, but the post is devoted to technicalities, and such small blunders erode trust in the rest of the post. (I'm saying this as a fan of Bun.)
That's even less accurate. By two orders of magnitude. High-end servers in 2009 had way more than 4GB. The (not even high-end) HP ProLiant I installed for a small business in 2008, already bought used at the time, had 128GB of RAM.
I understand why one would want to make an article entertaining, but that seriously makes me doubt the rest of the article when it dives into topics I don't know as well.
Also: I love that super passionate people still exist, and are willing to challenge the status quo by attacking really hard things - things I don't have the brain to even think about. It's not normal that we have better computers each month and slower software. If only everyone (myself included) were better at writing more efficient code.
Amazing to see it being used in a practical way in production.
There'll probably be a strategy (AEO?) for this in the future for newcomers and the underrepresented: for instance, endless examples posted by a sane AI to their docs and GitHub so they get picked up by training sets or by live, tool-calling web searches.
For future languages, maybe it's better to already have a dev name and a release name from the get go.
I do almost all of my development in vanilla JS despite loathing the Node ecosystem, so I really should have checked it out sooner.
Much better than Node.
However...!
I always managed to hit a road block with Bun and had to go back to Node.
First it was the crypto module that wasn't compatible with Node.js signatures (now fixed); then Playwright refused to work with Bun (via Crawlee).
Does it work if I have packages that have Node.js C++ addons?
They're still missing niche things and they tend to target the things that most devs (and their dependencies) are actually using.
But I can see they have it in their compat stuff now and it looks like it's working in the repl locally: https://docs.deno.com/api/node/dgram/
A single static Zig executable isn’t the same as a pipeline of package management dependencies susceptible to supply chain attacks and the worst bitrot we’ve had since the DOS era.
Zero.
I'm guessing you're looking at the `devDependencies` in its package.json, but those are only used by the people building the project, not by people merely consuming it.
Vitamins/supplements? Sleep? Exercise? Vacations?
I have sprints of great productivity but it's hard to keep it for long.
Lydia is very good at presenting complex ideas simply and well. I've read or watched most of her articles and videos. She really goes to great lengths in her work to make it come to life. Highly recommend her articles and YouTube videos.
Though she's been writing less lately, I think due to her current job.
This leads them to the incorrect conclusion that bun fresh runs are faster than npm cached, which doesn’t seem to be the case.
I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?
Can someone who tried bun and didn't adopt it personally or at work chime in and say why?
[0] https://aleyan.com/blog/2025-task-runners-census/#javascript...
Considering how many people rely on a tailwind watcher to be running on all of their CSS updates, you may find that bun is used daily by millions.
We use Bun for one of our servers. We are small, but we are not goofing around. I would not recommend it yet for anything except where it has a clear advantage - but there are areas where it is noticeably faster or easier to set up.
Last big issue I had with Bun was streams closing early:
https://github.com/oven-sh/bun/issues/16037
Last big issue I had with Deno was a memory leak:
https://github.com/denoland/deno/issues/24674
At this point I feel like the Node ecosystem will probably adopt the good parts of Bun/Deno before Bun/Deno really take off.
https://github.com/oven-sh/bun/commit/b474e3a1f63972979845a6...
I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.
But the language hasn’t even reached 1.0 yet. A lot of the strategies for writing safe Zig aren’t fully developed.
Yet, TigerBeetle is written in Zig and is an extremely robust piece of software.
I think the focus of Bun is probably more on feature parity in the short term.
Sure, they have some nice stuff that should also be added in Node, but nothing compelling enough to deal with ecosystem change and breakage.
It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.
It was better than npm with useful features, but then npm just added all of those features after a few years and now nobody uses it.
You can spend hours every few years migrating to the latest and greatest, or you can just stick with npm/node and you will get the same benefits eventually
In the interim, I am very glad we haven't waited.
Also, we switched to Postgres early, when my friends were telling me that eventually MySQL would catch up. Which, in many ways, it did, but I still appreciate that we moved.
I can think of other choices we made - we try to assess the options and choose the best tool for the job, even if it is young.
Sometimes it pays off in spades. Sometimes it causes double the work and five times the headache.
That said, for many work projects, I need to access MS-SQL, and the way it does socket connections isn't supported by the Deno runtime, or some such. Which limits what I can do at work. I suspect there's a few similar sticking points with Bun for other modules/tools people use.
It's also very hard to break away from that inertia. Node+npm had over a decade and a lot of effort to build an ecosystem that people aren't willing to just abandon wholesale.
I really like Deno for shell scripting because I can use a shebang, reference dependencies and the runtime just handles them. I don't have the "npm install" step I need to run separately, it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either, they're used from a shared (configurable) location. I suspect bun works in a similar fashion.
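For anyone who hasn't tried that workflow, a minimal sketch of the pattern (the module and flags here are just illustrative; pick whatever your script actually needs):
---
#!/usr/bin/env -S deno run
// Dependencies are declared right in the import; Deno fetches them on first
// run and caches them in a shared location -- no separate install step and
// no local node_modules/ to pollute ~/bin/.
import { parseArgs } from "jsr:@std/cli/parse-args";

const flags = parseArgs(Deno.args, { string: ["name"] });
console.log(`Hello, ${flags.name ?? "world"}!`);
---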
That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.
https://dev.to/hamzakhan/rust-vs-go-vs-bun-vs-nodejs-the-ult...
2x in specific microbenchmarks doesn’t translate to big savings in practice. We don’t serve a static string with an application server in prod.
I write a lot of one off scripts for stuff in node/ts and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.
LLMs default to npm
> Bun takes a different approach by buffering the entire tarball before decompressing.
But it seems to sidestep _how_ it does this any differently from the "bad" snippet the section opened with (presumably it checks the Content-Length header when it's fetching the tarball or something, and can assume the size it gets from there is correct). All it says about this is:
> Once Bun has the complete tarball in memory it can read the last 4 bytes of the gzip format.
Then it explains how it can pre-allocate a buffer for the decompressed data, but we never saw how this buffer allocation happens in the "bad" example!
> These bytes are special since they store the uncompressed size of the file! Instead of having to guess how large the uncompressed file will be, Bun can pre-allocate memory to eliminate buffer resizing entirely
Presumably the saving is in the slow package managers having to expand _both_ of the buffers involved, while bun preallocates at least one of them?
https://github.com/oven-sh/bun/blob/7d5f5ad7728b4ede521906a4...
We trust the self-reported size by gzip up to 64 MB, try to allocate enough space for all the output, then run it through libdeflate.
This is instead of a loop that decompresses it chunk-by-chunk and then extracts it chunk-by-chunk, resizing a big tarball many times over.
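A rough TypeScript sketch of that idea (illustrative only - the real implementation is the Zig code linked above; the function name and the 64 MB cap here are mine):
---
// With the whole tarball buffered, the gzip trailer's ISIZE field tells us
// the uncompressed size, so the output buffer can be allocated once up front.
const SIZE_HINT_CAP = 64 * 1024 * 1024; // mirrors the "trust up to 64 MB" rule

function outputSizeHint(gz: Uint8Array): number {
  // ISIZE = uncompressed size mod 2^32, little-endian, in the last 4 bytes.
  const view = new DataView(gz.buffer, gz.byteOffset + gz.byteLength - 4, 4);
  const isize = view.getUint32(0, true);
  return Math.min(isize, SIZE_HINT_CAP);
}

// const out = Buffer.allocUnsafe(outputSizeHint(tarball)); // then decompress into it
---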
I think my actual issue is that the "most package managers do something like this" example code snippet at the start of [1] doesn't seem to quite make sense - or doesn't match what I guess would actually happen in the decompress-in-a-loop scenario?
As in, it appears to illustrate building up a buffer holding the compressed data that's being received (since the "// ... decompress from buffer ..." comment at the end suggests what we're receiving in `chunk` is compressed), but I guess the problem with the decompress-as-the-data-arrives approach in reality is having to re-allocate the buffer for the decompressed data?
[1] https://bun.com/blog/behind-the-scenes-of-bun-install#optimi...
A few things:
- I feel like this post, repurposed, could be a great explanation of why io_uring is so important.
- I wonder if Zig's recent I/O updates in 0.15 bring any performance improvement to Bun beyond its already fast performance.
So many of these concepts (Big O, temporal and spatial locality, algorithmic complexity, lower level user space/kernel space concepts, filesystems, copy on write), are ALL the kinds of things you cover in a good CS program. And in this and similar lower level packages, you use all of them to great effect.
CS is the study of computations and their theory (programming languages, algorithms, cryptography, machine learning, etc).
SE is the application of engineering principles to building scalable and reliable software.
progress: dynamically-linked musl binaries (tnx)
next: statically-linked musl binaries
> Bun does it differently. Bun is written in Zig, a programming language that compiles to native code with direct system call access:
Guess what, C/C++ also compiles to native code.
I mean, I get what they're saying and it's good, and Node.js could probably have done that as well, but didn't.
But don't phrase it like it's inherently not capable. No one forced npm to use this abstraction, and npm probably should have been a Node.js addon in C/C++ in the first place.
(If anything of this sounds like a defense of npm or node, it is not.)
Npm, pnpm, and yarn are written in JS, so they have to use Node.js facilities, which are based on libuv, which isn't optimal in this case.
Bun is written in Zig, so it doesn't need libuv and can do its own thing.
Obviously, someone could write a Node.js package manager in C/C++ as a native module to do the same, but that's not what npm, pnpm, and yarn did.
Or is the concern about the time spent in CI/CD?
It’s usually only worth it after ~tens of megabytes, but the vast majority of npm packages are much smaller than that. So if you can skip it, it’s better.
I end up hitting 500s from npm from time to time when installing with Bun, and I just don't know why.
Really wish the norm was that companies hosted their own registries for their own usage, so I could justify the expense and effort instead of dealing with registries being half busted kinda randomly.
Is this not the norm? I've never worked anywhere that didn't use/host their own registry - both for hosting private packages, but also as a caching proxy to the public registry (and therefore more control over availability, security policy)
https://verdaccio.org/ is my go to self hosted solution, but the cloud providers have managed solutions and there's also jFrog Artifactory.
One corollary of this is that many commercial usages of packages don't contribute much to download stats, as often they download each version at most once.
- Clean `bun install` (converted package-lock.json): 48s
- With bun.lock, no node_modules: 19s
- Clean `deno install --allow-scripts`: 1m20s
- With deno.lock, no node_modules: 20s
- Clean `npm i`: 26s
- `npm ci` (package-lock.json), no node_modules: 1m2s (wild)
So, it looks like if Deno added a package-lock.json conversion similar to Bun's, the installs would be very similar all around. I have no control over the security software used on this machine; it was just convenient since I was in front of it. Hopefully someone can put eyes on this issue: https://github.com/denoland/deno/issues/25815
Deno's dependency architecture isn't built around npm; that compatibility layer is a retrofit on top of the core (which is evident in the source code, if you ever want to see). Deno's core architecture around dependency management uses a different, URL-based paradigm. It's not as fast, but... It's different. It also allows for improved security and cool features like the ability to easily host your own secure registry. You don't have to use npm or jsr. It's very cool, but different from what is being benchmarked here.
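For a sense of what that looks like in practice, a tiny hedged example (the self-hosted URL is hypothetical, just to show the shape of it):
---
// A dependency in Deno's native model is just a URL or a jsr: specifier, so
// any HTTPS server you control can act as a "registry" -- no npm involved.
import { assertEquals } from "jsr:@std/assert";
// import { helper } from "https://registry.example.com/lib/helper.ts"; // hypothetical self-hosted module

assertEquals(1 + 1, 2);
---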
edit: replied to my own post... looks like `deno install --allow-scripts` is about 1s slower than bun once deno.lock exists.
What's the reason for this?
I could imagine many tools profiting from knowing the decompressed file size in advance.
> ISIZE (Input SIZE)
> This contains the size of the original (uncompressed) input data modulo 2^32.
So there are two big caveats:
1. Your data is a single gzip member (I guess this means everything in a folder)
2. Your data is < 2^32 bytes.
However, because of the scale of what Bun deals with, it's on the edge of what I would consider safe, and I hope the real code has a fallback for what happens if the file has multiple members in it, because sooner or later it'll happen.
It's not necessarily terribly well known that you can just slam gzip members (or files) together and it's still a legal gzip stream, but it's something I've made use of in real code, so I know it's happened. You can do some simple things with having indices into a compressed file so you can skip over portions of the compressed stream safely, without other programs having to "know" that's a feature of the file format.
Although the whole thing is weird in general because you can stream gzip'd tars without ever having to allocate space for the whole thing anyhow. gzip can be streamed without having seen the footer yet, and the tar format can be streamed out pretty easily. I've written code for this in Go a couple of times, where I can be quite sure there's no stream rewinding occurring by the nature of the io.Reader system. Reading the whole file into memory to unpack it was never necessary in the first place; not sure if they've got some other reason to do that.
---
def _read_eof(self):
# We've read to the end of the file, so we have to rewind in order
# to reread the 8 bytes containing the CRC and the file size.
# We check that the computed CRC and size of the
# uncompressed data matches the stored values. Note that the size
# stored is the true file size mod 2**32.
---
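To illustrate the multiple-members caveat above, a quick Node/TypeScript check (values chosen arbitrarily for the demo):
---
// Concatenated gzip members form a legal gzip stream, but the trailing ISIZE
// only describes the last member, so "read the last 4 bytes" under-reports
// the total uncompressed size.
import { gzipSync } from "node:zlib";

const a = gzipSync(Buffer.alloc(1000, "a"));
const b = gzipSync(Buffer.alloc(2000, "b"));
const combined = Buffer.concat([a, b]); // still a valid gzip stream

console.log(combined.readUInt32LE(combined.length - 4)); // 2000, not 3000
---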
~/: bun install
error: An unknown error occurred (Unexpected)
Repo: https://github.com/carthage-software/mago
Announcement 9 months ago:
https://www.reddit.com/r/PHP/comments/1h9zh83/announcing_mag...
For now it has three main features: formatting, linting, and fixing lint issues.
I hope they add package management to do what composer does.
That's closer to how pnpm achieves its speedup though. I know 'rv' came out recently, but I haven't tried it.
>Brought to you by Spinel
>Spinel.coop is a collective of Ruby open source maintainers building next-generation developer tooling, like rv, and offering flat-rate, unlimited access to maintainers who come from the core teams of Rails, Hotwire, Bundler, RubyGems, rbenv, and more.
...
> On a 3GHz processor, 1000-1500 cycles is about 500 nanoseconds. This might sound negligibly fast, but modern SSDs can handle over 1 million operations per second. If each operation requires a system call, you're burning 1.5 billion cycles per second just on mode switching.
> Package installation makes thousands of these system calls. Installing React and its dependencies might trigger 50,000+ system calls: that's seconds of CPU time lost to mode switching alone! Not even reading files or installing packages, just switching between user and kernel mode.
Am I missing something, or is this incorrect? They claim 500ns per syscall with 50k syscalls. 500ns * 50,000 = 25 milliseconds. So that is very far from "seconds of CPU time lost to mode switching alone!", right?
Still only about 2 secs, but still.
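Running the numbers from the quoted passage, using the article's own figures (a rough back-of-the-envelope check, nothing more):
---
const perCallNs = 500;   // ~1000-1500 cycles at 3 GHz, per the article
const calls = 50_000;    // "Installing React and its dependencies"
console.log((perCallNs * calls) / 1e6); // 25 ms of mode switching, not seconds

// You'd need ~2 million syscalls to burn a single second on switching alone:
console.log(1e9 / perCallNs); // 2_000_000
---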
When handling merge/pull requests, I'll often do a clean step (removing node_modules and temp files) before a full install and build to test that everything works. I know not everyone else is this diligent, but this can happen several times a day... Automation (usually via Docker) can help a lot, with many things tested through a CI/CD environment; that said, I'm also not a fan of having to wait too long for that process... it's too easy to get side-tracked and off-task. I tend to set alarms/timers throughout the day just so I don't miss meetings. I don't want to take a moment to look at HN, and the next thing I know it's a few hours later. Yeah, that's my problem... but others share it.
So, again, if you can make something take less than 15s that typically takes much more, I'm in favor... I went from eslint to Rome/Biome for similar reasons... I will switch to faster tooling to reduce the risk of going off-task and not getting back.
I am so, so tired of the “who cares if this library eats 100 MiB of RAM; it’s easier” attitude.