I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default.
Here's how to set a global minimum release age of 7 days in each package manager:
~/.config/uv/uv.toml
exclude-newer = "7 days"
~/.npmrc
min-release-age=7 # days
ignore-scripts=true
~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes
~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds
(Side note, it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)

If you're developing with LLM agents, you should also update your AGENTS.md/CLAUDE.md file with some guidance on how to handle failures stemming from this config, as they will cause the agent to unproductively spin its wheels.
First day with javascript?
It also efficiently annoys the most people at once: those who want hours will complain if it's set to days, and those who want days will complain if hours are used. By using minutes or seconds you can wind up both camps while not offending those who rightly don't care, because they can cope with a little arithmetic :)
Though doing what sleep(1) does would be my preference: default to seconds but allow m/h/d to be added to change that.
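For illustration, a minimal sketch of that rule in JavaScript (my own illustrative function, not taken from any package manager):

// Sketch of sleep(1)-style duration parsing: bare numbers mean seconds,
// with optional s/m/h/d suffixes. Illustrative only.
function parseDuration(input) {
  const m = /^(\d+)\s*([smhd]?)$/.exec(input.trim());
  if (!m) throw new Error(`invalid duration: ${input}`);
  const factor = { '': 1, s: 1, m: 60, h: 3600, d: 86400 }[m[2]];
  return Number(m[1]) * factor; // normalized to seconds
}

parseDuration('604800'); // 604800
parseDuration('7d');     // 604800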
I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree, talking about seconds on a monotonic timer is a lot simpler.
Daylight saving time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds instead of the usual 604800. Etc.
I think you're incorrect to say that seconds are also ambiguous. Maybe what you mean is that days are more practical, but that seems very much a personal preference.
True. But seconds are not the base unit for package compromises coming to light. The appropriate unit for that is almost certainly days.
It's just a library for handling time, which 98% of the time your app will already be using for something else.
(Hope your timezones and tzdata correctly identify the Easter bank holidays as non-workdays)
This is javascript, not Java.
In JavaScript something entirely new would be invented, to solve a problem that has long been solved and is documented in 20+ year old books on common design patterns. So we can all copy-paste `{ or: [{ days: 42, months: 2, hours: "DEFAULT", minutes: "IGNORE", seconds: null, timezone: "defer-by-ip" }, { timestamp: 17749453211*1000, unit: "ms"}]` without any clue as to what we are defining.
In Java, a 6000LoC+ ecosystem of classes, abstractions, dependency-injectables and probably a new DSL would be invented so we can all say "over 4 Malaysian workdays"
You can find the patch files for your OSs by registering at Oracle with a J3EE8.4-PatchLibID (note, the older J3EE16-PatchLib-ids aren't compatible), attainable from your regional Oracle account-manager.
A joke should be funny though, not just a dry description of real life, so let's leave it at that. We've already taken it too far.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Do you run automatic dependency updates over the weekend? Wouldn't you rather do that during fully-staffed hours?
from datetime import datetime

new_date = add_workdays(
    workdays=1.5,
    start=datetime.now(),
    regions=["es", "mx", "nl", "us"],
)

That way Han Solo can make sense in the infamous quote.
EDIT: even Gemini gets this wrong:
> In Star Wars, a parsec is a unit of distance, not time, representing approximately 3.26 light-years
They explained it in the Solo movie.
https://www.reddit.com/r/MovieDetails/comments/ah3ptm/solo_a...
> It's a very simple ship, very economical ship, although the modifications he made to it are rather extensive – mostly to the navigation system to get through hyperspace in the shortest possible distance (parsecs).
For Star Wars, they retconned it to mean he found the shortest possible route through dangerous space, so even for Han Solo's quote, it's still distance.
And the chances of staying undetected are higher if nobody is installing until the delay time elapses.
It's the same as not scheduling all cronjobs to midnight.
For anyone wondering, you need to be on npm >= 11.10.0 in order to use it. It just became available Feb 11 2026
I don't think there are great solutions here. Arguably, units should be supported by the config file format, but existing config file formats don't do that.
start_at = 2026-05-27T07:32:00Z # RFC 3339
start_at = 2026-05-27 07:32:00Z # readable
We should extend it with durations:

timeout = PT15S # RFC 3339

And like for datetimes, we should have a readable variant:

timeout = 15s # can omit "P" and "T" if not ambiguous, can use lowercase specifiers

Edit: discussed in detail here: https://github.com/toml-lang/toml/issues/514

I'd argue that it is ideal, in the sense that it's the sweet spot for a general config file format to limit itself to simple, widely reusable building blocks. Supporting more advanced types can get in the way of this.
Programs need their own validation and/or parsing anyway, since correctness depends on program-specific semantics and usually only a subset of the values of a more simply expressed type is valid. That same logic applies across inputs: config may come from files, CLI args, legacy formats, or databases, often in different shapes. A single normalization and validation path simplifies this.
General formats must also work across many languages with different type systems. More complex types introduce more possible representations and therefore trade-offs. Even if a file parser implements them correctly (and consistently with other such parsers), it must choose an internal form that may not match what a program needs, forcing extra, less standard transformation and adding complexity on both sides for little gain.
Because acceptable values are defined by the program, not the file, a general format cannot fully specify them and shouldn’t try. Its role is to be a medium and provide simple, human-usable (for textual formats), widely supported types, avoid forcing unnecessary choices, and get out of the way.
All in all, I think it can be more appropriate for a program to pick a parsing library for a more complex type, than to add one consistently to all parsers of a given file format.
You guys can't appreciate a bad joke
I know 90% of people I've worked with will never know these options exist.
Urgent fix, patch released, invisible to the dev team because they put in a 7-day wait. Now our app is vulnerable for up to 7 days longer than needed (assuming daily deploys; if less often, pad accordingly). Not a great excuse as to why the company shipped an "updated" version of the app with a standing CVE in it: "Sorry, we were blinded to the critical fix because we set an arbitrary local setting to ignore updates until they are 7 days old." I wouldn't fire people over that, but we'd definitely be doing some internal training.
The entire history of malware lol
> Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?
I'm sure they will, but attackers will adapt. And I'm really unconvinced that these delays are really going to help in the real world. Imagine you rely on `popular-dependency` and it gets compromised. You have a cooldown, but I, the attacker, issue "CVE-1234" for `popular-dependency`. If you're at a company you now likely have a compliance obligation to patch that CVE within a strict timeline. I can very, very easily pressure you into this sort of thing.
I'm just unconvinced by the whole idea. It's fine, more time is nice, but it's not a good solution imo.
It's herd immunity, not personal protection. You benefit from the people who DO install immediately and raise the alarm
This became evident, what, perhaps a few years ago? Probably since childhood for some users here but just wondering what the holdup is. Lots of bad press could be avoided, or at least a little.
Which will never even come close to happening, unless npm decides to make it the default, which they won't.
~/.npmrc
min-release-age=7 # days
actually doesn't set it at all, please edit your comment.

EDIT: Actually maybe it does? But it's weird, because `npm config list -l` shows `min-release-age = null` with and without the comment. So who knows ¯\_(ツ)_/¯
From https://pnpm.io/cli/install#--ignore-scripts:
> Default: *false*
> Define a dependency cooldown by specifying a duration instead of an absolute value. Either a "friendly" duration (e.g., 24 hours, 1 week, 30 days) or an ISO 8601 duration (e.g., PT24H, P7D, P30D) can be used.
(Make sure you’re on a version that actually supports relative times, please!)
error: Failed to parse: `.config/uv/uv.toml`
  Caused by: TOML parse error at line 1, column 17
    |
  1 | exclude-newer = "7 days"
    |                 ^^^^^^^^
  failed to parse year in date "7 days": failed to parse "7 da" as year (a four digit integer): invalid digit, expected 0-9 but got
I was on version 0.7.20, so I removed that line, ran "uv self update" and upgraded to 0.11.2 and then re-added the config and it works fine now.
> If project-, user-, and system-level configuration files are found, the settings will be merged, with project-level configuration taking precedence over the user-level configuration, and user-level configuration taking precedence over the system-level configuration.
If your first party tooling contains all the functionality you typically need, it's possible you can be productive with zero 3rd party dependencies. In practice you will tend to have a few, but you won't be vendoring out critical things like HTTP, TCP, JSON, string sanitization, cryptography. These are beacons for attackers. Everything depends on this stuff, so the motivation for attacking these common surfaces is high.
I can literally count on one hand the number of 3rd party dependencies I've used in the last year. Dapper is the only regular thing I can come up with. Sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well. This is a major reason why they're the only sql providers I use.
Maybe I am just so traumatized from compliance and auditing in regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screw drivers stealing all my bitcoin in the middle of the night.
Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.
Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.
I removed the locks from all the doors, now entering/exiting is 87% faster! After removing all the safety equipment, our vehicles have significantly improved in mileage, acceleration and top speed!
Initially I assumed this is sarcastic, but apparently not. UX and performance is what programmers are paid to do! Making sure UX is good is one of the most important things in programmer job.
While security is a moving target, a goal, something that can never be perfect, just "good enough" (if NSA wants to hack you, they will). You make it sound like installing third party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.
Wild to read extreme security views like that, while at the same time there are people here that run unconstrained AI agents with --dangerous-skip-confirm flags and see nothing wrong with it.
The amount of time spent defining the same data structures over and over again vs `pip install requests` with well-defined data structures.
I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.
Yes - the postinstall hook attack vector goes away. You can do SHA pinning since Git's content addressing means that SHA is the hash of the content. But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.
This is basically what Deno's import maps tried to solve, and what they ended up with looked a lot like a package registry again.
At least npm packages have checksums and a registry that can yank things.
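For anyone who hasn't seen it, this is roughly what the "SHAs scattered across import statements" approach looks like with Deno-style URL imports (the repo and commit hash below are made up):

// Hypothetical SHA-pinned URL import -- the pinned commit is repeated
// in every file that uses the dependency, which is the scattered-lockfile
// problem described above.
import { leftPad } from "https://raw.githubusercontent.com/example/left-pad/0b1f3c8d9e2a4b5c6d7e8f9a0b1c2d3e4f5a6b7c/mod.ts";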
Why wouldn't that work well with legacy projects? In fact, the projects I was a part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped them as well. This was in the days before Bower (which was the npm of frontend development back then); vendoring JS libs was standard practice before package managers became used in frontend development too.

Not sure why it wouldn't work, JavaScript is a very moldable language, you can make most things work one way or another :)
Package managers should do the same thing
With Bun I use fewer dependencies from NPM than I used from Nuget with .NET to build minimal APIs. For example the pg driver.
(Leaving aside thoughts on language syntax, compile times, tooling etc - just interested in people's experiences with / thoughts on healthy stdlibs)
Python (decent standard library) - It's pretty much everywhere. There's so many hidden gems in that standard library (difflib, argparse, shlex, subprocess, cmd)
C#/F# (.NET)
C# feels so productive because of how much is available in .NET Core, and F# gets to tag along and get it all for free too. With C# you can compile executables down to bundle the runtime and strip it down so your executables are in the 15 MiB range. If you have dotnet installed, you can run F# as scripts.
Do you worry at all about the future of F#? I've been told it's feeling more and more like a second-class citizen on .NET, but I don't have much personal experience.
The irony in this case is that axios is not really needed now given that fetch is part of the JS std lib.
The problem is not third party libraries. It is updating third party libraries when the version you have still works fine for your needs.
What a great time to be alive! Now, that's exactly why I enjoy writing software with minimal dependencies for myself (and sometimes for my family and friends) in my spare time - first, it's fun, and second, turns out it's more secure.
Also from the report:
> Neither malicious version contains a single line of malicious code inside axios itself. Instead, both inject a fake dependency, plain-crypto-js@4.2.1, a package that is never imported anywhere in the axios source, whose only purpose is to run a postinstall script that deploys a cross-platform remote access trojan (RAT)
Good news for pnpm/bun users who have to manually approve postinstall scripts.
Fetch wasn't added to Node.js as a core API until version 18, and wasn't considered stable until version 21. Axios has been around much longer and was made part of popular frameworks and tutorials, which helps continue to propagate its usage.
These are so much better than the interface fetch offers you, unfortunately.
fetch('https://api.example.com/data', {
  headers: {
    'Authorization': 'Bearer ' + accessToken
  }
})

1- automatically add bearer tokens to requests rather than manually specifying them every single time

2- automatically dispatch some event or function when a 401 response is returned to clear the stale user session and return them to a login page.
There's no reason to repeat this logic in every single place you make an API call.
Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.
Finally, there's some nice mocking utilities for axios for unit testing different responses and error codes.
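For readers who haven't used them, a minimal sketch of points 1 and 2 using axios's standard interceptors API (getAccessToken and redirectToLogin are placeholder names for your app's own session handling):

import axios from 'axios';

const api = axios.create({ baseURL: 'https://api.example.com' });

// 1- attach the bearer token to every request
api.interceptors.request.use((config) => {
  config.headers.Authorization = `Bearer ${getAccessToken()}`;
  return config;
});

// 2- clear the stale session on any 401 response
api.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response && error.response.status === 401) {
      redirectToLogin();
    }
    return Promise.reject(error);
  }
);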
You're either going to copy/paste code everywhere, or you will write your own helper functions and never touch fetch directly. Axios... just works. No need to reinvent anything, and there's a ton of other handy features the GP mentioned as well you may or may not find yourself needing.
const myfetch = async (req, options = {}) => {
  options.headers = options.headers || {};
  options.headers['Authorization'] = token; // token assumed to be in scope
  const res = await fetch(new Request(req, options));
  if (res.status === 401) {
    // do your thing
    throw new Error("oh no");
  }
  return res;
};
Convenience is a thing, but it doesn't require a massive library.

Because it is so few lines, it is much more sensible to have everyone duplicate that little snippet manually than import a library and write interceptors for that...
(Not only because the integration with the library would likely be more lines of code, but also because a library is a significant liability on several levels that must be justified by significant, not minor, recurring savings.)
fetch responses have a .json() method. It's literally the first example in MDN: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...
It's literally easier than not using JSON, because otherwise I have to think about whether I want `response.text()` or `response.body()`.
IMO interceptors are bad. They hide what might get transformed with the API call at the place it is being used.
> Likewise, every response I get is JSON. There's no reason to manually unwrap the response into JSON every time.
This is only true if you're exclusively interfacing with your own backends. Even then, why not just make a helper that unwraps as JSON by default but can be passed an arg to parse as something else?
You can remember this answer for every time you ask the same question again:
"Coz whatever else/builtin was before was annoying enough for common use cases"
Would they not have approved it for earlier versions? But also, wouldn't the chance of automatic approval of the addition be high (for such a widely used project)?
It's also a little context dependent, for example if I was using Axios and I see a prompt to run the plain-crypto-js postinstall script, alarm bells would instantly ring, which would at least make me look up the changelog to see why this is happening.
In most cases I don't even let them run unless something breaks/doesn't work as expected.
For example, esbuild and typescript 7 split binaries for different systems and architectures into separate packages, and rely on your package manager to pull the correct one.
Because axios existed before the builtin fetch, there's a lot of stackoverflow answers explaining how to use axios, and the LLM models are trained on those, so they will write axios requests instead of fetch.
The solution to this is twofold, and is already implemented in the primary ecosystems being targeted (Python and JS): packagers should use Trusted Publishing to eliminate the need for long lived release credentials, and downstreams should use cooldowns to give security researchers time to identify and quarantine attacks.
(Security is a moving target, and neither of these techniques is going to work indefinitely without new techniques added to the mix. But they would be effective against the current problems we’re seeing.)
Since the attacker had full control of the NPM account, it is game over - the attacker can login to NPM and could, if they wanted, configure Trusted Publishing on any repo they control.
Axios IS using trusted publishing, but that didn't do anything to prevent the attack since the entire NPM account was taken over and config can be modified to allow publishing using a token.
Scaling security with the popularity of a repo does seem like a good idea.
(The classic example being passwords: we wouldn't need MFA if everybody just "got good" and used strong/unique passwords everywhere. But that's manifestly unrealistic, so instead we use our discipline budget on getting people to use password managers and phishing-resistant MFA.)
MFA is typically enforced by organizations, forcing discipline. Individual usage of MFA is dramatically lower.
I just wish it had more human interaction rather than have a GenAI spit out the blog post. It's very repetitive and includes several EM dashes.
Wouldn’t that just encourage the bad actors to delay the activation of their payloads a few days or even remotely activated on a switch?
I have bwrap configured to override: npm, pip, cargo, mvn, gradle, everything you can think of and I only give it the access it needs, strip anything that is useless to it anyway, deny dbus, sockets, everything. SSH is forwarded via socket (ssh-add).
This limits the blast radius to your CWD and package manager caches and often won't even work since the malware usually expects some things to be available which are not in a permissionless sandbox.
You can think of it as running a docker container, but without the requirement of having to have an image. It is the same thing flatpak is based on.
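For a concrete picture, a rough sketch of what such a wrapper can look like (illustrative flags only, not a hardened or complete configuration; paths and options will differ per distro):

#!/bin/sh
# Hypothetical wrapper placed on PATH ahead of the real npm. The blast
# radius is limited to $PWD and the npm cache; SSH is reachable only
# through the forwarded agent socket, never the key files themselves.
exec bwrap \
  --unshare-all --share-net \
  --die-with-parent \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --ro-bind /etc/ssl /etc/ssl \
  --proc /proc --dev /dev --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --bind "$HOME/.npm" "$HOME/.npm" \
  --bind "$SSH_AUTH_SOCK" "$SSH_AUTH_SOCK" \
  --setenv SSH_AUTH_SOCK "$SSH_AUTH_SOCK" \
  /usr/bin/npm "$@"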
As for server deployments, container hardening is your friend. Most supply chain attacks target build scripts so as long as you treat your CI/CD as an untrusted environment you should be good - there's quite a few resources on this so won't go into detail.
Bonus points: use the same sandbox for AI.
Stay safe out there.
As a higher-level alternative to bwrap, I sometimes use `flatpak run --filesystem=$PWD --command=bash org.freedesktop.Platform`. This is kind of an abuse of flatpaks but works just fine to make a sandbox. And unlike bwrap, it has sane defaults (no extra permissions, not even network, though it does allow xdg-desktop-portal).
Maybe I misunderstood this point. But the ssh socket also gives access to your private keys, so I see no security gain in that point. Better to have a password protected key.
Axios has like 100M downloads per week. A couple of people with MFA should have to approve changes before it gets published.
GBs of libre software, graphical install, 2.6 kernel, KDE3 desktop, very light on my Athlon 2000 with 256MB of RAM. It was incredible compared to what you got with Windows XP at 120 Euro per seat: nonfree software, and almost empty.

And, well, if for instance I could get a read-only, ~16TB durable USB drive with tons of Guix packages offline (on a two-year basis with stable releases) for $200, I would buy it on the spot.

You would say that $200 for a distro is expensive, but for what it provides, if you are only interested in libre gaming and tools, the amount you save can be huge. I've seen people spend $400 on Steam games because of the holiday sales...
It's what we call in France "la fête du slip" (roughly: a free-for-all).
PS: that's one reason I try to use git submodules in my Common Lisp projects instead of QuickLisp, because I really see the size of my deptree this way.
At work, we're happy with Python's included batteries when we need to make scripts instead of large programs.
Yeah, pretty bad idea.
- copy the dependencies' tests into your own tests
- copy the code in to your codebase as a library using the same review process you would for code from your own team
- treat updates to the library in the same way you would for updates to your own code
Apparently, this extra work will now not be a problem, because we have AI making us 10x more efficient. To be honest, even without AI, we should've been doing this from the start, even if I understand why we haven't. The excuses are starting to wear thin though.
This needs to be done (as we've seen from these recent attacks) in your devenv, ci/cd and prod environments. Not one, or two, but all of these environments.
The easiest way is via using something like kubernetes network policies + a squid proxy to allow limited trusted domains through, and those domains must not be publicly controllable by attackers. ie. github.com is not safe to allow, but raw.githubusercontent.com would be as it doesn't allow data to be submitted to it.
Network firewalls that perform SSL interception and restrict DNS queries are an option also, though more complicated or expensive than the above.
This stops both DNS exfil and HTTP exfil. For your devenv, software like Little Snitch may protect you from these (I'm not 100% on DNS exfil here though). Otherwise run your devenv (ie vscode) as a web server, or containerised + vnc, a VM, etc, with the same restrictions.
Getting zero-day patches 7 days later if there's no proper monitoring of important patches, or if this specific patch is not on the important list. Always a tradeoff.
Zero deps. One file. Already detects the hijacked maintainer email on the current safe version.
github.com/nfodor/npm-supply-chain-audit
We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.
The issue is not dependencies themselves, it's transitive ones. Nobody installs left-pad or is-even-number directly, and "libraries" like these are the vast majority of the attack surface. If you get rid of transitive dependencies, you get rid of the need of a package manager, as installing a package becomes unzipping a few files into a vendor/ folder.
There's so many C libraries like this. Off the top of my head, SQLite, FreeType, OpenSSL, libcurl, libpng/jpeg, stb everything, zlib, lua, SDL, GLFW... I do game development so I'm most familiar with the ones commonly used in game engines, but I'm sure other fields have similarly high quality C libraries.
They also have bindings for every language under the sun. Rust libraries are very rarely used outside of Rust, and C#/Java/JS/Python libraries are never used outside their respective language (aside from Java ones in other JVM langs).
1. Packages should carry a manifest that declares what they do at build time, just like Chrome extensions do. This manifest would then be used to configure its build environment (a hypothetical sketch follows after this list).
2. Publishers to official registries should be forced to use 2FA. I proposed this a decade ago for crates.io and people lost their minds, like I was suggesting we drag developers to a shed to be shot.
3. Every package registry should produce a detailed audit log that contains a "who, what, when". Every build/ command should be producing audit logs that can be collected by endpoint agents too.
4. Every package registry should support TUF.
5. Typosquatting defenses should be standard.
etc etc etc. Some of this is hard, some of this is not hard. All of this is possible. No one has done it, so it's way too early to say "package managers can't be made safe" when no one has tried.
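For illustration, a purely hypothetical capability manifest along the lines of point 1 (none of these fields exist in npm today; all names are made up):

{
  "name": "example-package",
  "build": {
    "postinstall": false,
    "network": [],
    "filesystemWrite": ["./node_modules/example-package"]
  }
}

An installer could then refuse to run anything not declared here, the way Chrome refuses extensions permissions they didn't request.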
What is a problem is library quality. Which is downstream of nobody getting paid for it, combined with an optimistic but unrealistic "all packages are equal" philosophy.
> High quality C libraries
> OpenSSL
OpenSSL is one of the ones where there's a ground up rewrite happening because the code quality is so terrible while being security critical.
On the other end, javascript is uniquely bad because of the deployment model and difficulty of adding things to the standard library, so everything is littered with polyfills.
Absolute nonsense. What does "automated world" even mean? Even if one could infer it reasonably, it's no justification. Appealing to "the real world" in lieu of any further consideration is exactly the kind of mindlessness that has led to the present state of affairs.
Automation of dependency versions was never something we needed it was always a convenience, and even that's a stretch given that dependency hell is abundant in all of these systems, and now we have supply chain attacks. While everyone is welcome to do as they please, I'm going to stick to vendoring my dependencies, statically compiling, and not blindly trusting code I haven't seen before.
How do you handle updating dependencies then?
People are trying to automate the act of programming itself, with AI, let alone all the bits and pieces of build processes and maintenance.
I'm not sure why you believe this is more secure than a package manager. At least with a package manager there is an opportunity for vetting. It's also trivially false that it didn't increase your executable's size: if your executable depends on it, it increases its effective size.
This is what happens when there is no barrier to entry and everyone who has no idea what they are doing ends up in charge of the NPM community.
When you see a single package having +25 dependencies, that is a bad practice and increases the risk of supply chain attacks.
Most of them don't even pin their dependencies and I called this out just yesterday on OneCLI. [0]
It just happens that NPM is the worst out of all of the rest of the ecosystems due to the above.
If no one checks their dependencies, the solution is to centralize this responsibility at the package repository. Something like left-pad should simply not be admitted to npm. Enforce a set of stricter rules which only allow non-trivial packages maintained by someone who is clearly accountable.
Another change one could make is develop bigger standard libraries with all the utilities which are useful. For example in Rust there are a few de facto standard packages one needs very often, which then also force you to pull in a bunch of transitive dependencies. Those could also be part of the standard library.
This all amounts to increasing the minimal scope of useful functionality a package has to have to be admitted and increasing accountability of the people maintaining them. This obviously comes with more effort on the maintainers part, but hey maybe we could even pay them for their labor.
But maybe that's not the right fit either. The world where package managers are just open to whatever needs to die. It's no longer a safe model.
That model effectively becomes your ring 1. Ring 0 is the stdlib and the package manager itself, and - because you would always need to be able to step outside the distribution for either freshness or "that's not been picked up by the distro yet" reasons - the ecosystem package repositories are the wild west ring 2.
In the language ecosystems I'm only aware of Quicklisp/Ultralisp and Haskell's Stackage that work like this. Everything else is effectively a rolling distro that hasn't realised that's what it is yet.
You are just swapping a package manager for security by obscurity by copy-pasting code into your project. It is arguably a much worse way of handling supply chain security, as now there is no way to audit your dependencies.
> If you get rid of transitive dependencies, you get rid of the need of a package manager
This argument makes no sense. Obviously reducing the amount of transitive dependencies is almost always a good thing, but it doesn't change the fundamental benefits of a package manager.
> There's so many C libraries like this
The language with the most fundamental and dangerous ways of handling memory, the language that is constantly in the news for numerous security problems even in massively popular libraries such as OpenSSL? Yes, definitely copy-paste that code in, surely nothing can go wrong.
> They also have bindings for every language under the sun. Rust libraries are very rarely used outside of Rust

This is a WILD assumption; doing C-style bindings is actually quite common. You will of course then also be exposing a memory-unsafe interface, as that is what you get with C.

What exactly is your argument here? It feels like what you are trying to say is that we should just stop doing JS and instead all make C programs that copy-paste massive libraries, because that is somehow 'high quality'.
This seems like a massively uninformed, one-sided and frankly ridiculous take.
You should try writing code, and not relying on libraries for everything; it may change how you look at programming and actually ground your opinions in reality. I'm staring at my company's vendor/ folder. It has ~15 libraries, all but one of which operate on trusted input (game assets).
> fundamental benefits of a package manager.
I literally told you why they don't matter if you write code in a sane way.
> doing C-style bindings is actually quite common
I know bindings for Rust libraries exist. Read the literal words you quoted. "Rust libraries are very rarely used outside of Rust". Got some counterexamples?
Some pm's are badly maintained (Pip/NPM), while others are curated enough.
Again, if you have GNU/Linux installed, install Guix, read the Info manual on 'guix import' and just create a shell/container with 'guix shell --container' (and a manifest created from guix import) and use any crap you need from NPM in a reproducible and isolated way. Your $HOME will be safe, for sure.
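A minimal sketch of the one-off variant (the flags are real Guix options; the exact packages are up to you):

# an isolated container with network access and only node (npm included)
# inside; nothing outside the current directory is visible or writable
guix shell --container --network node -- npm install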
You got a project with 1-2 dependencies? Sure. But if you need to bring in 100 different libs (because you bring in 10 libs which in turn bring in 10 libs each), good luck.
So don’t?
With manual deps management, everyone soon gravitates to a core set of deps. And library developers tend to reduce their deps needs. That's why you see most C libraries deal with file formats, protocols, and broad concerns. Smaller algorithms can be shared via gists and blog articles.
It’s things like this that make me want to swap to Qubes permanently, simply as to not have my password manager in the same context as compiling software ever.
Updating packages takes longer, but we try to keep packages to a minimum so it ends up not being that big deal.
I like to think of it like working with dangerous chemicals in the lab. Back in the day, people were sloppy and eventually got cancer. Then the dangers were recognized, and PPE was developed and became a requirement.

We are now at the stage in software development where we are beginning to recognize the hazards and develop + mandate use of proper PPE.
A couple of years ago, pip started refusing to install packages outside of a virtualenv. I'm guessing/hoping package managers will start to have an opt-in flag you can set in a system-wide config file, such that they refuse to run outside of a sandbox.
Like congratulations, your dev was compromised a whole 10 minutes after he ran the code.
Also, semantic versioning is not some golden goose that fixes this issue, update embargoes help, but that doesn’t require semver. Vendoring dependencies is not a scalable solution for all the software people use.
> semantic versioning is not some golden goose that fixes this issue
Nothing is a golden goose, however semver is designed to limit the scope of incoming changes so you have a chance of staying on top.
> Vendoring dependencies is not a scalable solution for all the software people use.
There are literally three ways to deal with these supply chain issues:
1. Allocate the bandwidth yourself
2. Buy that bandwidth
3. Yolo
Dealing with dependencies is another question; if it's stupid stuff like leftpad then it should be either vendored in or promoted to be a language feature anyway (as it has been).
I kind of feel like the authors here should want that for themselves, before the community would even realize it's needed. I can't say I've worked on packages that are as popular as axios, but once some packages we were publishing hit 10K downloads or so, we all agreed that we needed to up our security posture, and we all got hardware keys for 2FA and spent 1-2 weeks on making sure it was as bullet-proof we could make it.
To be fair, most FOSS is developed by volunteers so I understand not wanting to spend any money on something you provide for free, but on the other hand, I personally wouldn't feel comfortable being responsible for something that popular without hardening my own setup as much as I could, even if it means stopping everything for a week.
Also, considering how prevalent TPM/Secure Enclaves are on modern devices, I would guess most package maintainers already have hardware capable of generating/using signing keys that never leave hardware.
I think it is mostly a devex/workflow question.
Considering the recent ci/cd-pipeline compromises, I think it would make sense to make a two phase commit process required for popular packages. Build and upload to the registry from a pipeline, but require a signature from a hardware resident key before making the package available.
Even left-pad is still getting 1.6 million weekly downloads.
Maybe not all users should pull all packages straight from what devs are pushing.
There's no reason we can't have "node package distributions" like we have Linux distributions. Maybe we should stop expecting devs and maintainers and Microsoft to take responsibility for our supply-chain.
There's no community, the users of axios are devs that looked at stackoverflow for "how to download a file in javascript", they barely know or care what axios is.
Now the users of axios are devs that ask Claude Code or Codex to scrape a website or make a dashboard, they don't even know about the word axios.
I personally had to delete axios a couple of times from my codebase when working with junior devs.
My feelings precisely. Min package age (supported in uv and all JS package managers) is nice but I still feel extremely hesitant to upgrade my deps or start a new project at the moment.
I don’t think this is going to stabilize any time soon, so figuring out how to handle potentially compromised deps is something we will all need to think about.
https://github.com/npm/cli/pull/8965
https://github.com/npm/cli/issues/8994
It's good that they finally got there, but...
I would be avoiding npm itself on principle in the JS ecosystem. Use a package manager that has a history of actually caring about these issues in a timely manner.
(Of course I could still get bitten if one of the packages I trust has its postinstall script replaced.)
So this and litellm one would’ve been preventable by proper config of OIDC Trusted Publishers.
You should probably set your default to not run those scripts. They are mostly unnecessary.
~/.npmrc :
ignore-scripts=true
83M weekly downloads!

— Run Yarn in zero-installs mode (or equivalent for your package manager). Every new or changed dependency gets checked in.
— Disable post-install scripts. If you don’t, at least make sure your package manager prompts for scripts during install, in which case you stop and look at what it’s going to run.
— If third-party code runs in development, including post-install scripts, try your best to make sure it happens in a VM/container.
— Vet every package you add. Popularity is a plus, recent commit time is a minus: if you have this but not that, keep your eyes peeled. Skim through the code on NPM (they will probably never stop labelling it as “beta”), commit history and changelog.
— Vet its dependency tree. Dependencies is a vector for attack on you and your users, and any new developer in the tree is another person you’re trusting to not be malicious and to take all of the above measures, too.
Idk, lockfiles provide almost as good protection without putting the binaries in git. At least with `--frozen-lockfile` option.
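For reference, the lockfile-enforcing install commands in the major package managers (flags as of current versions):

npm ci                           # fails if package-lock.json is out of sync
pnpm install --frozen-lockfile
yarn install --immutable         # Yarn 2+
bun install --frozen-lockfile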
However, it’s an extra line of defence against
1) your registry being down (preventing you from pushing a security hotfix when you find out another package compromised your product),
2) package unpublishing attacks (your install step fails or asks you to pick a replacement version, what do you do at 5pm on a Friday?), and
3) possibly (but haven’t looked in depth) lockfile poisoning attacks, by making them more complicated.
Also, it makes the size of your dependency graph (or changes therein) much more tangible and obvious, compared to some lines in a lockfile.
Most npm CVEs are stuff like DDoS vulnerabilities, and you should have mitigations for those in place at the infra level anyway (e.g. request timeouts, rate limits, etc), or you are pretty much guaranteed to be cooked sooner or later anyway. The really dangerous stuff, like arbitrary command execution from a library that takes end user input, is much much more rare. The most recent big one I remember is React2shell.
Number 2 hasn't been much of an issue for a long time. npm doesn't allow unpublishing package after 72 hours (apart from under certain rare conditions).
Don't know about number 3. It would feel to me that if you have something running that can modify the lockfile, it can probably also modify the checked-in tars.
I can see how zero-installs are useful under some specific constraints where you want to minimize dependencies to external services, e.g. when your CI runs under strict firewalls. But for most, nah, not worth it.
Which dependency? It sounds like you are assuming some specific scenario, whereas the fix can take many forms. In immediate term, the quickest step could be to simply disable some feature. A later step may be vendoring in a safe implementation.
The registry doesn’t need to be actually down for you, either; the necessary condition is that your CI infrastructure can’t reach it.
> cases where npm CVEs must be patched with such urgency or bad things will happen are luckily very rare, in my experience.
Not sure what you mean by “npm CVEs”. The registry? The CLI tool?
As I wrote, if you are running compromised software in production, you want to fix it ASAP. In the first moments you may not even know whether bad things will happen or not, just that you are shipping malicious code to your users. Even if you are lucky enough to determine with 100% confidence (putting your job on the line) that the compromise is inconsequential, you don’t want to keep shipping that code for another hour because your install step fails due to a random CI infra hiccup making the registry inaccessible (as happened in my experience at least half a dozen times in years prior, though luckily not in a circumstance where someone attempted to push an urgent security fix). Now imagine it’s not a random hiccup but part of a coordinated targeted attack, and somehow it becomes something anticipated.
> Number 2 hasn't been much of an issue for a long time. npm doesn't allow unpublishing package after 72 hours (apart from under certain rare conditions).
Those rare conditions exist. Also, you are making it sound as if the registry is infallible, and no humans and/or LLMs there accept untrusted input from their environment.
The key aspect of modern package managers, when used correctly, is that even when the registry is compromised you are fine as long as integrity check crypto holds up and you hold on to your pre-compromise dependency tree. The latter is not a technical problem but a human problem, because conditions can be engineered in which something may slip past your eyes. If this slip-up can be avoided at little to no cost—in fact, with benefits, since zero-installs shortens CI times, and therefore time-to-fix, due to dramatically shorter or fully eliminated install step—it should be a complete no-brainer.
> Don't know about number 3. It would feel to me that if you have something running that can modify the lockfile, it can probably also modify the checked-in tars.
As I wrote, I suspect it’d complicate such attacks or make them easier to spot, not make them impossible.
A much better approach would be to pin the versions used and do intentional updates some time after release, say a sprint after.
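npm can do the pinning part for you via its save-exact setting, which makes installs record exact versions instead of ^ranges:

# ~/.npmrc
save-exact=true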
I wrongly thought that the verified provenance UI showed that a package has a trusted publishing pipeline, but it seems that's orthogonal.
NPM really needs to move away from these secrets that can be stolen.
As long as you don't update your pins during an active supply chain attack, the risk surface is rather low.
All it takes is an `npm config set` to switch registries anyways. The hard part is having a central party that is able to convince all the various security companies to collaborate rather than having dozens of different registries each from each company.
Rather than just a hard-coded delay, I think having policies on what checks must pass first makes sense with overrides for when CVEs show up.
(WIP)
# fish
find / -path '*/node_modules/axios/package.json' -type f 2>/dev/null | while read -l f
    set -l v (grep -oP '"version"\s*:\s*"\K(1\.14\.1|0\.30\.4)' $f 2>/dev/null)
    if test -n "$v"
        printf '\a\n\033[1;31m FOUND v%s\033[0m \033[1;33m%s\033[0m\n' $v (string replace '/package.json' '' -- $f)
    else
        printf '\r\033[2m scanning: %s\033[K\033[0m' (string sub -l 70 -- $f)
    end
end
printf '\r\033[K\n\033[1;32m scan complete\033[0m\n'

# or, as a portable one-liner:
find / -type f -path '*/node_modules/axios/package.json' \
    -exec grep -Pl '"version"\s*:\s*"(1\.14\.1|0\.30\.4)"' {} + 2>/dev/null
Let’s not encourage people to respond to security incidents by… copy/pasting random commands they don’t understand.

The anti-forensics here are much more complicated than I had imagined. Sharing after getting my hands burned.
After the RAT deploys, setup.js deletes itself and swaps package.json with a clean stub. Your node_modules looks fine. The only way to know is to check for artifacts: /Library/Caches/com.apple.act.mond on mac, %PROGRAMDATA%\wt.exe on windows, /tmp/ld.py on linux. Or grep network logs for sfrclak.com.
Somehow nobody is worried about how agentic coding tools run npm install autonomously. No human in the loop to notice a weird new transitive dep. That attack surface is just getting worse day by day.
Project: https://point-wild.github.io/who-touched-my-packages/
Website: https://asfaload.com/
Now, I tend to use Python, Rust and Julia. With Python I am constantly using few same packages like numpy and matplotlib. With Rust and Julia, I try as much as possible to not use any packages at all, because it always scares me when something that should be pretty simple downloads half of the Internet to my PC.
Julia is even worse than Rust in that regard - for even rudimentary stuff like static arrays or properly namespaced enums people download 3rd party packages.
Given how HTTP is now what TCP was during the 90s, with almost all modern networked applications needing to communicate over it one way or another, most Rust projects come with an inherent security risk.
These days, I score the usability of programming languages by how complete their standard library is. By that measure, Rust and Javascript get an automatic F.
That way, I can at least limit the blast radius when (not if) I catch an infostealer.
Edit: bottom line is installs are gonna get SOOO much more complicated. You can already see the solution surface... Cooling periods, maintainer profiling, sandbox detonation, lockfile diffing, weird publish path checks. All adds up to one giant PITA for fast easy dev.
This might have taken a lot longer to discover, otherwise.
This is why corporations doing it right don't allow installing the Internet into dev machines.
Yet everyone gets to throw their joke about PC viruses, while having learnt nothing from them.
And with LLMs generating more and more code, the risk of copying old setups increases.
People are lazy. And sometimes they find old stuff via a google search and use that.
Have a branch called testing, and packages stay in testing for a few weeks, after which they go to stable. That is how many Linux distributions handle packages. It would have prevented many of these.
Advising every user of npm/pnpm to change their settings and set their own cooldown periods is not a real choice.
Besides there's always a way to immediately push a new version to stable repositories. You have to in order to deal with regressions and security fixes.
Most of the supply chain vulnerabilities that ended up in the NPM would have been mitigated with having mandatory testing / stable branches, of course there needs to be some sort of way to skip the testing but that would be rather rare and cumbersome and audited, like it is in Linux distributions too.
Just to note, if we're talking about Linux Distributions. There's also COPR in Fedora, OBS for OpenSUSE (and a bunch of other stuff, OBS is awesome), Ubuntu has PPAs. And I am sure there's many more similar solutions.
https://github.com/IsaacGemal/claude-skills
It's a bit janky right now but I'd be interested to hear what people think about it.
https://knowyourmeme.com/memes/stop-trying-to-make-fetch-hap...
Doesn’t npm mandate 2FA as of some time last year? How was that bypassed?
Storing your sensitive data on a single bare-metal OS that constantly downloads and runs packages from unknown maintainers is like handing your house key out to a million people and hoping none of them misuse it.
The security solution I have in mind is one where things need to become simpler: getting rid of the attack surface that comes out of these bloated releases.
WTF!!!! Gaslighting your victims into believing they are not victims. The ingenuity of this is truly mindblowing. I am shocked that such a thing is even allowed. Packages should not be able to modify their contents while they are being installed.
Learn about 'guix import'.
Oh, and you can install Guix on any GNU/Linux distro.
> The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code. Its sole purpose is to execute a postinstall script that acts as a cross platform remote access trojan (RAT) dropper, targeting macOS, Windows, and Linux. The dropper contacts a live command and control server and delivers platform specific second stage payloads. After execution, the malware deletes itself and replaces its own package.json with a clean version to evade forensic detection.
I strongly recommend you read the entire article.
Stop downloading code from the internet unless it's a major strategic decision.
JavaScript, its entire ecosystem is just a house of cards, I swear. What a fucking joke.
It won't stop all attacks but definitely would stop some of these
Looks like the maintainer wasn't just careless when reviewing PRs.
Makes actual security patches tougher to roll out though - you need to be vigilant to bypass the slowdown when you’re actually fixing a critical flaw. But nobody said this would be easy!
Yeah. 7 days in 2026 is a LONG TIME for security patches, especially for anything public facing.
Stuck between a rock (dependency compromise) and a hard place (legitimate security vulnerabilities).
Doesn't seem like a viable long-term solution.
But tell dependabot to delay a week, and you'd sleep easy through this nonsense.
So unless you’re saying the extra time will be spent inspecting every package, whenever you do update, you will be getting an insecure package.
You’re not safe by dodging axios. There are currently thousands of breached packages ready to install that aren’t notable.
“I’ll run npm install after checking twitter” won’t help
Sure, it's convenient to have so much code to use for basic functionality - but the technical debt of having to maintain these projects is just too damn high.
At this point I think that, if I am forced to use javascript or node for a project, I reconsider involvement in that project. Its ecosystem is just so bonkers I can't justify the effort much longer.
There has to be some kind of "code-review-as-a-service" that can be turned on here to catch these things. It's just so unproductive, every single time.
Hey, I have been part of the archival effort / LiteLLM issue thread. I think I have stored them in archive.org for preservation purposes.
https://web.archive.org/web/20260325073027/https://files.pyt...
(I have also made an archive of the github issue with all the comments manually till a certain point at https://web.archive.org/web/20260325054202/https://serjaimel...)
You could use Trivy! /s
I think that Jason might like it if someone from the GitHub team could contact them as soon as possible.
(8 minutes ago at the time of writing)
Now we have a 20MB main.min.js problem
Linux has the most powerful native process isolation arsenal at the user's disposal.
And some distros use even more isolation mechanisms on top of the ones provided by the kernel like snap and flatpak.
And then you can recreate the entire thing like a spellbook with nix.
Docker works natively in it. Do I need to say more?
Linux is a decade ahead here with regards for security options available to the user.
In fact it even gives the user more security tools.
So I fail to see the reason for singling out Linux here.
A more apt comparison is vs Windows and macOS.
And Linux offer more than these two with regards to security.
I have not investigated the shell script but DO NOT RUN shell scripts posted to Hacker News, especially by bot accounts!
Does cargo contain any mitigations to prevent a similar attack?
Now hopefully no distro signing keys have been compromised in the latest attacks...
https://blog.rust-lang.org/2022/05/10/malicious-crate-rustde...
https://en.wikipedia.org/wiki/Log4Shell
https://blog.pypi.org/posts/2024-12-11-ultralytics-attack-an...
https://about.gitlab.com/blog/gitlab-catches-mongodb-go-modu...
https://www.reversinglabs.com/blog/packagist-php-repo-supply...
Maven to this day represents my ideal of package distribution. Immutable versions save so much trouble and I really don't understand why, in the age of left-pad, other people looked at that and said, "nah, I'm good with this."
Most package.jsons I see have semver operators on every dependency, so patches spread incredibly quickly. Package namespacing is not enforced, so there is no way of knowing who the maintainer is without looking it up on the registry first; for this reason many of the most popular packages are basically side projects maintained by a single developer*. Post-install scripts are enabled by default unless you use pnpm or bun.
When you combine all these factors, you get the absolute disaster of an ecosystem that NPM is.
*Not really the case for Axios as they are at least somewhat organized and financed via sponsors.
Forest > Trees
In package managers like pacman, apt, apk, ... it's easier to catch such issues. They do have postinstall scripts, but they're part of the submission to the repo, not part of the project. Whatever comes from the project is hashed, and that hash is also visible as part of the submission. That makes it a bit more difficult to sneak something in. You don't push a change; they pull yours.
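Arch's PKGBUILDs are a good concrete example: the upstream tarball's hash lives in the packaging repo itself, so upstream can't silently swap contents (the URL and hash below are made up for illustration):

# excerpt from a hypothetical PKGBUILD; makepkg aborts if the download
# doesn't match the recorded hash
source=("https://example.com/foo-1.2.3.tar.gz")
sha256sums=('3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b')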
I looked at the Rust one for example, which is literally just a malicious crate someone uploaded with a similar name as a popular one:
> The crate had less than 500 downloads since its first release on 2022-03-25, and no crates on the crates.io registry depended on it.
Compared to Axios, which gets 83 million downloads and was directly compromised.
What an extremely disingenuous argument lol
The issues have everything to do with npm as a platform and nothing to do with JS as a language. You can use JS without npm. Saying you'll escape supply chain attacks by not using JS is like saying you'll be saved from a car crash by a parachute.