What's concerning is the 6-month window. Supply chain attacks are difficult to detect because the malicious code runs with full user permissions from a "trusted" source. Most endpoint protection isn't designed to flag software from a legitimate publisher's update infrastructure.
For organizations, this argues for staged rollouts and network monitoring for unexpected outbound connections from common applications. For individuals, package managers with cryptographic verification at least add another barrier - though obviously not bulletproof either.
The crappy installation and update channels are often tightly integrated with the vendors' monetization strategies, so there's a huge amount of inertia.
Microsoft Store could have changed this situation, had it been better designed and better received. Unfortunately, nobody seems to use it unless they have no other choice.
WinGet looks much better, but so far it's only for developers and power users.
I can't say those things would have guaranteed that people liked it, just that they were needed for it to have a chance.
And if they have prevention mechanisms, why can't existing supply chains be secured with similar prevention mechanisms, instead of funneling to a single package manager provider?
Surely someone with more resources and more sets of eyes could do better than that? AFAIK nobody has compromised Debian's APT repositories and Red Hat's RPM repositories yet.
What happened to just good old OS APIs? You could wrap the entire "secure update" process into a function call. Does Windows somehow not already have this?
The Store uses that behind the scenes. You don't have to use the Store to use the system update mechanism.
It's particularly good because updates can happen in the background, without having to launch your app to trigger them.
The problem is finding and installing new software. Without a well-known official repository, people end up downloading Windows apps from random websites filled with ads and five different "Download" buttons, bundled with everything from McAfee to Adobe Reader.
We should be asking how to enable adding external sources like Ubuntu PPAs (which can then be updated like the rest), not whether there should be an official repository to bootstrap the package manager in the first place. "Store" is just a typical name for such a repository, it's not mandatory.
If one day, maybe in 10 or 20 years' time, I feel Notepad++ lacks something and I decide to upgrade, I will do it myself; I don't need a handy helper.
There is no reason for a tool to implicitly access my mounted cloud drive directory and browser cookies data.
Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end user applications, then it will just end up entirely centralised in one of the big tech companies, as it already is on iOS and macOS by Apple.
Think about it from a real world perspective.
I knock on your door. You invite me to sit with you in your living room. I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I leave your house.
The same should happen with apps.
When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.
Asking continuously is worse than not asking at all…
But fine, lock Windows down for normal users, as long as I can still disable all the security. We don't need another Apple.
What happens if the user presses ^O, expecting a file open dialog that could navigate to other directories? Would the dialog be somehow integrated to the OS and run with higher permissions, and then notepad is given permissions to the other directory that the user selects?
There’s also a similar photos picker (PHPicker) which is especially good from 2023 on. Signal uses this for instance.
Because security people often don't know the balance between security and usability, and we end up with software that is crippled and annoying to use.
For FreeBSD there is Capsicum, but it seems a bit inflexible to me. I'd love to see more experiments on Linux and the BSDs for this.
I had been thinking of a way to avoid the CloudABI launcher. The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths. I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.
However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)
Redox is also moving towards having capabilities mapped to fd's, somewhat like Capsicum. Their recent presentation at FOSDEM: https://fosdem.org/2026/schedule/event/KSK9RB-capability-bas...
A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.
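For example, the classic Unix version of this model is visible directly from the shell (illustrative file name; `stat -c` is the GNU coreutils form):

```shell
# Restrict a file to its owning user: read+write for the owner,
# nothing for group members or anyone else.
touch notes.txt
chmod 600 notes.txt
stat -c '%A' notes.txt    # prints: -rw-------

# Widen it slightly: let the file's group read it too.
chmod 640 notes.txt
stat -c '%A' notes.txt    # prints: -rw-r-----
```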
But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model, which just involves talking about what a process can do (the verbs of the system), a capability involves talking about what a process can do an operation on (the nouns of the system). A "capability" is an unforgeable cryptographic token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.
Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are cryptographically unforgeable, the only way that a process could have possibly gotten the permission to operate on a resource is if it were delegated that permission by some other process. And when delegating, processes can further lock down a capability, e.g. by turning it from read/write to read-only, or they can e.g. completely give up a capability and pass ownership to the other process, etc.
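Unix file descriptors already behave a little like this, which is why Capsicum builds on them. A minimal shell sketch (illustrative file name): the parent has full path-based access, but delegates only a read handle to a child, and closing the descriptor revokes it.

```shell
# The parent shell has full read/write access to secret.txt...
echo "top secret" > secret.txt
chmod 600 secret.txt

# ...but delegates only a *read* handle (fd 3) to the child process.
# The child inherits fd 3; it never needs path-based permission.
exec 3< secret.txt
cat <&3            # prints: top secret

# Revocation: close the descriptor and the delegated access is gone.
exec 3<&-
```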
So when it's all said and done, I do not expect practical levels of actual isolation to be that great.
The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.
It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.
You can use the underlying sandboxing with bwrap. A good alternative is firejail. They are quite easy to use.
I prefer to centralize package management to my distro, but I value their sandboxing efforts.
Personally, I think it's time to take sandboxing seriously. Supply chain attacks keep happening. Defense in depth is the way.
Not sure how something can be called a sandbox without the actual box part. As Siri is to AI, Flatpak is to sandboxes.
Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.
As for plain development, sadly, the OS developers are simply ignoring the people asking. See:
https://github.com/containers/toolbox/issues/183
https://github.com/containers/toolbox/issues/348
https://github.com/containers/toolbox/issues/1470
I'll leave it up to you to speculate why.
Perhaps getting a bit of a black eye and some negative attention from the Great Orange Website(tm) can light a fire under some folks.
Sure, in theory, SELinux could prevent this. But seems like an uphill battle if my policies conflict with the distro’s. I’d also have to “absorb” their policies’ mental model first…
There is no such thing as computer security, in general, at this point in history.
Indeed. Why lock your car door when anyone can unlock it and steal it by learning lock-picking?
That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.
Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?
It's still worth solving one of these problems.
Linux has this capability, of course. And it seems like macOS prompts me a lot for "such and such application wants to access this or that". But I think it could be a lot more fine-grained, personally.
iOS and Android both implement these security policies correctly. Why can't desktop operating systems?
The most popular desktop OSes have decades of pre-existing software and APIs to support and, like a lot of old software, the debt of choices made a long time ago that are now hard/expensive to put right.
The major desktop OSes are to some degree moving in this direction now (note the ever increasing presence of security prompts when opening "things" on macOS etc etc), but absent a clean-sheet approach abandoning all previous third party software like the mobile OSes got, this arguably can't happen easily overnight.
We can make operating systems where the islands can interact. It just needs to be opt-in instead of opt-out. A bad Notepad++ update shouldn't be able to invisibly read all of Thunderbird's stored emails, add backdoors to projects I'm working on, or cryptolocker my documents. At least not without my say-so.
I get that permission prompts are annoying. There are ways to do the UI aspect better - like having the open-file dialog automatically pass along permission to the opened file. But these are the minority of cases. Most programs only need access to their own stuff. Having an OS confirmation for the few applications that need to escape their island would be a much better default. It would still allow all the software we use today, but block a great many of these attacks.
Sound engineers don't use lossy formats such as MP3 when making edits in preproduction work, as those formats are intended for end users and repeated re-encoding would degrade quality cumulatively. In the same way, someone working on software shouldn't be required to use an end-user consumption system when they are at work.
It would be unfortunate to see the nuance missed just because a system isn't 'new', it doesn't mean the system needs to be scrapped.
> In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.
I'm worried that many software developers (including me, a lot of the time) will only enable security after exhausting all other options. So long as there's a big button labeled "Developer Mode" or "Run as Admin" which turns off all the best security features, I bet lots of software will require that to be enabled in order to work.
Apple has quite impressive frameworks for application sandboxing. Do any apps use them? Do those DAWs that sound engineers use run VST plugins in a sandbox? Or do they just dyld + call? I bet most of the time it's the latter. And look at this Notepad++ attack. The attack would have been stopped dead if the update process validated digital signatures. But no, it was too hard, so instead they got their users' computers hacked.
I'm a pragmatist. I want a useful, secure computing environment. Show me how to do that without annoying developers and I'm all in. But I worry that the only way a proper capability model would be used would be by going all in.
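To make the signature-validation point concrete, here's a sketch of the check an updater should do before executing anything it downloaded. All file names here are illustrative, and the key pair is generated inline only for demonstration - a real updater ships only the vendor's public key, pinned into the binary, while the private key stays on the vendor's release machine.

```shell
# Stand-in for the downloaded artifact (illustrative only).
echo "fake installer bytes" > installer.exe

# Vendor side, done once per release: sign with the private key.
openssl genpkey -algorithm ed25519 -out signing.pem
openssl pkey -in signing.pem -pubout -out pubkey.pem
openssl pkeyutl -sign -inkey signing.pem -rawin \
  -in installer.exe -out installer.exe.sig

# Updater side: verify against the pinned public key and refuse
# to run the installer if the signature doesn't match.
if openssl pkeyutl -verify -pubin -inkey pubkey.pem -rawin \
     -in installer.exe -sigfile installer.exe.sig; then
  echo "signature OK, safe to run installer"
else
  echo "signature check FAILED, refusing to install"
fi
```

Note that this only authenticates the artifact; a hijacked update *server* still can't forge a signature, which is exactly why it would have stopped this attack.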
In such a scenario, you can launch your IDE from your application manager and then only give write access to specific folders for a project. The IDE's configuration files can also be stored in isolated directories. You can still access them with your file manager software or your terminal app which are "special" and need to be approved by you once (or for each update) as special. You may think "How do I even share my secrets like Git SSH keys?". Well that's why we need services like the SSH Agent or Freedesktop secret-storage-spec. Windows already has this btw as the secret vaults. They are there since at least Windows 7 maybe even Vista.
Naturally, even Flatpak on Linux suffers from this, as legacy software simply doesn't have a concept of permission models, and this can't be bolted on after the fact.
Try to run GIMP inside a container, for example: you'll have to give it access to your ~/Pictures or whatever for it to be useful.
Compare that to some photo-editing applications on Android/iOS, which can work without filesystem access by getting the file through the OS file picker.
Or the easier way with an external tool is using Sandboxie: https://sandboxie-plus.com/
There are so many nooks and crannies where malware can hide, and Windows doesn't enforce any boundaries that can't be crossed with a trivial UAC dialog.
Windows enforces driver signing and has a deeper access control system that means a root account doesn't even truly exist. The SYSTEM pseudo-account looks like it should be that, but you can actually set up ACLs that make files untouchable by it. In fact, if you check the files in System32, they are only writable by TrustedInstaller. A user's administrative token and SYSTEM have no access to those files.
But when it comes down to it, I wouldn't trust any system that has had malware on it. At the very least I'd do a complete reinstall. It might even be worth re-flashing the firmware of all components of the system too, but the chances of those also being infected are lower as long as signed firmware is required.
The malware and supply chain attack landscape is totally different now. Linux has many more viruses than in the past. People don't actively scan because they are operating with a 1990s mindset.
In Linux, one could write a script that reinstalls all packages, cleans up anything that doesn't belong to an installed package, and asks you about files it's not sure about. It's easy to modify a Linux system, but just as easy to restore it to a known state.
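On a Debian-style system, a sketch of the first two steps of that script might look like this (assuming dpkg/apt; conffiles legitimately drift, so they're filtered out rather than flagged, and the actual reinstall is left commented out since it needs root):

```shell
#!/bin/sh
# Find installed files whose checksums no longer match the package
# database. dpkg marks conffiles with a 'c' in the second field;
# those change legitimately, so keep only the rest.
dpkg --verify | awk '$2 != "c" {print $NF}' > changed-files.txt

# Map each changed file back to its owning package.
while read -r f; do
  dpkg -S "$f" 2>/dev/null | cut -d: -f1
done < changed-files.txt | sort -u > packages-to-reinstall.txt

# Reinstall clean copies from the mirror (needs root).
# xargs -r apt-get install --reinstall -y < packages-to-reinstall.txt
```

The "anything that doesn't belong to an installed package" part would be the files where `dpkg -S` finds no owner; those are the ones the script should ask you about.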
I'm surprised this wasn't linked from the original Notepad++ disclosure.
Sadly, it feels like Microsoft updates lately have trended back towards being unreliable and even user hostile. It's messed up if you update and can't boot your machine afterwards, but here we are. People are going to turn off automatic updates again.
https://docs.github.com/en/code-security/reference/supply-ch...
If you use Notepad++ (or whatever other program) in a manner that deals with internet content a lot, then updating is the thing to do.
If you use these tools in a trusted space (local files/network only), then don't update unless the tool needs to change to do what you want.
For many people it's something in between, because new file formats and network tech come and go from the internet. So, update occasionally...
Disagree. It's hard to screw up a text editor so much that you have buffer overflows 10 years after it's released, so it's probably safe. It's not impossible, but based on a quick search (though incomplete because google is filled with articles describing this incident) it doesn't look like there were any vulnerabilities that could be exploited by arbitrary input files. The most was some dubious vulnerability around being able to plant plugins.
Do you mean I should worry about the fixed CVEs that are announced and fixed for every other distribution at the same time? Is that the supply-chain attack you're referring to?
Naive question, but isn't this relatively safe information to expose for this level of attack? I guess the idea is to find systems vulnerable to 0-day exploits and similar based on this info? Still, that seems like a lot of effort just to get this data.
You don't need 0days when you already have RCE on an unsandboxed system.
Could this be the attacker? The scan happened before the hack was first exposed on the forum.
https://arstechnica.com/security/2026/02/notepad-updater-was...
I recommend removing Notepad++ and reinstalling via winget, which installs the EXE directly without the WinGUP updater service.
Here's an AI summary explaining who is affected.
Affected Versions: All versions of Notepad++ released prior to version 8.8.9 are considered potentially affected if an update was initiated during the compromise window.
Compromise Window: Between June 2025 and December 2, 2025.
Specific Risk: Users running older versions that utilized the WinGUp update tool were vulnerable to being redirected to malicious servers. These servers delivered trojanized installers containing a custom backdoor dubbed Chrysalis.
https://community.notepad-plus-plus.org/topic/27212/autoupda...
Thankfully the responses weren’t outright dismissive, which is usually the case in these situations.
It was thought to be a local compromise and nothing to do with Notepad++.
Good lessons to be learned here. Don't be quick to dismiss things simply because they don't fit what you think should be happening. That's the whole point: it doesn't fit, so investigate why.
Most tech support aims to prove the person wrong right out the gate.
It's listed as the third most popular IDE after Visual Studio Code and Visual Studio by respondents to Stack Overflow's annual survey. Interestingly, it's higher among professionals than learners. Maybe that's because learners are going to be using some of those newer AI-adjacent editors, or because learners are less likely to be using Windows at all.
I'm sure people will leap to the defense of their chosen text editor, like they always do. "Oh, they separated vim and Neovim! Those are basically the same! I can combine those, really, to get a better score!" But I think a better takeaway is that it's incredible that Notepad++, an open source application exclusive to Windows that has had, basically, a single developer over the course of 22 years, has managed to reach such a widespread audience. Especially when Scintilla's other related editors (SciTE, EditPlus) essentially don't rate.
If vim were good enough, neovim wouldn't exist. If neovim were that much better, vim wouldn't still be as popular as it is. And if neither of them did anything worth picking up, then vi would still outrank them.
The conclusion is that they don't do the same things. They just both have the vi interface. But having a vi interface isn't particularly weird anymore. SublimeText and vscode have vi bindings. So does PyCharm/IntelliJ. So does Notepad++! Heck, so does nano! So who gets to claim those editors? Vscode is the most popular editor that supports a vi-like interface. Shouldn't that mean that vscode is the best of the "vi descendants"? Or does it mean that all these people were okay with the vi interface, but had a good reason not to make the choice they did for another editor?
Fundamentally, the issue is: Either choice matters, or popularity doesn't matter. You can't have it both ways.
You can use the 2022 (i.e. pre-ChatGPT) results to control for that. The results are basically the same.
https://survey.stackoverflow.co/2022/#most-popular-technolog...
This train of thought made me go find https://www.oldversion.com/. For a while, that was invaluable.
For something functionally close, I would look at Kate.