But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too convenient to do without.
Well, now we're reaching the "find out" part of the process I guess.
Assuming we survive the gap period where every country chucks what they still have at their worst enemies, I mean. I suppose we can always hit each other with animal bones.
Rivers caught on fire for a hundred years before the EPA was formed.
This is one force at work. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.
I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent open-source library; there's no reason to assume the average author would magically create fewer bugs than the original library authors initially did. But the vulnerabilities will have much narrower scope: if you successfully exploit an open-source library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".
Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
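(Not claiming this was leftpad's actual failure mode, but as a toy Rust sketch of how even "pad a string" hides edge cases: pad by byte length and any multi-byte UTF-8 input comes out the wrong width.)

```rust
// Toy illustration, not leftpad's actual bug: str::len() counts bytes, not
// characters, so a naive pad-by-len() mis-pads multi-byte UTF-8 input.
fn left_pad_naive(s: &str, width: usize, fill: char) -> String {
    let missing = width.saturating_sub(s.len()); // bytes, not chars!
    let mut out: String = std::iter::repeat(fill).take(missing).collect();
    out.push_str(s);
    out
}

fn main() {
    // ASCII looks fine:
    assert_eq!(left_pad_naive("abc", 5, ' '), "  abc");
    // "héllo" is 5 characters but 6 bytes, so it gets no padding at all:
    println!("{:?}", left_pad_naive("héllo", 6, ' ')); // "héllo", not " héllo"
}
```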
For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.
This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.
On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
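For anyone who hasn't run into that bug class: it's the classic TOCTOU (time-of-check/time-of-use) race. A hypothetical Rust sketch of the pattern, not the actual code from the Rust coreutils rewrite:

```rust
// Hypothetical illustration of the TOCTOU (time-of-check/time-of-use) bug
// class, not the actual code from the Rust coreutils rewrite. Between the
// metadata check and the open, an attacker who controls the directory can
// swap `path` for a symlink pointing at a file they shouldn't reach.
use std::fs;
use std::io;
use std::path::Path;

fn truncate_if_regular_racy(path: &Path) -> io::Result<()> {
    // Time of check: refuse symlinks, only act on regular files.
    let meta = fs::symlink_metadata(path)?;
    if !meta.file_type().is_file() {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "not a regular file"));
    }

    // ...race window: the path can be replaced with a symlink here...

    // Time of use: open() follows symlinks, so this may now truncate a
    // completely different file than the one that was checked.
    fs::OpenOptions::new().write(true).truncate(true).open(path)?;
    Ok(())
}

fn main() -> io::Result<()> {
    truncate_if_regular_racy(Path::new("/tmp/example"))
}
```

The usual fix is to make the check and the use a single operation: open with O_NOFOLLOW (or do everything relative to an already-opened directory handle via openat) and inspect the file descriptor you actually got, instead of stat-then-open on a path.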
We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?
The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.
We need to solve the underlying problem: how to sustainably develop and maintain the software we need.
A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.
Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?
Nuclear might be airgapped but what about water, power…?
Edit: and, ofc, what we're discussing here is Linux packages.
I have one server that has shell users, and I did the "yum update" and "reboot -f" dance last week.
Was that good enough? Oh no.
Here we go again!
All the arrogant asocial coder bros cast aside.
All the poorly reasoned shortcuts from hustle culture and "git pull the world" engineering, the startups aura-farming on Twitter/social media about their cool sweatshop, labor-exploiting tech jobs...
Watching AI come around and the 2010s messes blow up in their faces... chef's kiss
Hey it's all web-scale though! Good job!
We should have:
- OS-level capabilities. Launched programs get passed a capability token from the shell (or wherever you launched the program from). All syscalls take a capability as the first argument. So, "open path /foo" becomes open(cap, "/foo"). The capability could correspond to a fake filesystem, a real branch of your filesystem, a network filesystem or really anything. The program doesn't get to know what kind of sandbox it lives inside. (Rough sketch after this list.)
- Library / language capabilities. When I pull in some 3rd party library - like an npm module - that library should also be passed a capability too, either at import time or per callsite. It shouldn't have read/write access to all other bytes in my program's address space. It shouldn't have access to do anything on my computer as if it were me! The question is: "What is the blast radius of this code?" If the library you're using is malicious or vulnerable, we need to have sane defaults for how much damage can be caused. Calling lib::add(1, 2) shouldn't be able to result in a persistent compromise of my entire computer.
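A minimal Rust sketch of what that first, OS-level point could look like from a program's point of view; everything here (Cap, this open() signature, the launcher hand-off) is hypothetical and just for illustration:

```rust
// Entirely hypothetical sketch of a capability-first syscall surface; Cap,
// this open() signature, and the launcher hand-off are made-up names.
use std::io;

// Opaque capability handle the launcher (shell, service manager, ...) would
// pass to the process; the program never holds ambient authority over "/".
struct Cap(u64);

// Every operation takes a capability first; the path is resolved relative to
// whatever the capability denotes: a real subtree, an overlay, or a fake
// filesystem. The program can't tell which.
fn open(cap: &Cap, path: &str) -> io::Result<Vec<u8>> {
    let _ = (cap, path);
    Err(io::Error::new(io::ErrorKind::Unsupported, "illustration only"))
}

fn main() {
    let cap = Cap(0); // would come from the launcher, not be constructed here
    // "open path /foo" becomes open(cap, "/foo"):
    let _data = open(&cap, "/foo");
}
```

The key property is the absence of ambient authority: whatever the program can reach is exactly what its launcher chose to delegate, and nothing more.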
seL4 has fast, efficient OS-level capabilities. It's had them for years. They work great. They're fast - faster than Linux in many cases. And tremendously useful. They allow for transparent sandboxing, userland drivers, IPC, security improvements, and more. You can even run Linux as a process in seL4. I want an OS that has all the features of my Linux desktop, but works like seL4.
Unfortunately, I don't think any programming language has the kind of language-level capabilities I want. Rust is really close. We need a way to restrict a 3rd party crate from calling any unsafe code (including from untrusted dependencies). We need to fix the long-standing soundness bugs in Rust. And we need a capability-based standard library. No more global open() / listen() / etc. Only openat(), and equivalents for all other parts of the OS.
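On the capability-based standard library point, something in that direction already exists as a crate. A minimal sketch using cap-std (assumes `cap-std` in Cargo.toml and a hypothetical /etc/myapp/config.toml): there is no global open(); everything is opened relative to a Dir handle you were explicitly given.

```rust
// Capability-style file access with the cap-std crate: code holding only
// `dir` can open things underneath that directory, but has no way (through
// this API) to reach the rest of the filesystem.
use cap_std::ambient_authority;
use cap_std::fs::Dir;
use std::io::Read;

fn main() -> std::io::Result<()> {
    // The single deliberate use of ambient authority: main() carves out one
    // directory to delegate. (Path is hypothetical, for illustration.)
    let dir = Dir::open_ambient_dir("/etc/myapp", ambient_authority())?;

    // Hand `dir` (not a path string) to less-trusted code; attempts to
    // escape it, e.g. dir.open("../passwd"), are rejected.
    let mut cfg = String::new();
    dir.open("config.toml")?.read_to_string(&mut cfg)?;
    println!("{cfg}");
    Ok(())
}
```

Of course a dependency can still ignore this and call std::fs directly, which is exactly where the language-level enforcement described above (no unsafe, no ambient std) would have to come in.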
If LLMs keep getting better, I'm going to get an LLM to build all this stuff in a few years if nobody else does it first. Security on modern desktop operating systems is a joke.
Those exploits are in the kernel, and the userspace is only making the normal, allowed calls. Replacing global open()/listen()/etc. with capability-based versions would still allow one to invoke the same kernel bugs.
(Now, using a microkernel like seL4 where the kernel drivers are isolated _would_ help, but (1) that's independent from what userspace does, you can have a POSIX layer with seL4, and (2) that would be way more context switches, so a performance drop)
I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019, it was already an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement that doesn't necessarily require changes to existing code other than compiling with -pie.
FreeBSD isn’t secure? I suspect you’re sitting on a pile of 0-days for it, then?
1. No strong stack protectors.
2. No kASLR.
That's 20-year-old exploit methodology.
With FreeBSD there's never any question of "who should this get reported to".
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
https://github.com/artifact-keeper
An artifact manager: you only get what you approve. So you can get fast updates when needed and a consistently known-stable set when you need it. It does need a little config override - easy work.
I had my own janky tooling for something like it. This is a good project.
If no one is willing to stand up and say "yes this is safe and of acceptable quality", why use it?
It's a software engineering version of the professional engineering stamp.
Also, IME we don't deep-dive everything (should we?).
For most stuff we make sure the latest is not-shit and passes the test cases. We do have ceremony around version bumps.
Another model is Perl's CPAN where you publish source files only.
Once noticed, that's where the exploit explosion erupts, excited exploiters everywhere, emboldened... enticed... excessively encouraged, by your delayed updates.
Attackers can’t push a security update without going through the reporting process (e.g. a GitHub CVE), so they can’t necessarily abuse that easily.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
If a popular NPM package was compromised and included a copy.fail exploit, it would make lots of systems vulnerable to root privilege escalation.
I am worried that the sluggishness appeared at about the same time on both devices.
The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.
the idea that it exists at all is more or less a gentleman's agreement in the engineering world anyway
correctly using those tells me it was a stylistic choice to not use capital letters and to omit the periods.
fwiw the HN guidelines say more about not posting "shallow dismissals", not complaining about "tangential annoyances" and being "kind, not snarky" than about grammar and punctuation: https://news.ycombinator.com/newsguidelines.html
Btw, s/onto/on to
“Onto” can be synonymously replaced with “on top of”, which doesn’t work in that sentence.
It’s much more interesting to pay attention to the spirit of the comment than the structure, which I believe is also in the site guidelines. I’m also confident I have multiple grammatical errors in this comment.
> Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution.
I had to do a double take reading that. It’s written as though something happened that prevented them from following the schedule, yet seemingly they chose to release the information anyway. I hope I’m missing something and it was forcibly disclosed elsewhere first.
Edit: Moments later I refreshed the homepage and saw the announcement. They do claim to have consulted with maintainers.
Very odd wording. I assume there’s an interesting/upsetting story here that will come out soon.
Containerization improved reproducibility in some ways, but in practice a lot of CI pipelines still behave like live dependency roulette.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
this is only scary for rootless containers as it skips an isolation layer, but we've started shipping distroless containers, which are not vulnerable to this because they lack privilege escalation commands such as su or sudo.
never trust software to begin with, sandbox everything you can and don't run it on your machine to begin with if possible.
Infecting sudo just makes for a quick demo.
If your container has different processes at different user ids, the exploit would still be effective.
It would likely also be able to “modify” read only files mapped from the host.
But there are a lot of academic and research institutions that actually do have good Linux user management. I worked at a pediatric hospital, and the RHEL HPC admins did not mess around in terms of who was allowed to access which patients' data. As someone who was not an admin, it was a huge pain and it should have been. So this bug has pretty serious implications, seems like anyone at that hospital can abscond with a lot of deidentified data. [research HPC not as sensitive as the clinical stuff, which I think was all Windows Server]
I won't enter into all the details but... It's totally possible to not have the sudo command (or similar) on a system at all and to have su with the setuid bit off.
On my main desktop there's no sudo command, and there are zero binaries with the setuid bit set.
The only way to get root involves an "out-of-band" access, from another computer, that is not on the regular network [1].
This setup has worked for me for years. And years. And I very rarely need to be root on my desktop. When I do, I just use my out-of-band connection (from a tiny laptop whose only purpose is to perform root operations on my desktop).
For example today: I logged in as root and blocked the three modules with the "dirty page" mitigation suggested by the person who reported the exploit.
You're not faking sudo with a mocking-bird on my machine. You're not using "su" from a regular user account. No userns either (no "insmod", no nothing).
Note that it's still possible to have several non-root users logged in at once: but from one user account, you cannot log in as another. You can however switch to TTY2, TTY3, etc. and log in as another user. And the whole XKCD about "get local account, get everything of importance", ain't valid either in my case.
I'm not saying it's perfect but it's not as simple as "get a local shell, wait until user enters 'sudo', get root". No sudo, no su.
It's brutally simple.
And, best of all, it's a fully usable desktop: I've been using such a setup for years (I've also got servers, including at home, with Proxmox and VMs etc., but that's another topic).
That's my main reason to use "sudo" on the desktop.
I suppose I could install every piece of software locally, either from source or via flatpak, but this is a lot of work and much harder than doing it the easy way and using global install via my distro. Plus, non-distro installs are much more likely to be out of date and contain vulnerabilities of their own.
but they all have something in common: the issue is that your user is compromised, which means the applications running as that user are compromised. The only thing you gain is that you can still trust your system, i.e. you can trust that the system itself is not compromised, which is only relevant for infrastructure, because if your user is compromised you're already fucked. Multi-user setups with untrusted accounts are inherently insecure, and in infrastructure the blast radius might be thousands of users of the service in question.
the breakdown looks something like this:
- you heavily compromise a single user <- exploit not relevant
- you compromise a shared setup via a bad user to compromise a lot of users <- should never be used anymore, namespace isolation is the replacement
- you somewhat compromise a lot of users via infra compromise <- where this hurts

Most people however aren't, and will happily run sudo after an npm postinstall script tells them to apt-install turboencabulator for their new frontend framework to function.
Something that concerns me more is that I use things like gemini-cli or claude-cli via their own, non-sudo accounts with no ssh keys or anything on my laptop, but an LPE means they can find a way around such restrictions if they feel like it (and they might).
Even worse are the “extension packs” that combine some normal things and one wonky thing nobody’s ever heard of…
They’re always racing to be the first one to write an article about a case.
I switched to llama.cpp because of that.
To me it feels more and more like the slopcode world is the philosophical opposite of reproducible builds. It's like the anti-methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor releases because nobody adhered to any API versioning standards. Now it's every commit that can break things. That is not an improvement.
When I see pages of obviously generated prose being submitted as any kind of documentation, my eyes just glaze over. I feel so guilty sharing similar stuff too, though to my credit, at least I always lead with a self-written TL;DR; the slop is just for reference. But it's so bad, like genuinely distressing tier. I don't want to read all that junk, and more and more of it gets produced.
Prose type docs have always been my Achilles heel, and this is like the worst possible evolution of that.
For a brief period in the past few weeks, they somehow managed to make a change to ChatGPT Thinking that made it succinct. The tone was super fact-oriented too. It was honestly like waking up from a fever dream.
I know this is unrelated to the article, but related to the title.
Code is cheap and is becoming cheaper by the day. We need new paradigms.