You can sort of do it if you carefully structure your program to restrict syscall use and then use some minimal and well-audited syscall filtering layer to hide most of the kernel. But you really have to know what you're doing, and proper security hardening will break a lot of software. To get a basic level of security, you have to disable anything with the letters "BPF", hide all virtual filesystems like /proc and /sys, disable io_uring, and remove every CONFIG_* you see until something stops working. Some subsystems seem more vulnerable than others (ironically, netfilter seems to be a steady source of vulnerabilities).
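For concreteness, a minimal sketch of a few of the runtime knobs involved, assuming a reasonably recent kernel (the sysctl names are documented in the kernel, but whether each one exists depends on your version and config):

sysctl -w kernel.unprivileged_bpf_disabled=1   # block unprivileged BPF
sysctl -w kernel.io_uring_disabled=2           # turn off io_uring entirely (6.6+)
sysctl -w kernel.kexec_load_disabled=1         # one-way switch: no kexec after boot
mount -o remount,hidepid=invisible /proc       # hide other users' entries in /proc

None of this replaces a real audited syscall filter; it just shrinks the obvious attack surface.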
When quoting kernel CVEs as evidence of insecurity, especially so seemingly authoritatively, please make sure you're informed about what Linux kernel CVEs actually mean.
A CVE (for any product) does not automatically mean there is actually a vulnerability there, or that any such vulnerability is exploitable, unless explicitly noted (in the CVE or credibly by someone else). Proofs of concept, reproducibility, or even any kind of verification are not part of the CVE process.
For the Linux kernel in particular, the CVE process is explicitly meant to be "overly cautious" [1]. In practice, this means the Linux security team requests a CVE for anything that has a mere whiff of being theoretically exploitable. Of course, that doesn't mean the bug that was fixed was actually exploitable, even theoretically, let alone in practice.
As a result, you can't use CVEs reported by the Linux kernel to make claims about the (lack of) practical security of any Linux system, including your desktop. The CVEs reported by the Linux kernel are there to prompt well-informed users of the kernel to do further risk assessments, not to be taken at face value as a sign of insecurity. [The latter is true for the entire CVE system - they're not to be taken at face value as signs something is wrong. But it's especially true for the kernel.]
Looking at the raw number of CVEs is not very meaningful
I don't appreciate putting "vulns" in scare quotes, if that was your intent.
Swiss cheese model: all it takes is someone changing a component so that the vulnerability can be chained into an exploit, which has happened many times.
These should be tracked, and in fact it's very helpful to assign CVEs to them.
But yeah, raw numbers are less useful. In fact, CVEs as an "is it secure or not" metric are pretty rough. They do make it easier to convince vendors to keep their software up to date, though...
And of course colocating different classes of work can lead to a bug in a low-priority task taking down a high-priority one, so those also shouldn’t run in the same partition. Once you’ve taken both of those into account, you’ve already added some defense in depth: it’s hard even to escalate a remote exploit into a privilege escalation, let alone into an attack on a more lucrative neighbor.
Containers are everywhere.
I'm sure you're well aware, but for the readers: they are isolated with the CPU's VT instructions, which are built to isolate VMs. I still hear "containers don't contain" in a very Dan Walsh Boston accent, but this seems like a respectable start.
https://github.com/kata-containers/kata-containers/blob/main...
I believe there are even more coarse-grained timing attacks involving DMA and memory that are waiting to be abused.
Also, every security domain in an Android system shares a kernel, yet Android is one of the most secure systems out there. Sure, it uses tons of SELinux, but so what? It still has a shared kernel, and a quite featureful one at that.
I don't buy the idea that we can't do intra-kernel security isolation and so we shouldn't care about local privilege escalation.
It isn't impossible to do things right, but in practice, things are usually done badly.
Edit: to be clear, I knew the disk was COW but I thought it saved memory by loading one instance of shared objects into memory.
It does! The trick is that it loads the shared object read-only as far as the CPU is concerned. If a program tries to modify the memory, the CPU (I'm simplifying a lot here) throws an exception. The kernel catches that exception, makes a copy of the memory the program is trying to modify, puts the copy of the original memory at the same address as the original read-only memory, and tells the program to re-try the write operation, which now succeeds. All of this happens without the application doing the writing being aware of what's going on. From its point of view, writes Just Work.
This way, you get the memory savings of sharing and the flexibility to do writes all without the security problems of shared mutability.
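A rough way to watch this from userspace on a typical glibc system, assuming /proc is mounted (field names as in proc(5)): the same libc mapping shows up in every process, and smaps shows how much of it is still the shared copy versus privately copied after a write.

grep libc /proc/$$/maps | head -n 2                                    # the shared object mapped into this shell
grep -A 20 libc /proc/$$/smaps | grep -E 'Shared_Clean|Private_Dirty'  # shared pages vs. pages copied on write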
You might enjoy reading about OS virtual memory operation more generally!
Namely, nftables and its filtering.
https://almalinux.org/blog/2025-06-18-test-patches-for-cve-2...
I'm not sure, I appear to be running pipewire. But assuming it's not my own account: not a user that will initiate an attack. A user account that allows logins or runs external servers would have to get compromised first, and at that point it can use the exploit directly with no need to touch pulseaudio.
If there's only one directory in your /home, it's very unlikely the urge for admins to patch this is directed at you.
A local priv-esc is one exploit [0] away from a remote one.
[0] https://www.bleepingcomputer.com/news/security/hackers-explo...
The box I checked has no pipewire user and it's running under the account I logged in with.
> A local priv-esc is one exploit [0] away from a remote one.
That only matters for accounts that talk to the outside world.
If I'm the only user, I'm not depending on security features to keep my account and the pipewire account safe from each other. Privilege escalation is a big threat for systems that are running in a significantly different way.
Maybe I wasn't clear. I'm saying exactly one account has meaningful exposure to the outside world, and it's the only one with valuable files. Not none, but also not multiple. It's effectively single user from a security perspective.
In which case your user is in the video group, and a local escape hands over root without any extra effort required.
It's a normal install of Linux Mint. systemd-resolved and timesyncd are running under their own system users; there's also messagebus, polkitd, kernoops, syslog, avahi, libvirt-dnsmasq, rtkit, colord. And root, of course. But pipewire runs under my user, and I checked in /etc/passwd that there is no pipewire user or pulseaudio user or any synonym of the word "audio".
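For anyone who wants to check their own box, a quick sketch (account and service names vary by distro):

ps -C pipewire -o user=,pid=,comm=           # which user owns the pipewire process, if any
getent passwd | grep -iE 'pipewire|pulse'    # is there a dedicated audio service account at all?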
> In which case your user is in the video group, and a local escape hands over root without any extra effort required.
But I'm the only real user so if you have to go through my account to get root then root doesn't let you compromise anyone. Which is my point, that an exploit like this is far less meaningful on a system without multiple real accounts.
Also just because syscall A might be vulnerable to a particular type of attack, it doesn’t mean that service B uses that syscall, let alone calls it in a way that can be exploited.
sudo, another setuid binary with a lot of policy code, has 210 CVEs / 430.150 kLoC = ~0.5 CVE per kLoC.
57.5% of CVEs have a CVSS >= 7, so 0.5 * 0.575 = 0.2875 CVE7/kLoC.
As a back-of-envelope estimate,
udisks: 0.2875 CVE7/kLoC * 265.334 kLoC = ~76.28 critical CVEs;
pmount: 0.2875 CVE7/kLoC * 19.9780 kLoC = ~5.7 CVEs.
Sudo (and other setuid programs) could in principle use privilege separation to punt everything not absolutely essential to an unprivileged context and thereby reduce the size of the TCB.
(And this isn't even the most arcane part of Linux userland authorization and authentication. PAM is by far the scariest bit; very few people understand it, and the underlying architecture is kinda insane.)
It literally replays in the terminal like a movie. It's nice, but I worry too much about the security implications (passwords captured, etc) to roll it out.
edit:
Ah yes, sudoreplay. What you're seeing in this video is a playback via it. That's not the guy typing; that's sudoreplay time-accurately replaying what happened.
script --log-timing file.tm --log-out script.out
# do something in a terminal session ...
scriptreplay --log-timing file.tm --log-out script.out
# replay it, possibly pausing and increasing/decreasing playback speed
Accounts thereafter, ruined everything.
Once we started using connected machines for much, people with flexible morals noticed that there was trust in the system(s) ripe for exploitation, for fun or profit or both.
I remember SMTP hosts being open by default because it wasn't a problem, that very quickly changed once spam was noted as potentially profitable.
There were accounts all over from quite early on, in academic environments before businesses took much of an interest, if only to protect user A from user B's cockups ("rm -rf /home /me/tmp") though to some extent also because compute time was sometimes a billable item, just not on single-user-designed OSs [1].
[1] Windows, for example, pre NT & 95 (any multi-user features you might have perceived in WfW 3.x were bolted on haphazardly and quite broken WRT actual security)
More precisely, it runs as the file owner. Which is often root.
I didn’t exactly know what setuid did. I learned something today. :)
Going off its security advisories page [1] and this tracker [2], it seems to be around 43 CVEs, most rated high severity.
So the actual rate would be 43 CVEs / 430 kLoC = ~0.1 CVE per kLoC, giving ~26.5 CVEs for udisks and ~2 for pmount.
[0] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sudo
cpe:2.3:a:sudo_project:sudo:*:*:*:*:*:*:*:*
cpe:2.3:a:todd_miller:sudo:*:*:*:*:*:*:*:*
The above pair are the same "sudo", but split arbitrarily, perhaps varying by assigning authority preference.
(There are some other "sudo"-named projects too.)

Those CPE IDs were determined by a brute-force-ish XML grep:
xml select -N cpe-23="http://scap.nist.gov/schema/cpe-extension/2.3" -t --match '//cpe-23:cpe23-item' --if 'contains(@name,":sudo:")' -v "@name" -n official-cpe-dictionary_v2.3.xml
Now, mapping CVE<->CPE is a trickier problem: it's not 1:1 (a single CVE can affect multiple product versions), and it's harder here since sudo (1986-ish) predates CVEs (1999) by a decade, and CPE (2009) by two. The most capable searches seem to be via non-free APIs or "vulnerability management $olutions", plus a few CLI tools that need a lot of care and feeding.

This web service is free: https://cve.circl.lu/ but you cannot search directly by CPE right now; you can start a search by vendor, then filter by product:
todd_miller sudo: 58 vulnerabilities
sudo_project sudo: 42 vulnerabilities
Except, for reasons I don't understand, there are duplicates because they somehow source "unique" but overlapping CVEs from multiple databases. The true number might be 50 combined, of varying severity/concern, but I give up now. I'm going to go mutter into my beard for a while.

Repo here: https://github.com/trifectatechfoundation/sudo-rs
It's permissively licensed, unfortunately. Wonder why. It's not a library. But it ought to improve security in the long run.
I've been loosely involved in setting this up, so I can say a little: the people that funded the initial work wanted it permissively licensed. My (somewhat informed) conjecture is that they rank making things secure - even in closed-source apps that could now take the code - higher than barring closed forks. It also tracks with the Rust ecosystem in general - APL or derivatives are very common in that ecosystem.
What makes you say that? I'm not trying to be argumentative, I'm genuinely interested.
At times the authors who rewrite-in-foo are just motivated to expand the foo ecosystem and are not primarily interested in making a working program, much less in possession of the requisite subject matter expertise or focus.
Also does `sudo` not have a spec or any existing unit tests for the previous vulnerabilities that they can benefit from? I'd be pretty shocked if there wasn't a lot of regression testing and documentation available to anyone implementing something this vital.
One of our engineers involved in the project wrote about the testing approach they used and about the issues they found here https://ferrous-systems.com/blog/testing-sudo-rs/.
Later, a dedicated security audit of the rewrite was performed, which uncovered three issues, one of which also affects the original sudo implementation: https://ferrous-systems.com/blog/sudo-rs-audit/
I generally support the notion that rewrites of large, complex code bases are usually a bad choice, but sudo is not a particularly large codebase, nor is it particularly complex - it's just particularly sensitive. In those cases, I believe the tradeoff can fall the other way - rewriting old, feature-stable codebases (to a reduced scope) can lead to improvements on all axes.
There are systems where you really want to preserve accidental quirks of behavior that other things depend on. Sudo I think is not one of those.
I agree.
How about we just start using doas, anyway?
But yeah a simpler program is probably good in this situation.
On the other hand, if the replacement isn’t targeting the full sudo feature set, and is also reducing the amount of code and/or making architectural improvements like keeping most code not running as root, then the blast radius of such logic bugs can be reduced.
* https://github.com/trifectatechfoundation/sudo-rs?tab=readme...
Well that makes it useless for $WORK (for now), as we use LDAP as our central policy repo (and more generally our user account store). Will have to wait until (at least) that's implemented before we can even consider it.
> It's permissively licensed, unfortunately. Wonder why.
So it can be used and distributed with fewer legal hassles.
[0] - https://www.freedesktop.org/software/systemd/man/devel/run0....
Well damn that's a shame. I just hate it when people let others use their work in a way they choose, that happens to be less restrictive than my own personal choices.
/s of course.
Copyleft licenses are demonstrably better for open source projects in the long run. We've had enough time to prove that out now.
Look beyond the OS, and much of the tech stack is dominated by non-copyleft open source projects. Both the major web servers--Apache and nginx--are permissively licensed, for example. Your SSL stacks are largely permissively licensed; indeed, most protocol servers seem to me to largely be permissively licensed rather than copyleft.
And I should also point out a clear example where copyleft has hobbled an ecosystem: Clang and LLVM have ignited a major compiler-based ecosystem of ancillary tools for development such as language servers. The gcc response to this is... to basically do nothing, because tight integration of the compiler into other components might allow workarounds that release the precious goodness of gcc to proprietary software, and Stallman has resisted letting emacs join in this revolution because he doesn't want a dependency on non-copyleft software. An extra cruel irony is that Clang appears to be an existential threat to the proprietary EDG compiler toolchain, which would mean it took a permissive license to do what the goal of the copyleft license was in the first place: kill proprietary software.
To the contrary, GNU Hurd is GPL'ed and is much less successful than the linux kernel.
In the end, if you want projects to succeed they need contributors. Unfortunately, some of them need to be reminded to play fair more than others, and in those cases the legalese helps.
You clearly didn't understand my point: I'm not arguing about whether GPL is better than MIT or BSD or even SSPL/etc.
My point is that if someone else chooses to release their software with less restrictions on it than I would choose, that's literally none of my business.
They wrote the fucking thing, they get to choose how it's fucking licensed.
Plenty of organisations (and thus people) skip using GPL-licensed software due to inability or unwillingness to be bound by its terms.
I'm still waiting for the day the GPL camp says they're not going to use things like OpenSSH, Apache, Nginx, Postgres, Python, Ruby - because they're too fucking permissive.
I use both privately and professionally, and while I accept that security-wise (even with SELinux) they feel lacking, feature-wise they far exceed the Windows I use as my other OS, except in gaming experience.
I wish I had something like GrapheneOS on desktops (yes I know about Qubes)
I tried Ubuntu last year, and it felt very limited compared to Windows. It lacked very basic features like face/fingerprint login, hybrid sleep, factory reset, live FDE (or post-installation FDE), fast fractional HiDPI, two-finger right-click, "sudo" on dock etc.
Just searching grsecurity on HN turns up some interesting stuff.
Unfortunately, there's no popular non-Google distro of it.
It also seems to have a lot of new code every year for very few new features. It's as if they get every new intern to rewrite a bit of the innards, and then next summer another intern rewrites it again.
First CoreOS, which forked into Flatcar Linux (now funded by Microsoft) and Fedora CoreOS (a rewrite from a Gentoo/ChromeOS base to a Fedora base), and Google's Container-Optimized OS (used heavily in Google Kubernetes Engine).
SecureBlue and Kicksecure are the closest equivalents.
> I wish I had something like GrapheneOS on desktops
Secureblue is essentially as close to GrapheneOS as Desktop Linux can get. Neither my response nor the original question required qubes comparisons. It was merely mentioned.
> grsecurity® is the only drop-in Linux kernel replacement offering high-performance, state-of-the-art exploit prevention against both known and unknown threats.
Secureblue, meanwhile, is a full desktop distro (not just a kernel) that integrates key GrapheneOS hardening tools, like their hardened malloc and forks of their hardened Chromium, and works with Flatpak as a base for hardened application deployment.
grsecurity does literally none of that.
You do not understand what you are talking about because if you did you'd be embarrassed for how braindead your response is.
Video performance is a fair criticism for sure.
But with perpetual 0-day RCEs in browsers and other highly exposed software, is not running Qubes really a credible choice?
And people will say "Yeah, but it is amazing". Then why do so many people feel the need to defend it in terms of _being better than Windows_? Clearly they prioritize the perception of being better than Windows over being actually good, because otherwise they would defend it by pointing out how good it is. Are they all just weirdos, or have they subconsciously picked up on the real but unwritten culture of Linux?
Take filesystems: the official filesystems are UFS(1/2) and ZFS. They have GEOM as their equivalent of LVM and LUKS, and more.
That being said, the majority of money and development goes into Linux, which by itself may make it a better system (eventually).
Edit: Of course UFS is not deprecated.
The general lesson from that seems to be that a simpler, well-understood, well-tested and mostly static attack surface is better than a more complex, more fully-featured and more dynamic attack surface. I wonder whether we'll see a trend towards even more boring Linux distributions which focus on consistency over modernity. I wouldn't complain if we did.
Less code means less possibility for bugs, and is easier to audit.
In my book, WireGuard perfectly follows the UNIX philosophy of making a simple tool that does exactly one thing and does it well.
The right comparison is not between a particular BSD and Linux, it's between a particular BSD and a Linux distro.
The full range of distros are very different from each other. Consider Void, Alpine, Gentoo, Chimera, NixOS.....
Different C libraries, init systems, different default command line utilities....
Try running a FreeBSD binary under OpenBSD.
While I cannot agree nor disagree on the quality of BSDs (haven't used one in 20 years), I find it funny that in this case a design by committee is proof of quality.
I guess it's better than design by headless chicken which is how the Linux user-space is developed. Personally, I am a big fan of design by dictatorship, where one guy at the top either has a vision or can reject silly features and ideas with strong-enough words (Torvalds, Jobs, etc.) - this is the only way to create a cohesive experience, and honestly if it works for the kernel, there's no reason it shouldn't work in userspace.
I don't think "design" is correct word: organized, managed, or ran perhaps.
> The FreeBSD Project is run by FreeBSD committers, or developers who have direct commit access to the master Git repository.[1] The FreeBSD Core Team exists to provide direction and is responsible for setting goals for the FreeBSD Project and to provide mediation in the event of disputes, and also takes the final decision in case of disagreement between individuals and teams involved in the project.[2]
* https://en.wikipedia.org/wiki/FreeBSD_Core_Team
There is no BDFL, à la Linux or formerly Python: it's a 'board of directors'. Decisions are mostly dispute / policy-focused, and less technical for a particular bit of code.
They decide what gets included in the default distribution; they set the goals and provide sponsorships for achieving them.
So yes, board of directors is probably more fitting.
And then of course you have the people with a commit bit. They can essentially work on whatever they like, but inclusion into the main branch is still up to the core team.
There was a huge debate some years ago when Netgate sponsored development/porting of WireGuard to FreeBSD, and the code was of a poor quality, and was ultimately removed from FreeBSD 13.
They are still missing something like the capability-based security that iOS and Android have, where apps have to be granted access to things like files or the camera. It may have been considered secure a couple of decades ago, but they have fallen behind the competition.
I wouldn't consider any of these systems "secure", though, as a practical matter. In terms of preventing a breakout, I'd trust an application on OpenBSD with strict pledge and unveil limits, or a Linux process in a classic seccomp sandbox (i.e. only read, write, and exit syscalls), more than any of those other systems. Maybe Capsicum, too, but I'm not familiar enough with the implementation to know how well it limits kernel code surface area. But any application that can poke at (directly or indirectly) complicated hardware, like the GPU, is highly problematic unless there are proofs of correctness for any series of inputs that can be sent by the process (which I don't think is the case).
We need at least the following sets: effective, permitted, bounding (per escalation method?), and the ability to make a copy of all of the preceding to automatically apply to a child (or to ourselves if we request an atomic change). Linux's `inheritable` set is just confusing, and confusion means people will use it wrong. At least we aren't Windows.
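For comparison, this is roughly what Linux exposes today; libcap's capsh can dump the current sets (output trimmed and illustrative, it differs per process and kernel):

capsh --print
# Current: =
# Bounding set =cap_chown,cap_dac_override,cap_net_bind_service,...
# Ambient set =
# Securebits: 00/0x0/1'b0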
...like Capsicum?
The last place in security is iOS.
Not to mention illumos-based systems too.
Since then even more stuff went to the Web, but I really doubt illumos got any extra traction.
Now if FreeBSD (or indeed illumos) would get CUDA-support we could stop using linux for GPU nodes too.
Could you not run Linux CUDA binaries under FreeBSD's Linuxulator?
[0] https://www.freebsd.org/status/report-2024-04-2024-06/#_part...
The answer heavily depends on your configuration. Unprivileged with a spartan syscall filter and a security profile is very different than privileged with the GPU bindmounted in (the latter amounts to a chroot and a separate user account).
And yes, this will most likely be a mess.
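To make that concrete, a hedged sketch with the docker/podman CLI (the seccomp profile file and image names here are placeholders):

docker run --rm --cap-drop=ALL --security-opt seccomp=./minimal-profile.json alpine true   # locked down: unprivileged, capabilities dropped, custom syscall filter
docker run --rm --privileged --device /dev/nvidia0 some-cuda-image nvidia-smi              # wide open: privileged, GPU handed straight in; roughly a chroot plus a separate uid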
You just defined 'life' in general.
That assumes:
1) The attacker already has an account on the system
2) The app `udisks` is installed on the system.
Everyone is fighting the same battle, and it's a good thing. It is happening because the rest of the system is hard enough to attack these days. This is true for all major OSes.
Only fanboys bend reality to make this into a good-vs-bad argument.
What the parent poster meant is that you first need a way to run arbitrary code before local privilege escalation matters, so the exploit chain has to include _something_ that gets you local code execution.
I tend to agree with the parent poster, for most modern single-user linux devices, local privilege escalation means almost nothing.
Like, I'm the only user on my laptop. If you get arbitrary code execution as my user, you can log my keystrokes, steal my passwords and browser sessions, steal my bitcoin wallet, and persist reasonably well.... and once you've stolen my password via say keylogging me typing `sudo`, you now have root too.
If you have a local privilege escalation too, you still get my passwords, bitcoin wallet, etc, and also uh... you can persist yourself better by injecting malware into sshd or something or modifying my package manager? idk, seems like it's about the same.
I haven't actually looked at the numbers, but I strongly suspect that it's true that the overwhelming majority of single-user Linux devices out there are Android devices. If that's true, then it's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access.
Applications have different user IDs and different SELinux contexts.
Android security is tight
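You can see it on a device over adb: each app gets its own u0_aNNN UID (a sketch; exact ps column support depends on the Android/toybox version):

adb shell ps -A -o USER,NAME | head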
> Like, I'm the only user on my laptop. If you get arbitrary code execution as my user, you can log my keystrokes, steal my passwords and browser sessions, steal my bitcoin wallet, and persist reasonably well.... and once you've stolen my password via say keylogging me typing `sudo`, you now have root too.
In this context, "single user system" means either "single human using the system", or "one human physically sat in front of the system's 'console' at one time". It's in contrast with systems that have multiple human users logged in and using the system simultaneously. So, nearly 100% of "single user systems" of this type will have software running under different "user" accounts on the system, but still meet the definition, because those accounts are actually "machine" or "service" accounts.
I do think that this overload of the terminology is bogus and confusing. It should be called something like "single seat system", but here we are.
> Android security is tight
Yep. That's what I said: "[I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access."
Firefox, the desktop environment, your password manager and even `sudo` are traditionally all running as your own user.
This is not true in Android whatsoever.
Being multi-seat or not has few security implications - most traditional Linux systems can handle multi-seat, but they’re still limited in security by running everything as a single user.
And no, nearly 100% of Linux systems do not run proper multi-user configurations, because none of the most popular distributions ship like that. Not in the context of desktop usage, anyway.
Servers do use multi-user configuration but that’s not what we’re talking about here
Um. Have you ever run 'ps aux', guy? At minimum you're running everything as two users (root and your user account), and probably three to twenty more, depending on what you have installed. I know that on my desktop system
ps axo user | sort -u | grep -v USER | wc -l
returns 12. Even back in the late 1990s/early 2000s, the default method of operation for Linux systems was to use multiple machine accounts.

> And no, nearly 100% of Linux systems do not run proper multi-user configurations, because none of the most popular distributions ship like that. Not in the context of desktop usage, anyway.
In addition to my commentary above, see: <https://help.ubuntu.com/stable/ubuntu-help/user-add.html.en>
Most Linux systems don't run every single program as a separate Linux user. That doesn't mean that those systems are "in fact running everything as one user".
Lmfaoooo
I’m assuming you have actually never run Linux on your desktop. Lmaooooo.
Yeah sure init runs as root, and maybe you have background services that run as some other user.
BUT YOUR ACTUAL DESKTOP SESSION RUNS AS ONE USER. THIS INCLUDES YOUR BROWSER, YOUR PASSWORD MANAGER AND ALL YOUR OTHER SHIT!
https://paste.centos.org/view/f8e5ec76
so multi-user much secure
You know, being a know-it-all only really works if you know what you're talking about.
Feel free to dig into the code of gnome-session if you don’t believe me.
Yes, the things I personally run nearly always run under my user account. I've never said otherwise. I've also said that Android doesn't do things this way, and that that's a good thing. As I mentioned in my comment to TheDong: [0]
> [I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access.
And my comment to you: [1]
> In this context, "single user system" means either "single human using the system", or "one human physically sat in front of the system's 'console' at one time". ... So, nearly 100% of "single user systems" of this type will have software running under different "user" accounts on the system, but still meet the definition, because those accounts are actually "machine" or "service" accounts.
And from that same comment:
> > Android security is tight
> Yep. That's what I said: "[I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access."
Moving on.
> Yeah sure init runs as root, and maybe you have background services that run as some other user.
Correct. That's why I said:
> Most Linux systems don't run every single program as a separate Linux user. That doesn't mean that those systems are "in fact running everything as one user".
Before you succumb to another fit of rage, take a few deep breaths, review my previous comments, and notice my critique about how Android does things, as well as my commentary about how Android is also a "single-user system" (as TheDong was using the term), and how I think the term is pretty bad, but it's the one that's widely used.
https://cdn2.qualys.com/2025/06/17/suse15-pam-udisks-lpe.txt
Instead of using something standard like environment variables, PAM has a special "pam_env" module that contains facts about the user session that it apparently trusts. Users can override pam_env settings by writing to a hidden file in ~.
So, this exploit chain is more accurately described as "yet another example of utilities inventing new, obscure configuration mechanisms for security-critical settings, allowing policy flaws to remain undetected for a long time".
Running security configuration options through a special snowflake IPC mechanism (instead of keeping them in a file where they could actually be inspected by humans) would only make things worse.
I finally understand why they're trying to deprecate `pam_env`, despite its incredible utility. For some reason, instead of only applying its contents to the user environment for the child process like any sane person would do, they are trusting its values for the library calls in the privileged parent itself.
But it's the same kind of problem as general environment vars - rather than just a name, maybe it needs metadata of where it came from.
To be clear, I'm talking about the unprivileged to allow_active CVE-2025-6018, not the allow_active to root.
The only safe way to use pam_env's `user_readenv` parameter is as the final rule of `type=session`. This behaves as you'd expect, affecting the child process only.
It appears that openSUSE enables the option for other rule types (auth and/or account), in which case it affects the parent process as well. Oops!
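A sketch of the difference, shown Debian-style (the common-* file names are an assumption; SUSE splits its PAM config differently; module options as in pam_env(8)):

grep pam_env /etc/pam.d/common-session
# session  required  pam_env.so user_readenv=1   <- only the child's environment is affected
grep pam_env /etc/pam.d/common-auth
# auth     required  pam_env.so user_readenv=1   <- values from ~/.pam_environment get trusted in the still-privileged parent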
For the record, user_readenv has been disabled since:
commit 4c430f6f8391555bb1b7b78991afb20d35228efc
Author: Tomas Mraz <tm@t8m.info>
Date: Mon Oct 11 14:24:30 2010 +0000
Relevant BUGIDs:
Purpose of commit: bugfix
Commit summary:
---------------
2010-10-11 Tomas Mraz <t8m@centrum.cz>
* modules/pam_env/pam_env.c: Change default for user_readenv to 0.
* modules/pam_env/pam_env.8.xml: Document the new default for user_readenv.
... PAM 1.1.3. And it's been deprecated for a while, to be removed in a future release entirely.

It's the wrong argument to a tool, but the suid part has nothing to do with environment variables or cleaning the env up.
PLEASE STOP SPREADING FUD.
Worrying when said person has authored a widely used security product(!). This is a bad trend in the industry that needs to stop.
Their comment was before yours.
I'm talking about this comment. Are you talking about this comment? From what knowledge I have, it looks like a good explanation of the problem and why it's not an environment variable problem.
So 'sudo -u foo bash' will prompt for the password of user foo, 'sudo bash' will prompt for the root password.
Haven't looked closely at how deep this custom configuration goes, but it would be nice not to have to carry around the actual root password for sudo.
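For reference, the sudoers knob that produces that behaviour is presumably targetpw (see sudoers(5)); a quick, hedged way to check against a stock layout:

sudo grep -riE 'targetpw|rootpw|runaspw' /etc/sudoers /etc/sudoers.d/ 2>/dev/null
# e.g. "Defaults targetpw" prompts for the target user's password instead of the invoker's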
https://cdn2.qualys.com/2025/06/17/suse15-pam-udisks-lpe.txt
- SUSE Linux Enterprise 15 (Current LTS)
- Debian 12 (Current LTS)
- Ubuntu 24.04 (Current LTS)
... Were you thinking about a different bug...?
(This reminds me of one of my kids at a very young age. If you said "I like your trousers", she'd reply "they're not trousers, they're jeans". But, of course, jeans are a kind of trousers, and it isn't mandatory to be as specific as possible at all times).
It looks like some software projects are now entirely reliant upon PAM for authentication and don't support shadow passwords anymore. What a travesty. It's sort of like what happened with Systemd, where so many apps now entirely depend on Systemd, you can't run a Linux desktop without a "fake Systemd" to make things work. (see: Alpine Linux desktop, Slackware desktop)
All of this seems to be due to a kind of creepy crawly takeover of the system components, with new ones designed by enterprise companies and a few highly-opinionated software developers (who work at those companies). They design these components to do a million different things, but they also make them highly coupled and interdependent (which is terrible software design, but standard for enterprise products). This then results in a much more complex system with many more moving parts, and makes breaking it easier.
Since these companies hold sway over the most popular Linux distros with the most users, when they make a radical change, everybody else has to adopt it, just like with the browser world. Powerful incumbents exert an unfair (and unhealthy) amount of influence on our environment.
If you went back to a distro from 20 years ago, there were really only a couple of components: the X ecosystem (kernel drivers, userland drivers, rendering libraries), a console login program, a tty manager, a wifi manager, and, well... I'm struggling to think of anything else you need [after the system has booted]. Kernel drivers used to make up 90% of the hardware interfaces. Originally you just wrote to a device file for things like sound, printing, etc. It was an extremely simple system and it worked very well.
Today you have 80 different daemons all running at the same time in order for the system to work at all. Event buses, policy engines, management frameworks, a couple dozen libraries, and multiple layers of components to do something as simple as run a graphical app in a windowed environment. Is this all necessary? Clearly not, as we did without all this crap 20 years ago. Somebody screwed the pooch on system design.
Luckily, it's Linux, so nobody is forcing us to use all this shit. We can just start over with a new, much simpler system (and try hard as hell to avoid second system effect)
These days there are 4 main BSDs: Free, which you remember; Open, for security maniacs; Net, for those who want to run it on random things; and Dragonfly, the experimental one.
It really is too bad that the BSD license by its nature doesn't require contributions, especially where it would have been helpful. E.g. Sony uses BSD in the PlayStation, which has a WiFi driver stack.
PAM supports a shadow password file as its default configuration. Did you mean something else?