> The Fedora project chose to write its own tool because it was undesirable to pull Perl into the build root for every package.
A further step would be the ability to compare logfiles containing pointer addresses or something similar.
That’s what https://pi2.network/ does. It uses K-Framework, which is imo very underrated/deserves more attention as a long term way of solving this kind of problem.
https://news.opensuse.org/2025/02/18/rbos-project-hits-miles...
> Packages including libraries should exclude static libs as far as possible (eg by configuring with --disable-static). Static libraries should only be included in exceptional circumstances. Applications linking against libraries should as far as possible link against shared libraries not static versions.
[1]: https://docs.fedoraproject.org/en-US/packaging-guidelines/
Packaging guidelines from a distro's docs like this are not any kind of counterargument to that comment.
This is the current orthodoxy, so of course all the docs say it; we all know the standard argument for the current standard. Their comment was explicitly "I'd like to see a change from the current orthodoxy": they are saying that maybe that argument is not all it promised to be back in the '90s when we started using dynamic libs.
The common thing to do for Python programs that are not directly bundled with the OS is to set up a separate virtual environment for each one and download/compile the exact version of each dependency from scratch.
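A minimal sketch of that setup, with a hypothetical tool name and pinned version:

    python3 -m venv ~/.venvs/sometool                      # one isolated environment per program
    ~/.venvs/sometool/bin/pip install 'sometool==1.2.3'    # exact pinned dependency versions
    ~/.venvs/sometool/bin/sometool --help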
Dynamic linking gives you M binaries + N libraries; static linking is M * N. For example, 1000 programs sharing 1000 libraries means about 2000 artifacts when dynamically linked, but up to a million embedded library copies when statically linked.
What I said specifically is I'd rather a static binary than a flatpak/snap/appimage/docker/etc. That is a comparison between 2 specific things, and neither of them is "1000 programs using 1000 libraries"
And some binaries already ship with their own copies of all the libraries anyway, just in other forms than static linking. If there are 1000 flatpaks/snaps/docker images etc, then those million libraries are already out there in an even worse form than if they were all static binaries. But there are not, generally, that many on any given single system yet, though the number is growing, not shrinking.
For all the well known and obvious benefits of dynamic linking, there are reasons why sometimes it's not a good fit for the task.
And in those cases where, for whatever reason, you want the executable to be self-contained, there are any number of ways to arrange it, from a simple tar with the libs & bin in non-conflicting locations and a launcher script that sets a custom lib path (or a bin compiled with the lib path baked in), to appimage/snap/etc, to a full docker/other container, to a unikernel, to a simple static bin.
All of those give different benefits and incur different costs. Static linking simply has the benefit of being dead simple. It's both space and complexity-efficient compared to any container or bundle system.
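For the simple tarball-plus-launcher-script option mentioned above, a minimal sketch (binary name and layout are illustrative):

    #!/bin/sh
    # launcher shipped at the top of the tarball, next to bin/ and lib/
    HERE="$(cd "$(dirname "$0")" && pwd)"
    LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" exec "$HERE/bin/myapp" "$@"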
I also have a tiny collection of statically linked utilities available here[2].
This binary may be statically linked, or link to system libraries. Quite a few times the only system library being linked is libc though.
But yes, I also hope this gets more prevalent instead of the python approach.
Ideally everything would be statically linked but the sections would be marked and deduped by the filesystem.
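Filesystems with offline extent deduplication (btrfs, XFS) get part of the way there today; a hedged sketch with duperemove, assuming the identical sections happen to be block-aligned (paths illustrative):

    duperemove -dr /usr/bin /opt   # -d performs the dedupe, -r recurses into directories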
If we believe we have a reproducible build, that constitutes a big test case which gives us confidence in the determinism of the whole software stack.
To validate that test case, we actually have to repeat the build a number of times.
If we spot a difference, something is wrong.
For instance, suppose that a compiler being used has a bug whereby it is relying on the value of an uninitialized variable somewhere. That could show up as a difference in the code it generates.
Without reproducible builds, of course there are always differences in the results of a build: we cannot use repeated builds to discover that something is wrong.
(People do diffs between irreproducible builds anyway. For instance, disassemble the old and new binaries, and do a textual diff, validating that only some expected changes are present, like string literals that have embedded build dates. If you have reproducible builds, you don't have to do that kind of thing to detect a change.)
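A rough sketch of that repeat-and-compare workflow (project and file names are illustrative); diffoscope is the reproducible-builds.org tool for explaining whatever differences show up:

    make clean && make && cp myprog myprog.build1
    make clean && make && cp myprog myprog.build2
    cmp myprog.build1 myprog.build2 || diffoscope myprog.build1 myprog.build2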
Reproducible builds will strengthen the toolchains and surrounding utilities. They will flush out instabilities in build systems, like parallel Makefiles with race conditions, or indeterminate orders of object files going into a link job, etc.
Changes can also be caught using bolt-on tools like Tripwire, OSSEC and its alternatives, or even home-grown tools that build signed manifests of approved packages, usually for production approval.
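A home-grown manifest of that kind can be as simple as hashing the approved files and signing the result (paths illustrative; Tripwire/OSSEC obviously do much more than this):

    find /usr/bin /usr/lib -type f -exec sha256sum {} + > manifest.txt
    gpg --armor --detach-sign manifest.txt              # produces manifest.txt.asc
    # later, to check for tampering:
    gpg --verify manifest.txt.asc manifest.txt && sha256sum --quiet -c manifest.txt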
So far, reproducible builds are heavy on the former, zero on these bugs you mention and zero on supply chain attacks.
Some of these supposed use cases make no sense. You update the compiler. Oh no, all the code is different? Enjoy the 16h deep dive to realize someone tweaked code generation based on the cycle times given on page 7893 of the Intel x64 architecture reference manual.
* Edit, it's quoted in the linked article:
> Jędrzejewski-Szmek said that one of the benefits of reproducible builds was to help detect and mitigate any kind of supply-chain attack on Fedora's builders and allow others to perform independent verification that the package sources match the binaries that are delivered by Fedora.
It's the attacks on the upstream packages themselves.
Reproducible builds would absolutely not catch a situation like the XZ package being compromised a year ago, due to the project merging a contribution from a malicious actor.
A downstream package system or OS distro will just take that malicious update and spin it into a beautifully reproducing build.
Reproducible builds are such an overwhelmingly good and obvious thing that build farm security is just a footnote.
The above are things worth looking at doing.
However, I'm not sure how you could write code that tries to obscure the issues while still looking good.
So it could help you detect tampering earlier, and maybe even prevent it from propagating depending on what else is done.
Reproducible builds remove a single point of failure for authenticating binaries – now anyone can do it, not just the person with the private keys.
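A hedged sketch of what that independent verification can look like on Fedora (package name illustrative; Fedora's own rebuild tooling may differ):

    dnf download --source somepkg                      # the published .src.rpm
    mock -r fedora-rawhide-x86_64 --rebuild somepkg-*.src.rpm
    dnf download somepkg                               # the binary RPM Fedora actually ships
    diffoscope somepkg-*.x86_64.rpm \
        /var/lib/mock/fedora-rawhide-x86_64/result/somepkg-*.x86_64.rpm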
We may never reach perfection, but the more steps we make in that direction, the more likely it is we reach a point where compromising us in the real world becomes impractical.
The stated aim is that when you compile the same source, environment, and instructions the end result is bit identical.
There are, however, hardware-specific optimizations that will naturally negate this stated aim, and I don't see any way to avoid throwing out the baby with the bathwater.
I understand why having a reproducible build is needed on a lot of fronts, but the stated requirements don't seem to be in line with the realities.
At its most basic there is the hardware, which may advertise features it doesn't have, may not perform the same instructions in the same way, and has other nuances that break determinism as a property; that naturally taints the entire stack, since computers rely heavily on emergent design.
This is often hidden in layers of abstraction and/or may be separated into pieces that are architecture dependent vs independent (freestanding), but it remains there.
Most if not all of the beneficial properties of reproducible builds rely on the environment being limited to a deterministic scope, and the reality is manufacturers ensure these things remain in a stochastic scope.
Distro packages are compiled on their build server and distributed to users with all kinds of systems; therefore, by nature, it should not use optimizations specific to the builder's hardware.
On source-based distros like Gentoo, yes, users adding optimization flags would get a different output. But there is still value in having the same hardware/compilation flags result in the same output.
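In compiler-flag terms that is roughly the difference between the following (GCC shown; the exact baseline varies by distro):

    gcc -O2 -march=x86-64 -mtune=generic -c foo.c   # generic baseline a distro builder targets
    gcc -O2 -march=native -c foo.c                  # tuned to the local machine; output differs across hosts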
It’s not clear if you’re also talking about compiler optimizations—a reproducible build must have a fixed target for that.
These are considered to be different build artifacts, which are also reproducible.
> Committing profiles directly in the source repository is recommended as profiles are an input to the build important for reproducible (and performant!) builds. Storing alongside the source simplifies the build experience as there are no additional steps to get the profile beyond fetching the source.
I very much hope other languages/frameworks can do the same.
A quote from the paper on the subject that I remember[1], since these profiles are just about as machine-dependent as you can get:
> Unfortunately, most code improvements are not machine independent, and the few that truly are machine independent interact with those that are machine dependent causing phase-ordering problems. Hence, effectively there are no machine-independent code improvements.
There were some differences between various Xeon chips' implementations of the same or neighboring generations that I personally ran into when we tried to copy profiles to avoid the cost of the profile runs, which may make me a bit more sensitive to this; I saw huge drops in performance, well into the double digits, that threw off our regression testing.
IMHO this is exactly why your link suggested the following:
> Your production environment is the best source of representative profiles for your application, as described in Collecting profiles.
That is very different from Fedora using some random or generic profile for x86_64, which may or may not match the end user's specific profile.
Fedora upstream was never going to do that for you anyway (way too many possible hardware configurations), so you were already going to be in the business of setting that up for yourself.
It does hit real projects and may be part of the reason that "99%" is called out. Fedora also mentions above that they can't match the official reproducible-builds.org meaning just due to how RPMs work, so we will see what other constraints they have to loosen.
Here is one example where SUSE had to re-enable it for gzip:
https://build.opensuse.org/request/show/499887
Here is a thread on PGO from the reproducible-builds mailing list:
https://lists.reproducible-builds.org/pipermail/rb-general/2...
There are other costs, like needing to get rid of parallel builds for some projects, that make many people loosen the official constraints; the value of PGO+LTO is one of them.
.gcda profiles are unreproducible, but the code they produce is typically the same. If you look into the pipelines of some projects, they just delete the .gcda output and, if the resulting code differs, often retry the build or fall back to other methods.
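For reference, the usual two-stage GCC PGO flow looks roughly like this (file names illustrative); the intermediate .gcda files are the unreproducible part, while the final binary usually is not:

    gcc -O2 -fprofile-generate -o myprog main.c
    ./myprog < training-input.txt      # writes *.gcda profile data
    gcc -O2 -fprofile-use -o myprog main.c
    rm -f *.gcda                       # profiles discarded; only the binary gets compared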
While there are no ideal solutions, one that seems to work fairly well, assuming the upstream is doing reproducible builds, is to vendor the code, build a reproducible build to validate that vendored code, then enable optimizations.
But I get that not everyone agrees that the value of reproducibility is primarily avoiding attacks on build infrastructure.
However, reproducible builds have nothing to do with MSO model checking etc., like some have claimed. Much of it is just deleting non-deterministic data, as you can see here with Debian's strip-nondeterminism, which Fedora copied:
https://salsa.debian.org/reproducible-builds/strip-nondeterm...
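A minimal sketch of that normalization step (artifact name is illustrative); SOURCE_DATE_EPOCH is the standard variable that build tools honor for clamping embedded timestamps:

    export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)   # pin timestamps to the last commit
    strip-nondeterminism foo.jar                          # normalize timestamps/ordering inside the archive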
As increasing the granularity of address-space randomization at compile and link time is easier than at the start of program execution, there will obviously be a cost (one that is more than paid for by the reduction in supply-chain risk, IMHO): reduced entropy for address randomization, which does increase the risk of ROP-style attacks.
Regaining that entropy at compile and link time, if it is practical to recompile or vendor packages, may be worth the effort in some situations; it's probably best to do real PGO at that time too, IMHO.
This reduces entropy across binaries and may enable reliable detection of base addresses or differentiation of gadgets in text regions.
It is all tradeoffs.
But remember that a known phrase is how Enigma was cracked.
Does the profiler not output a hprof file or whatever, which is the input to the compiler making the release binary? Why not just store that?
Doesn't seem like a big issue to me. GCC doesn't even support multithreaded compilation; in the C world, parallelism comes from compiling multiple translation units in parallel, not from compiling any one of them with multiple threads.
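i.e. the parallelism lives in the build system, not in the compiler:

    make -j"$(nproc)"   # compile many translation units concurrently; each compiler invocation is single-threaded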
Related news from March https://news.ycombinator.com/item?id=43484520 (Debian bookworm live images now fully reproducible)
If they just started quarantining the long tail of obscure packages, then people would get upset. And failing to be 100% reproducible will make a subset of users upset. Lose-lose proposition there, given that intelligent users could just consciously avoid packages that aren't passing reproducibility tests.
100% reproducibility is a good goal, but as long as the ubiquitous packages are reproducible then that is probably going to cover most. Would be interesting to provide an easy way to disallow non-reproducible packages.
I'm sure one day they will be able to make it a requirement for inclusion into the official repos.
I think the last twenty years of quasi-marketing/sales/recruiting DevRel roles have pushed a narrative of frictionless development, while on the flip side security and correctness have mostly taken a back seat (special industries aside).
I think it's a result of the massive market growth, but I so welcome the pendulum swinging back a little bit. Typo squatting packages being a concern at the same time as speculative execution exploits shows mind bending immaturity.
Dependency management tools are tools that come about because it's easier and more natural for a programmer to write some code than solve a bigger problem. Easier to write a tool than write your own version of something or clean up a complex set of dependencies.
You can get secure and easy-to-use tools, but they typically have to be really simple things.
There are good middle grounds, but most package managers don't even acknowledge other concerns as valid.
Security is good, but occasionally I wonder if technical people don't imagine fantastical scenarios of evil masterminds doing something with the data and managing to rule the world.
While in reality, at least over the last 5 years, there are so many leaders (and people) doing and saying things so plainly stupid that I feel we should be more afraid of stupid people than of hackers.
Society works by agreements and laws, not by (absolute) secrecy.
There are of course scenarios, like the electrical grid stopping for days, people being killed remotely in hospitals, or nuclear plants exploding, that would have a different impact, and we might get there; it just has not happened yet.
It’s similar to how most people are distressed after a break-in, because they considered their home to be a private space, even though the lock manufacturer never claimed 100% security (or the thieves simply bypassed the locks by smashing a window).
Agreements and laws don’t solve that problem, because thieves already aren’t stopped by those.
I don't think security was traded away for convenience. Everything started with convenience, and security has been trying to gain ground ever since.
>happen for people to start taking security seriously
Laws with enforced and non-trivial consequences are the only thing that will force people to take security seriously. And even then, most probably still won't.
If Linux had evolved a more sensible system and someone came along and suggested "no actually I think each distro should have its own package format and they should all be responsible for packaging all software in the world, and they should use old versions too for stability" they would rightly be laughed out of the room.
To get to that world, we developers would have to give up making breaking changes.
We can’t have any “your python 2 code doesn’t work on python 3” nonsense.
Should we stop making breaking changes? Maybe. Will we? No.
This only happens because distros insist on shipping Python and then everyone insists on using that Python to run their software.
In an alternate world everybody would just ship their own Python with their own app and not have that problem. That's basically how Windows solves this.
Of course I grew up when hard drives were not affordable by normal people - my parents had to save for months to get me a floppy drive.
Having every package as part of a distribution is immensely useful. You can declaratively define your whole system with all software. I can roll out a desktop, development VM or server within 5 minutes and it’s fully configured.
Yeah, because they allow anyone to contribute with little oversight. As Lance Vick wrote[1], "Nixpkgs is the NPM of Linux." And Solène Rapenne wrote[2], "It is quite easy to get nixpkgs commit access, a supply chain attack would be easy to achieve in my opinion: there are so many commits done that it is impossible for a trustable group to review everything, and there are too many contributors to be sure they are all trustable."
[1] https://news.ycombinator.com/item?id=34105784
[2] https://web.archive.org/web/20240429013622/https://dataswamp...
Of course, Debian developers/maintainers are vetted more. But an intentional compromise in nixpkgs would be much more visible than in Debian, NPM, PyPI or crates.io.
There are currently a gazillion forks, some being forks of forks because they weren't considered culturally pure enough for the culturally purged fork.
Hopefully Determinate Systems or Ekela can get some real maturity and corporate funding into the system and pull the whole thing out of the quagmire.
I agree that the infighting is not nice. But to be honest, when you just use NixOS and submit PRs, you do not really notice them. It's not like people are fighting them in the actual PRs to nixpkgs.
Ironically enough the closest comparison I could make is driving a Tesla. Even if the product is great, you're supporting an organisation that is the opposite.
I think the Nix team will continue to slowly chase away competent people until the rot makes the whole thing wither, at which point everyone switches their upstream over to Determinate Systems' open core. Although I'm hoping DS will ultimately go the RHEL-Fedora route.
This can't be real. Are you sure it was something innocuous and not something bigoted?
https://discourse.nixos.org/t/breaking-doge-to-recommend-nix...
Same author quoted the original text on their Reddit thread and was mostly uncriticized there:
https://old.reddit.com/r/NixOS/comments/1joshae/breaking_dog...
I personally found it incredibly distasteful and also fairly representative of the quality of conversation you often get from some of the Nix community. "I'm not offensive, you're just thin-skinned, can't you take a joke," etc. is extremely common. You'll have to judge for yourself whether it's bigoted, a dog whistle, or neither.
I'm a former casual community member with modest open source work in that ecosystem (projects and handful of nixpkgs PRs) before I left permanently last spring. I no longer endorse its use for any purpose and I seek to replace every piece of it that I was using.
I still hear about the ways they continue to royally fuck up their governance and make negligible progress on detoxifying the culture. It took them until last fucking week to ban Anduril from making hiring posts on the same official forum.
>I personally found it incredibly distasteful
How? Why? It's clearly satire, written in the style of The Onion.
>and also fairly representative of the quality of conversation you often get from some of the Nix community
Good satire? At least some members aren't brainrotted out to the point of no return.
> I'm not offensive, you're just thin skinned, can't you take a joke, etc.
It's clearly not offensive and if that upset you, you clearly have thin skin and can't take the blandest of jokes. Histrionic.
>I no longer endorse its use for any purpose and I seek to replace every piece of it that I was using.
I will also tell others not to use Nix after reading that. The community is indeed too toxic.
>I still hear about the ways they continue to royally fuck up their governance and make negligible progress on detoxifying the culture.
They won't detoxify until they remove all the weak neurotic leftist activists with weird fetishes for "underrepresented minorities."
>It took them until last fucking week to ban Anduril from making hiring posts on the same official forum.
I'm not sure who that is or why it's an issue, but I assume it's something only leftists cry about.
I present to you sibling comment posted slightly before yours: https://news.ycombinator.com/item?id=43655093
They do.
I brought up Arch because they get a lot of hate for exactly doing that and consequently pulling people's legs out from under them.
A prime example of this is what the Bottles dev team has done.
It isn't an easy problem to solve.
System perl is actually good. It's too bad the Linux vendors don't bother with system versions of newer languages.
App store software is excruciatingly vetted, though. Apple and Google spend far, far, FAR more on validating the software they ship to customers than Fedora or Canonical, and it's not remotely close.
It only looks like "randos" because the armies of auditors and datacenters of validation software are hidden behind the paywall.
Also Windows and Mac have existed for decades and there's zero vetting there. Yeah, malware exists, but it's easy to avoid and easily outweighed by the benefit of actually being able to get up-to-date software from anywhere.
The vetting on Mac is that any unsigned software will show a scary warning and make your users have to dig into the security options in Settings to get the software to open.
This isn't really proactive, but it means that if you ship malware, Microsoft/Apple can revoke your certificate.
If you're interested in something similar to this distribution model on Linux, I would check out Flatpak. It's similar to how distribution works on Windows/Mac with the added benefit that updates are handled centrally (so you don't need to write auto-update functionality into each program) and that all programs are manually vetted both before they go up on Flathub and when they change any permissions. It also doesn't cost any money to list software, unlike the "no scary warnings" distribution options for both Windows and Mac.
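For the curious, the whole Flathub flow is a few commands (the app ID shown is just an example):

    flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.mozilla.firefox
    flatpak update    # every app updates through the same central mechanism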
Isn't that only for applications? All the system software are provided and vetted by the OS developer.
The nice thing about Debian is that you can have 2 full years of routine maintenance while getting ready for the next big update. The main issue is upstream developers shipping bug fixes and feature updates in the same patch.
Hahah! ...they don't. They really don't, man. They do have procedures in place that makes them look like they do, though; I'll give you that.
The value in Nix comes from the package set, nixpkgs. What is revolutionary is how nixpkgs builds a Linux distribution declaratively, and reproducibly, from source through purely functional expressions. However, nixpkgs is almost an entire universe unto itself, and it is generally incompatible with the way any other distribution would handle things, so it would be no use to Fedora, Debian, and others.
With Docker it turned out to be relatively straightforward. With Nix, even when running it in a Linux ARM VM, we tried but just gave up.
Ideally, a distro maintainer would come across a project packaged with nix and think:
> Oh good, the app dev has taken extra steps to make life easy for me.
As-is, I don't think that's the case. You can add a flake output to your project which builds an .rpm or a .deb file, but it's not commonly done.
I'm guessing that most of the time, distro maintainers would instead hook directly into a language-specific build tool like cmake or cargo and ignore the nix stuff. They benefit from nix only indirectly in cases where it has prevented the app dev from doing crazy things in their build (or at least has made that craziness explicit, versus some kind of works-on-my-machine accident or some kind of nothing-to-see-here skulduggery).
If we want to nixify the world I think we should focus less on talking people out of using package managers which they like and more on making the underlying packages more uniform.
Nix wasn't mentioned (I'm the author) because it really isn't relevant here -- the comparable distributions, when discussing what Fedora is doing, are Debian and other distributions that use similar packaging schemes and such.
Quoting the article:
> Irreproducible bits in packages are quite often "caused by an error or sloppiness in the code". For example, dependence on hardware architecture in architecture-independent (noarch) packages is "almost always unwanted and/or a bug", and reproducibility tests can uncover those bugs.
This is the sort of thing that nix is good at guarding against, and it's convenient that it doesn't require users to engage with the underlying toolchain if they're unfamiliar with it.
For instance I can use the command below to build helix at a certain commit without even knowing that it's a rust package. Although it doesn't guarantee all aspects of repeatability, it will fail if the build depends on any bits for which a hash is not known ahead of time, which gets you half way there I think.
nix build github:helix-editor/helix/340934db92aea902a61b9f79b9e6f4bd15111044
Used in this way, can nix help Fedora's reproducibility efforts? Or does it appear to Fedora as a superfluous layer to be stripped away so that they can plug into cargo more directly?

A lot of Nix-based package builds will burn Nix store paths directly into the binary. If you are lucky it's only the rpath and you can strip it, but in some cases other Nix store paths end up in the binary. Seems pretty useless to Fedora.
Besides many of the difficult issues are not solved by Nix either. (E.g. build non-determinism by ordering differences due to the use of a hashmap somewhere in the build.)
I didn't know that, sounds like a bug. Maybe something can be done to make it easier to know that this is the case for your build.
I'd still think that by refusing to build things with unspecified inputs, nix prunes a whole category of problems away which then don't bite the distro maintainers, but maybe that's wishful thinking.
I'll continue to use it because it's nice to come to a project I haven't worked on in a few years and not have to think about whether it's going to now work on this machine or figure out what the underlying language-specific commands are--but if there were ways to tweak things so that others have this feeling also, I'd like to know them.
It's a feature. E.g. if a binary needs to load data files, it needs to know the full path, or you are back to an FHS filesystem layout (which has a lot of issues that Nix tries to solve).
> I'd still think that by refusing to build things with unspecified inputs,
I haven't followed development of traditional Linux distributions, but I am pretty sure that they also build in minimal sandboxes that only contain specified dependencies. See e.g. Mock: https://github.com/rpm-software-management/mock
The article you linked is very clear that both qualitatively and quantitatively, NixOS has achieved high degrees of reproducibility, and it even explicitly rejects the possibility of assessing absolute reproducibility.
NixOS may not be the absolute leader here (that's probably stagex, or GuixSD if you limit yourself to more practical distros with large package collections), but it is indeed very good.
Did you mean to link to a different article?
Could you comment on how stagex is? It looks like it might indeed be best in class, but I've hardly heard it mentioned.
https://stagex.tools/
https://bootstrappable.org/
https://lwn.net/Articles/983340/
• % of deployed systems which consist only of reproducibly built packages
• % of commonly downloaded disk images (install media, live media, VM images, etc.) which consist only of reproducibly built packages
• total # of reproducibly built packages available
• comparative measures of what NixOS is doing right like: of packages that are reproducibly built in some distros but not others, how many are built reproducibly in NixOS
• binary bootstrap size (smaller is better, obviously)
It's really not difficult to think of meaningful ways that reproducibility of different distros might be compared, even quantitatively.

By any conceivable metric Nix really is ahead of the pack.
Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix.
Arch hovers around 87%-90% depending on regressions. https://reproducible.archlinux.org/
Debian reproduces 91%-95% of their packages (architecture dependent) https://reproduce.debian.net/
> Disclaimer: I have no affiliation with Nix, Fedora, Debian etc. I just recognize that Nix has done a lot of hard work in this space & Fedora + Debian jumping onto this is in no small part thanks to the path shown by Nix
This is completely the wrong way around.
Debian spearheaded the Reproducible Builds efforts in 2016 with contributions from SUSE, Fedora and Arch. NixOS got onto this as well but has seen less progress until the past 4-5 years.
The NixOS efforts owes the Debian project all their thanks.
> Arch Linux is 87.7% reproducible with 1794 bad 0 unknown and 12762 good packages.
That's < 15k packages. Nix by comparison has ~100k total packages they are trying to make reproducible and has about 85% of them reproducible. Same goes for Debian - ~37k packages tracked for reproducible builds. One way to lie with percentages is when the absolute numbers are so disparate.
> This is completely the wrong way around. Debian spearheaded the Reproducible Builds efforts in 2016 with contributions from SUSE, Fedora and Arch. NixOS got onto this as well but has seen less progress until the past 4-5 years. The NixOS efforts owes the Debian project all their thanks.
Debian organized the broader effort across Linux distros. However the Nix project was designed from the ground up around reproducibility. It also pioneered architectural approaches that other systems have tried to emulate since. I think you're grossly misunderstanding the role Nix played in this effort.
That's not a lie. That is the package target. The `nixpkgs` repository, in the same vein, packages a huge number of source archives and repackages entire ecosystems into its own repository. This greatly inflates the number of packages; you can't look at the flat numbers.
> However the Nix project was designed from the ground up around reproducibility.
It wasn't.
> It also pioneered architectural approaches that other systems have tried to emulate since.
This has had no bearing, and you are greatly overestimating the technical details of Nix here. It was fundamentally invented in 2002, and things have progressed since then. `rpath` hacking really is not magic.
> I think you're grossly misunderstanding the role Nix played in this effort.
I've been contributing to the Reproducible Builds effort since 2018.
However, this doesn't say much about build artifact reproducibility. A package set could always evaluate to the same drvs, but if all the source packages choose what to build based on random() > 0.5, then there is no reproducibility of build artifacts at all. This type of reproducibility is spearheaded by Debian and Arch more than Nix.
Both notions are useful for different purposes and nix is not particularly good at the first one.
They have too many people familiar with the current approaches.
For what it's worth, there's also Guix, which is literally a clone of Nix but part of the GNU project, so it only uses free software and opts for Guile instead of a custom DSL for configuration. It wasn't a pre-existing distro that changed, of course.
What will happen is concepts from Nix will slowly get absorbed into other, more user-friendly tooling while Nix circles the complexity drain
> This definition excludes signatures and some metadata and focuses solely on the payload of packaged files in a given RPM:

> A build is reproducible if given the same source code, build environment and build instructions, and metadata from the build artifacts, any party can recreate copies of the artifacts that are identical except for the signatures and parts of metadata.
> The contents, however, should still be "bit-by-bit" identical, even though that phrase does not turn up in Fedora's definition.
So, according to the literal interpretation of the article, signatures inside the payload (e.g., files that are signed using an ephemeral key during the build, NOT the overall RPM signature) are still a self-contradictory area and IMHO constitute a possibly-valid reason for not reaching 100% payload reproducibility.
IIRC, Debian packages themselves are not signed, but the apt metadata is signed.
type -p apt >/dev/null && { apt install -y debsums; debsums | grep -v 'OK$'; }
type -p rpm >/dev/null && rpm -Va  # --verify --all
dnf reads .repo files from /etc/yum.repos.d/ [1] which have various gpg options; here's an /etc/yum.repos.d/fedora-updates.repo:

[updates]
name=Fedora $releasever - $basearch - Updates
#baseurl=http://download.example/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
countme=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
metadata_expire=6h
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
From the dnf conf docs [1], there are actually even more per-repo gpg options:

gpgkey
gpgkey_dns_verification
repo_gpgcheck
localpkg_gpgcheck
gpgcheck
1. https://dnf.readthedocs.io/en/latest/conf_ref.html#repo-opti...
2. https://docs.ansible.com/ansible/latest/collections/ansible/... lists a gpgcakey parameter for the ansible.builtin.yum_repository module
For Debian, Ubuntu, Raspberry Pi OS and other dpkg .deb and apt distros:
man sources.list
man sources.list | grep -i keyring -C 10
# trusted:
# signed-by:
# /etc/apt/ trusted.gpg.d/
man apt-secure
man apt-key
apt-key help
less "$(type -p apt-key)"
signing-apt-repo-faq:
https://github.com/crystall1nedev/signing-apt-repo-faq

From "New requirements for APT repository signing in 24.04" (2024) https://discourse.ubuntu.com/t/new-requirements-for-apt-repo... :
> In Ubuntu 24.04, APT will require repositories to be signed using one of the following public key algorithms: [ RSA with at least 2048-bit keys, Ed25519, Ed448 ]
> This has been made possible thanks to recent work in GnuPG 2.4 82 by Werner Koch to allow us to specify a “public key algorithm assertion” in APT when calling the gpgv tool for verifying repositories.
Also, maximally opt-in sandboxes for graphical applications have been possible for a while. Just use Podman and only mount your Wayland socket + any working files.
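A rough sketch of that opt-in sandbox (image name and runtime paths are illustrative):

    podman run --rm -it \
      --userns=keep-id \
      -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
      -e XDG_RUNTIME_DIR=/tmp/runtime \
      -v "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/runtime/$WAYLAND_DISPLAY" \
      -v "$HOME/work:/work" \
      some-gui-app-image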
If you market it that way. Plenty of Linux users say they care about security, don't want malware, etc. This is a step towards those desires. Users have been conditioned to use tools badly designed for security for decades, so there will be some growing pains, but it will get worse the longer people wait.
>Just use Podman and only mount your Wayland socket + any working files.
This won't work for the average user. Security needs to be accessible.
If only it was that simple…
a) effectively useless
or b) makes me want to throw my computer through the window and replace it with a 1990's device (still more useful than your average Android).
When it comes to community efforts, it’s rarely the case that all things have opportunity cost—people who contribute effort for X would not have necessarily done so for Y.
The Fedora Project is of course not a purely community effort, so I don’t know exactly how that applies here. But just wanted to point out that prioritization and opportunity cost don’t always work like you suggested.
Apart from that, any hardening in Fedora can be utilized inside a Fedora VM on Qubes. Qubes doesn't force you to use VMs with no isolation inside.
Two official examples of how one could benefit from Qubes:
https://www.qubes-os.org/news/2022/10/28/how-to-organize-you...
and
https://blog.invisiblethings.org/2011/03/13/partitioning-my-...
See also: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15
Do you mean Flatpaks or something else?