That said, I haven't tried Gentoo with binaries from official repositories yet. Maybe that makes it less time-consuming to keep your system up to date.
While other distributions are struggling to bootstrap their package repositories for new ISAs and waiting for build farms to catch up, Gentoo's source-based nature makes it architecture-agnostic by definition. I applaud the riscv team for having achieved parity with amd64 for the @system set. This proves that the meta-distribution model is the only scalable way to handle the explosion of hardware diversity we are seeing post-2025. If you are building an embedded platform or working on custom silicon, Gentoo is a top-tier choice. You cross-compile the stage1 and portage handles the rest.
Since you've been on the ride since '04, I'm curious to hear your thoughts. How do you feel the maintenance burden compares today versus the GCC 3.x era? With the modern binhost fallback and the improvements in portage, I feel like we now spend less time fighting rebuild loops than back then, but I wonder if long-time users feel the same.
I'm another one on it since the same era :)
In general stable has become _really_ stable, and unstable is still mostly usable without major hiccups. My maintenance burden is limited nowadays compared to 10y ago - pretty much running `emerge -uDN @world --quiet --keep-going` and fixing issues if any. Maybe once a month I get package failures, but I run an llvm+libcxx system and also run package tests, so I likely get more issues than the average user on GCC.
For me these days it's not about the speed anymore of course, but really the customization options and the ability to build pretty much anything I need locally. I also really like the fact that ebuilds are basically bash scripts, and if I need to further customize or reproduce something I can literally copy-paste commands from the package manager in my local folder.
The project has successfully implemented a lot of by-default optimizations and best practices, and in general I feel the codebases for system packages have matured to the point where it's odd to run into internal compiler errors, weird dependency issues, whole-world rebuilds etc. From my point of view it also helped a lot that many compilers began enforcing more modern and stricter C/C++ standards over time, and at the same time we got GitHub, CI workflows, better testing tools etc.
I run `emerge -e1 @world` maybe once a year just to shake out stuff lurking in the shadows (like stuff compiled with clang 19 vs clang 21), but it's really normally not needed anymore. The configuration stays pretty much untouched unless I want to enable a new USE for a new package I'm installing.
It's been years since I had a build failure, and that's even though I accept ~amd64 for several packages (with GCC).
I tried Gentoo around the time that OP started using it, and I also really liked that aspect of it. Most package managers really struggle with this, and when there is configuration, the default is usually "all features enabled". So, when you want to install, say, ffmpeg on Debian, it pulls in a tree of over 250 (!!) dependency packages. Even if you just wanted to use it once to convert a .mp4 container into .mkv.
Additionally, Gentoo has become way more strict with USE-flag dependencies, and it also checks whether binaries depend on old libraries and doesn't remove those libraries when updating a package, so the "app depends on old libstdc++" breakage doesn't happen anymore. The old version is then automatically removed once nothing needs it.
I have been running Gentoo since before '04, continuously, and things pretty much just work. I would be willing to put money on spending less time "managing my OS" than most people who run other systems such as macOS, Windows, Debian etc. Sure, my CPU gets to compile a lot, but that's about it.
And yes, the "--omg-optimize" angle was never really the selling point, but rather the USE flags, where there's complete control. Pretty much nothing else comes close, and that's why Gentoo is awesome.
Other distros don't support RISC-V because nobody has taken the time to bother with it, since the hardware base is almost nonexistent.
It's crazy how projects this large and influential can get by on so little cash. Of course a lot of people are donating their very valuable labour to the project, but the ROI from Gentoo is incredible compared to what it costs to do anything in commercial software.
For example:
- Red Hat Identity Management -> FreeIPA (i.e. Active Directory for Linux)
- Red Hat Satellite -> The Foreman + Katello
- Ansible ... Ansible.
- Red Hat OpenShift -> OKD
- And more I'm not going to list.

One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.
Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).
Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.
If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.
It looks like they're second to Intel, at least by the LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...
Back in the day when the boxes were on display in brick-and-mortar stores, SuSE was a great way to get up and running with Linux.
It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.
The breadth of what they're doing seems unparalleled: they have a rolling release (Tumbleweed), a delayed rolling release (Slowroll), which is pretty unique in and of itself, and a point release (Leap), and both Tumbleweed and Leap are available in immutable form as well (MicroOS and Leap Micro respectively). All of the aforementioned come with a broad choice of desktops, or as server-focused minimal environments with an impressively small footprint without unreasonable tradeoffs. ...if you multiply out all of those choices, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.
As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.
...so, not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, and it was between Red Hat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.
SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)
Red Hat also has a nasty habit of pushing their decisions onto the other distributions; e.g.
- systemd
- pulseaudio (this one was more Fedora IIRC)
- Wayland
- Pipewire (which, to be fair, wasn't terrible by the time I tried it)
I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We'd better make the switch; we don't want our ::checks notes:: competitor mad at us.
This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.
And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.
And what exactly is incomprehensible? What exactly is it that they’re doing to the Linux desktop that make it so that people can’t fix their own problems? Isn’t the whole selling point of Rocky and Alma by most integrators is that it’s so easy you don’t need red hat to support it?
So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.
To some extent Valve. They have to, since the Steam Deck's desktop experience depends on the "Linux desktop" being a good experience.
Tbh it feels like NixOS is convenient in a large part because of systemd and all the other crap you have to wire together for a usable (read compatible) Linux desktop. Better to have a fat programming language, runtime and collection of packages which exposes one declarative interface.
Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Red Hat seems to be really helping with amplifying the downsides, though, by providing the money to make a few mediocre tools absurdly big.
It is the Microsoft of the Linux world.
Gentoo also runs the backend infra of Sony's PlayStation cloud gaming service.
Anecdotal evidence claims it also used to run the NASDAQ.
[0] https://www.pcworld.com/article/481872/how_linux_mastered_wa...
The game changer for me was using my NAS as a build host for all my machines. It has enough memory and cores to compile on 32 threads. A full install from a stage3 on my ageing Thinkpad X13 or SBCs would fry the poor things, and just isn't feasible to maintain.
I have systemd-nspawn containers for the different microarchitectures and mount their /var/cache/binpkgs and /etc/portage dirs over NFS on the target machines. The Thinkpad can now do an empty tree emerge in like an hour and leaving out the bdeps cuts down on about 150 packages.
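For anyone curious what that setup looks like in practice, here's a minimal sketch of the client side, assuming a host named `nas` that builds with `FEATURES="buildpkg"` (hostname, paths, and the jobs value are illustrative, not from the comment above):

```shell
# /etc/fstab on the target machine -- mount the build host's package
# cache and portage config read-only over NFS
nas:/var/cache/binpkgs  /var/cache/binpkgs  nfs  ro,noatime  0 0
nas:/etc/portage        /etc/portage        nfs  ro,noatime  0 0
```

```shell
# then update from prebuilt packages only, skipping build-time deps:
emerge -uDN --usepkgonly --with-bdeps=n @world
```

The key point is that the nspawn container on the build host must share the target's CHOST, profile, and USE flags, otherwise portage will refuse (or rebuild) the binpkgs.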
Despite being focused on OpenRC, I have had the most pleasant experience with systemd on Gentoo over all the other distros I've tried.
I have this dream of moving all my ubuntu servers to gentoo but I don't have a clear enough picture of how to centralize management of a fleet of gentoo machines
On some notebooks, a few of which were getting on in years, updating was simply too resource-intensive. GHC alone, for example, often took 12+ hours to compile on the older ones.
I will say though that my valgrind is broken due to -march=native. :)
Very cool to see that it's still going strong - I remember managing many machines at scale was a bit of a challenge, especially keeping ahead of vulnerabilities.
I wish I had more time I could dedicate to maintaining my system, I'm marooned on Arch due to lack of time, such a shame.
https://blog.nawaz.org/posts/2023/May/20-years-of-gentoo/
Prior HN discussion: https://news.ycombinator.com/item?id=35989311
Edit: Curious, why the downvote?
Id Software provided a Doom 3 Linux client when the game was first released. I found Doom 3 ran better on a custom built Gentoo Linux system compared to Windows XP.
Are you looking at Gentoo to maximize performance, compiling everything with custom build parameters and a custom kernel configuration versus pre-built binaries and a generic kernel loaded with modules?
Custom Gentoo just adds more time spent waiting to install software upgrades. It is like having all your Arch packages provided only by the AUR. There is also a chance the build will fail and the parameters might need to be changed. The majority of the time everything compiles without issue once the build parameters are figured out. It was rare when something did not.
Where you lose time is in trying to optimize your system and packages using the multitude of switches that Gentoo provides. If you're the OCD twiddler type, Gentoo can be both extremely satisfying and a major time sink.
Installation is done by booting a liveCD, manually partitioning your storage, unpacking a Gentoo STAGE3 archive, chrooting in it, doing basic configuration such as network, timezone, portage (package manager) base profile and servers, etc., compiling and installing a kernel and then rebooting into the new system.
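Condensed into commands, that procedure looks roughly like this (device names, stage3 filename, and package choices are examples; the real steps are in the Gentoo Handbook):

```shell
# boot the liveCD, then:
cfdisk /dev/sda                                  # partition manually
mkfs.ext4 /dev/sda2 && mount /dev/sda2 /mnt/gentoo
tar xpf stage3-amd64-*.tar.xz -C /mnt/gentoo \
    --xattrs-include='*.*' --numeric-owner       # unpack stage3
cp /etc/resolv.conf /mnt/gentoo/etc/             # keep DNS in the chroot
mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash                     # enter the new system
emerge-webrsync                                  # fetch the portage tree
eselect profile list                             # pick a base profile
emerge sys-kernel/gentoo-kernel                  # or configure your own
# ...timezone, locale, fstab, bootloader, then reboot
```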
Then you get to play with /etc/portage/make.conf, which is the root configuration of the package manager. You get to set CPU instruction sets (CPU_FLAGS_X86), gcc CFLAGS, make options (MAKEOPTS), video card targets, acceptable package licenses, global USE flags (those are simplified ./configure arguments that usually apply to several packages), which Apache modules get built, which qemu targets get built, etc. These are all env vars that portage (the package manager) uses to build packages for your system.
The more you use Gentoo, the more features of make.conf you discover. Never ending fun.
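To make that concrete, here is an illustrative minimal make.conf; the specific values (job count, flags, drivers) are examples, not recommendations:

```shell
# /etc/portage/make.conf -- illustrative example
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"                       # parallel make jobs
CPU_FLAGS_X86="aes avx avx2 sse4_2"  # see app-portage/cpuid2cpuflags
USE="wayland pipewire -gnome -kde"   # global USE flags
VIDEO_CARDS="amdgpu radeonsi"
ACCEPT_LICENSE="@FREE"
```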
Then, you start installing packages and updates (same procedure):
1) You start the update by reviewing USE flags for each added/updated package - several screens of dense text.
For example, PHP has these USE flags: https://packages.gentoo.org/packages/dev-lang/php - mouse hover to see what they do. You get to play with them in /etc/portage/package.use and there's no end to tweaking them.
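Per-package overrides are plain text files; a hypothetical example (the flags shown are illustrative):

```shell
# /etc/portage/package.use/custom
dev-lang/php mysqli curl -apache2    # enable/disable per package
>=media-video/ffmpeg-6 x264 -vaapi   # flags can be scoped to versions
```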
If you have any form of OCD, stay away from Gentoo or this will be your poison forever!
2) Then the compilation begins and that takes hours or days depending on what you install and uses a lot of CPU and either storage I/O or memory (if you have lots of memory, you can compile in a tmpfs a lot faster).
I'm not sure it is OK to compile updates on a live server, especially during busy hours, but Gentoo has alternatives, including binary packages (recently added, but their USE flags must match yours), building packages remotely on another system (distcc), even on a different arch (crossdev). You could run an ARM server and build packages for it on an x86 workstation. I didn't use "steve", so I can't tell you what wonderful things that tool can do, yet.
3) Depending on architecture, some less used packages may fail to compile. You get to manually debug that and submit bug reports. You can also add patches to /etc/portage/patches/<package> that will automatically be applied when the package is built, and that includes the kernel.
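The user-patch mechanism is just a directory convention; a sketch (package name and patch file are hypothetical):

```shell
# patches go in /etc/portage/patches/<category>/<package>/
mkdir -p /etc/portage/patches/app-misc/foo
cp fix-build.patch /etc/portage/patches/app-misc/foo/
emerge --oneshot app-misc/foo   # portage applies the patch during the build
```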
I recommend running emerge with --keep-going to have the package manager continue with the remaining packages after an error.
4) When each package is done compiling, it's installed automatically. There are no automatic reboots or anything. The files are replaced live, both executables and libraries. Running services continue to use old files from memory until you restart them or reboot manually - they will appear red/yellow in htop until you do.
There were a few times, very very few, when I had crashes in new packages that were successfully built. It only happened on armv7, which is a practically abandoned platform everywhere. In those cases you can revert to the old version and mask the bugged one to prevent it from being installed again next time.
5) Last step is to review the config changes. dispatch-conf will present a diff of all proposed changes to .ini and .cfg files for all updated packages. You get to review, accept, reject the changes or manually edit the files.
That's all. Simple. :)