I understand the downsides people see in systemd, but I have the feeling the huge upside is often overlooked.
> but I have the feeling the huge upside is often overlooked.
It is fine to objectively compare trade-offs. However, it has to be a fair comparison; we cannot start from "init manager", because systemd does a lot more, so how can a comparison to any software with less code be fair? runit doesn't do much more than initialization.
The objection to it is not to it as an init manager. To quote the description from the systemd site:
> systemd is a suite of basic building blocks for a Linux system
Funnily enough, I remember embracing systemd back when I was running Alpine as my server and had to write my own rc script, and boy did that quickly remind me of the awful times debugging sh on FreeBSD.
One does not need cron or systemd for scheduling tasks.
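For instance, a plain supervised shell loop does the job; a crude sketch, where the interval and the job are made up:

    #!/bin/sh
    # naive "scheduler": run a job once an hour, no cron and no timer units involved
    while true; do
        /usr/local/bin/do-backup || echo "do-backup failed" >&2   # hypothetical job
        sleep 3600
    done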
I don't want my init system to replace cron. I don't want it to manage logging. I don't want it to have any debugging capabilities. All of these things can be done with other programs, arguably in a much more flexible and robust way.
An init system should do one thing (well): manage system services. Within that context, it should start services on boot, keep them running in the background, and allow the user to create, stop, and start services. That's it. And it could be argued that even those responsibilities are too broad for a single program.
So I understand that you and many others like that systemd provides solutions to all of these tasks in an integrated way. But you should also understand that this does in fact go against the Unix philosophy, of small, independent, but composable programs with a single responsibility. It "proposes" alternatives to many other tools for no particular reason, until users are effectively using GNU/systemd/Linux.
And, yes, I know that technically systemd is not a monolith and is composed of many programs, but that is a moot point. It is a single project, maintained by a strongly opinionated team, and given the importance of it, most distros go all-in, so users are strongly recommended, if not forced, to use all of its programs. In many cases it's not even possible to use one individual systemd program independently from another. This is why systemd is seen as a kraken that takes over the entire system if you do decide to use one part of it.
The Unix philosophy is not an end goal either. It's not even something well defined. Everyone seems to have their own view on what it is. I personally take the "everything is a file" and "do one thing well and be composable" rules as a guideline, an ideal to consider when designing stuff, but not as a strict thing to adhere to. It might be something that's nice to have in some contexts and something that's useless or even counterproductive in others.
What I mean is that I take "does not follow the Unix philosophy" as something to look into to find potential improvements or design issues, but not as a blocker or a counterpoint in itself.
To me, the Unix philosophy discussion is quite moot. Those discussions are often very vague. I don't care much whether systemd follows the Unix philosophy or not. I'm more interested in what actual problems this causes in practice.
You do, however, point out something practical here:
> An init system should do one thing (well): manage system services
I suppose one could consider that to manage system services well, you have to manage "everything". I also suppose systemd's scope is way bigger than "managing services"; they do want to "fix/figure it all". It seems reasonable to me not to agree with either of these things.
I do believe the uniformisation systemd causes is a good thing, but I absolutely understand that the big scope can be seen as an issue, and I almost fully agree with your last paragraph. I would object to the claim that "systemd is not a monolith and is composed of many programs" being a moot point: this modularity still means that you can replace individual systemd programs with your own implementations if need be…
… as long as you provide the expected features / APIs, yes, you are totally forced into this indeed. systemd is a de facto API. It brings / forces standardization at the cost of diversity. It broadens the standardization that comes with UNIX/POSIX and XDG. I'm sure this can be criticized in a few ways: the API design, the scope, the featureset, the way the project is led…
The alternative to systemd is non-existent standardization, with each alternative designing stuff its own way. For better and for worse. I can see how systemd can be criticized when we are in "the better" cases. I can personally easily see the worse side, where several projects (for instance desktop environments) would each have to implement features that come with systemd. And programs on top of these environments now have to implement the APIs of each desktop environment to be well integrated.
This is more work for everyone.
I guess this is a diversity vs efficiency balance to strike and we don't all see it at the same place.
I suppose another alternative would be to have different people working on different implementations that are then grouped in some common "system core" package or set of standards that everyone adopts. I'd probably be happy with that, if this is at all possible.
You're right. But what I take issue with is that systemd authors deliberately decide to go against it. We know because there are other init systems that do follow these design principles much more closely.
Of course, an init system is not trivial, and is a special program that must be given additional permissions over most user space programs. But the problem with systemd is that it's not just an init system. It is a collection of tools that also manages logging, networking, DNS resolution, containers, and a bunch of other system tasks, which, in my opinion, it has no business managing. When you add to this the fact that these programs are all interdependent in some way, and that I can't use e.g. `journald` without systemd itself, it really drives home the point that this is an attempt to establish a cohesive and centralized system, rather than rely on a collection of independent but composable tools, most of which already exist. So I get why some people would prefer this, particularly if they're not already experienced with existing tools, but it's also no surprise that experienced Linux geeks would scoff at this.
In my experience, systemd doesn't give me anything that I can't do well with other tools. And instead of having the choice to use a tool of my preference for each individual task, I'm forced to use a gargantuan system designed by a single group of people. Whether or not this ultimately makes my life easier, it goes against the primary reason why I choose to use Linux in the first place. If I wanted someone else to make decisions about how I use my computer, I have Windows and macOS for that.
Tangentially, this is also why I have a love-hate relationship with NixOS. As much as I appreciate reproducibility, atomic upgrades and rollbacks, and having a fully declarative system, its authors insist on managing every part of my system with Nix, which is completely insane to me. So, for example, it tries to replace every single package manager in existence, whereas I much prefer using something like `mise` to manage my development environments. Technically, I still can and do that, but it's certainly not the "Nix way".
Interoperability and composability are the core tenets of the Unix philosophy IMO. It's this that allows me to use programs written decades ago together with programs written today, without either tool being aware of each other. In contrast, tools that try to take over my machine forcing their own UIs on me—no matter how supposedly superior they might be—are hostile to my overall computing experience.
Which one(s) would you recommend/suggest?
I felt like systemd was an epiphany. Software doesn't need to be a collection of simple tools that do one thing really well. You can have one tool that does everything shittily, the pdf reader of init if you will, and that's systemd. The author went on to do brilliant work with pulseaudio, you know, the whole /dev/dsp "everything including sound is a file" thing, oof. Let's make it a weird complex server process, oh, and let's make another sound system after that too.
I was very happy to see Lennart Poettering had joined Microsoft to bring his brilliance to Windows. I'm sure he's just cranking out masterpiece after masterpiece of design for them. I actually switched from unix to windows after being so inspired by the tremendous quality and sensical design of both pulseaudio and systemd. Oh, and both are very reliable, simple, and intuitive.
- You claim that the Unix philosophy only survives in the GNU utils. Well, that shows me a lack of understanding of what the philosophy is about. Everything is a file is similar to the OOP approach of everything is an object. I recommend watching Ken Thompson when he was young here: https://www.youtube.com/watch?v=tc4ROCJYbm0
It does not capture all of the UNIX philosophy, but it does expand on the reasons why that philosophy works well. The philosophy is bigger than that, of course, but it helps serve as a counter-argument.
- The example of "writing your own script" is no different to a non-systemd system. Why would a script work or not work based on systemd? You mention debugging a shell script on FreeBSD as an example. Well, others use proper languages such as ruby or python. Everything that can be done via systemd I can do without it too and, in fact, have been doing so. Ruby essentially runs my system as the primary layer on top of everything (granted, it runs on Linux, and thus mostly C, and ruby is at the end of the day a syntactic wrapper over C). I never understood why systemd would matter. I read the advertisements of the systemd devs - none of this applied to my use cases, so I never "embraced" systemd, simply because I never needed it. I did point out the increased complexity of it as a negative trade-off, and this has been true to this day.
- Former "hater" also implies that criticism is not based on rationale and logic. This is not the case either. It's funny to me how the pro-systemd camp isn't really able to come up with compelling arguments on their own.
I would say your comment either shows a lack of understanding, or that you completely missed the point.
> The example of "writing your own script" is no different to a non-systemd system. Why would a script work or not work based on systemd?
Of course, you can write the service itself in Python or Ruby or whatever, regardless of the service manager. The point is that with systemd, or upstart, or other service managers like this that make things more declarative (launchd?), you don't have to write a script to manage the service at all.
On systemd, you declare which services yours depends on, how to run it, which user it should run as, and many things are handled for you, including many security mechanisms you don't need to think about or provide further config for, stuff that would be a mess to handle the traditional way of writing a custom rc script per service.
The problem is not being able to write in languages like ruby or python. It's having to write something at all.
systemd makes declarative many things that were historically procedural code, often painful to debug.
This eases distro maintenance and I suppose is one of the top reasons most distros adopted it.
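To make that concrete, a minimal sketch of such a unit; the service name, binary path, and user are made up:

    # /etc/systemd/system/myapp.service  (hypothetical unit)
    [Unit]
    Description=Example service
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp --serve
    User=myapp
    Restart=on-failure
    # a couple of the hardening knobs mentioned above
    NoNewPrivileges=yes
    ProtectSystem=strict

    [Install]
    WantedBy=multi-user.target

Enable it with `systemctl enable --now myapp.service` and read its logs with `journalctl -u myapp`; no per-service rc script anywhere.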
wrt the Unix philosophy, discussions about it related to systemd are often (always?) too abstract to be useful, I'd suggest talking about specific problematic points.
I prefer writing

    cat something|filter step 1|filter step 2|filter step 3

instead of

    filter step 1 something|filter step 2|filter step 3

especially when confronted with filters which need their input to be fed in different ways:

    filter step 1 < something
    filter step 1 -i something
    filter something step 1
    cat something|filter step 1

It may be less 'pure' to use cat as the first step in a pipe, but who cares?

Note that you can also feed the first filter with a redirection and drop cat entirely:

    < something filter step 1|filter step 2|filter step 3
(just pointing this out in the hope it can be of interest to someone reading the thread, I don't personally care that much about UUOC - "useless" is quite subjective, one can still reasonably find the cat version more readable). I think it's a good example of when it's worth straying from the philosophy.
Yes, it 'prints' the file to stdout which is consumed by the pipe and turned into the input for the next command in the pipeline. It doesn't matter whether you're only 'printing' a single file or a bunch of them.
As for GNU utils and the examples you mention, those indeed align with the Unix philosophy, which you clearly misunderstand.
Convenience? `tar` integrates well with compression tools, but doesn't implement compression itself. This is the epitome of the Unix philosophy. You can just as well pipe its output to any compression tool of your choice, if you prefer not using its CLI.
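For example, with made-up archive and directory names:

    # tar writes a plain archive to stdout; compression is a separate tool in the pipe
    tar -cf - some-directory | gzip > some-directory.tar.gz

    # and the reverse: decompress to stdout, let tar consume the stream
    gunzip -c some-directory.tar.gz | tar -xf -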
> why find has a DSL?
Describing an advanced CLI as a DSL is a stretch. But to humour you: flexibility, and because files have many attributes which a good finding tool should expose to the user. Whether you like its CLI or not is a separate topic, but you're mistaking minimalism for simplicity as a requirement of the Unix philosophy. Some tasks are inherently complex, and forcing a tool to be "minimal" at the expense of flexibility would be counterproductive.
Besides, you're free to choose any other tool you like more to find files on your system. The fact GNU `find` is easily replaceable is precisely a sign that it follows the Unix philosophy well. I personally use `fd` and ripgrep more often than `find` these days.
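To illustrate the kind of attribute-based queries in question (paths and thresholds are made up, and the fd flags are from memory):

    # files owned by me, modified in the last 7 days, larger than 10 MB
    find ~/projects -type f -user "$USER" -mtime -7 -size +10M

    # a rough fd equivalent
    fd --type f --changed-within 7d --size +10m . ~/projects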
Re: `cat -v`, I hardly know the history behind it, but it doesn't really matter. As a sibling comment mentions, there are no hard rules around this topic, and people will disagree about what it really means, and how a program should be designed. If I had an opinion on the topic of `cat -v`, I would probably argue with Rob Pike about it as well. None of this means that these design principles are not worth upholding, or that we won't make mistakes along the way. But going back on topic, it's a problem when a project like systemd explicitly chooses not to follow these principles.
> Even if your btrfs, after almost 18 years, still eats data in spectacular fashion.
Is this (by now) an urban legend? Is btrfs any less reliable than, say, xfs/ext4 etc. nowadays?
If you're concerned about the write hole, use -m DUP/raid1/raid1c2 instead of -m raid5. Plus raid-stripe-tree†† is coming - didn't check the status of it recently.
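Concretely, creating such a layout looks roughly like this; device names are placeholders, and double-check the profiles supported by your kernel/btrfs-progs version:

    # parity RAID for data, mirrored metadata so the write hole can't take out the metadata
    mkfs.btrfs -d raid5 -m raid1c2 /dev/sdX /dev/sdY /dev/sdZ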
Many horror stories are because, while btrfs is fine, the operational model and tooling have some footguns which can cause either straight up data loss (due to operator error, but arguably that's really due to bad UX) or possible-but-hard-to-get-out-of situations.
I use btrfs because using zfs has been painful for me, for two reasons:
- btrfs can "shapeshift": I progressively moved _live_ from 2hdd raid1 to 5hdd raid5 data + raid1c2 meta with varying experiments in between. Probably five or six rebalance to change its shape over the years.
- the zfs module situation: when I tried it, the module silently failed to build properly and this resulted in a broken system until I fixed it; this happened twice over six months. Luckily I had anticipated this failure mode and only the data array (not the root fs) was zfs, so I could still boot and log into a full system to fix it.
Compared to zfs, btrfs is slow to scrub and rebalance though.
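For reference, that kind of live reshaping boils down to adding devices and then a balance with conversion filters, something like this (mount point and devices are placeholders):

    # grow the pool, then convert data/metadata profiles in place while mounted
    btrfs device add /dev/sdX /dev/sdY /mnt/pool
    btrfs balance start -dconvert=raid5 -mconvert=raid1c2 /mnt/pool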
† https://unixdigest.com/articles/battle-testing-zfs-btrfs-and...
†† https://lore.kernel.org/linux-btrfs/cover.1698679287.git.dst...
btrfs may be great now, and more power to people who use it and are happy. However, I am so used to the ergonomics of ZFS (and zed, and ZFS integrated encryption) that I don't see a reason to migrate back.
On the other hand, I've been running a btrfs RAID1 on two HGST datacenter drives for a few years and haven't had issues with that.
The RAID 5 and RAID 6 modes of Btrfs are fatally flawed, and should not be used for "anything but testing with throw-away data."
From the ArchWiki: https://wiki.archlinux.org/title/Btrfs#Multi-device_file_sys...

Actually I was more fond of Solaris, Irix, NeXTSTEP in how they approached the whole development experience.
Still got some nice memories of Aix and HP-UX as well, with Xenix and DG/UX as introductory experiences to the UNIX world.
Something like Android is closer to how Plan 9/Inferno got to be, than most GNU/Linux distros, regarding a managed userspace, and more interesting to see where it all goes.
Or modern approaches like Unikernels (even if POSIX based), managed runtimes on top of type 1 hypervisors, immutable container based OSes,....
Also, commercial UNIXes never followed the "Unix philosophy" that keeps being endlessly recited in Linux circles, ironically, given that GNU tools are hardly anything to go by, with the endless list of options they have available.
By default Windows shows ads in my start menu. It also shows me ads in my notifications. I guess I could understand if it was free, but it's not.
And then there's Copilot...
never in my life have i paid for a windows license.
[0] https://en.wikipedia.org/wiki/Bundling_of_Microsoft_Windows
- This is why I never feel guilty about "pirating" Windows. I've already paid for it!
the argument i always used is that Windows feels like spyware, and slowly seems to turn into it. showing me ads, selling my data. don't see why i should pay for this.
- Respect privacy
- Is integrating better AI: no invasive AI, yet available if wanted
- Usability and stability of UI and interfaces
MacOS and BSD [disclaimer: big fan of BSD] have somewhat stagnated. Depending on what you want to do, many open source projects are "linux first", which can be a problem (ask me how I know!)
So Linux has always been getting slowly better over the years (I first used it more than 30 years ago) and Windows has been getting a lot worse - so Linux easily wins.
Windows kernel is not _bad_, but it's developed by far fewer people.
Linux also has systemd with its unified system resource management. I can slice and dice my system as I want between containers. Oh, and containers are also awesome (Windows has them, macOS doesn't).
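As a sketch of what I mean by slicing and dicing (the unit names, job, and limits are made up):

    # run an ad-hoc command in its own cgroup with memory and CPU limits
    systemd-run --scope -p MemoryMax=1G -p CPUQuota=50% -- some-batch-job   # hypothetical job

    # or cap a whole slice that a group of services lives under
    systemctl set-property myapps.slice MemoryMax=4G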
Desktop environments are a matter of personal taste. I like my DE very minimal: status bar, quick launcher panel, and that's it.
What changed is that you usually do not run a snowflake anymore, carefully updated to the next version in situ, but rather some amount of compute and storage. Today everything is blue-green, and updates mean deploying the new and destroying the old behind a load balancer.
True, but server choice is typically made by professionals, while desktop choice typically isn't. So people measure those two by a (imo correct) double standard
Say what?!? I use btrfs for my backups...
> Say what?!?
I got burned too. I made a snapshot and "btrfs send" it by pipe to an xz archive. When I tried to restore, I got a CRC error (from "btrfs receive", not xz). Everything lost. There's no way to restore it now.
What a mess.
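For context, that backup scheme looks roughly like this (paths are made up), and the weak point is that the stream is only validated when it is replayed:

    # take a read-only snapshot and serialize it through xz into a single file
    btrfs subvolume snapshot -r /data /data/.snap
    btrfs send /data/.snap | xz > backup.btrfs.xz

    # restore later: corruption in the stream only surfaces at this point
    xz -dc backup.btrfs.xz | btrfs receive /restore/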
well, for the desktop possible choices from the `Cathedral` are:
- windows, and
- macos
of late, both seem to have gone in directions that are antithetical to what $random user wants, e.g. pushing ai-features and the tahoe ui snafu respectively, etc. etc.

in `Bazaar` mode, xfce has been an *excellent* choice for quite a while now, and should probably serve `Cathedral` refugees quite well.
all in all, not super convinced of the argument that you seem to be proffering here.
fwiw, both the gnu project and freebsd champion this (cathedral style of) development model.
however, i don't think linux or bsd is *purely* either approach.
w.r.t `user-facing software`, which seems to be the central thesis of gp, both the alternates (bsd/linux) offer almost an identical choice.
No truer words, and it is very hard to get people to understand that phrase. Like the author, I tried to get people on board with Linux in the 90's, but it was a very hard time. No one switched, and considering I worked in a large programming group, I was surprised how few people had even heard of it.
After IBM did its thing in 2000, a couple of years later these same people would ask me questions about it, but no one switched. My manager even had me do a demo on it at work.
But the direction of Linux is worrying me. I have a second, older laptop with a BSD on it, and that will give me an out if necessary. Sadly that may happen sooner than later :(
The experimental flavors are also insane in their creativity. Alpine linux is an entire OS in tens of megabytes. That's crazy!
My impression has been that Linux is simply better than the BSDs. Now, BSD users may disagree; my point is primarily that Linux is more flexible overall. Take LFS/BLFS - you basically have extensive documentation on how to adjust Linux. Where is that available for BSD on an equal basis? And that is just one example of many more.
I remember how, in the past, NetBSD folks on the mailing list acknowledged that Linux runs on more computers, including the very important toasters, than NetBSD. Momentum means a lot. The Top 500 supercomputers run Linux too: https://www.top500.org/statistics/details/osfam/1/
These may all be small reasons but they add up eventually.
> To give an example, I am not against systemd on principle
And it is possible to use Linux without systemd too.
Nobody wants a corporate-controlled project in Linux anyway. Where does Poettering work? ;)
> Therefore, in certain cases, the GPL becomes a double-edged sword: on one hand, it protects the software and ensures that contributions remain available. On the other, it risks creating a situation where the most "influential" player can totally direct development
But that is possible in the MIT/BSD world too. See Shopify controlling RubyCentral and thus the ruby ecosystem. Money makes the world go round. I don't think this complaint is really down to the GPL. The GPL is strict; it ensures that corporations need to open up their own modifications.
> And so yes, despite all this, I (still) love Linux.
I don't have any "love" for Linux as such. I simply think it is a good operating system. It is also a tinker-friendly operating system. I significantly prefer ruby as such; I also would not say I "love" ruby, but ruby is a very well-designed language (even ignoring the meta-influence by the shopify overlord). At the end of the day, though, these are just tools. They do things. They ideally help save time and cost. Having the same in the Windows world is not really possible, not even via WSL. WSL just makes windows suck less but windows still sucks immensely; I know because I also use windows almost daily. And I use ruby there too, which makes windows suck less, but it's not a great experience compared to linux.
> Because it has been my life companion for 30 years and has contributed significantly to putting food on the table and letting me sleep soundly. Because it allowed me to study without spending insane amounts on licenses or manuals. Because it taught me, first, to think outside the box. To be free.
> So thank you, GNU/Linux.
It's a strange summary to me. I also call Linux just Linux, without the GNU prefix. I understand RMS; I just don't think you need to fight in an ideological way. Let the facts and advantages speak for themselves - that suffices. And pick the right licence too. But ... "life companion"? What does that even mean? And what does "to be free" even mean? You still depend on code written by other people. So you depend on those people too. It's better than depending on Microsoft, but I don't fully understand that blog entry really.
LFS is a process where a person can build a distro themselves by building all the many little pieces from disparate sources into a working whole. In BSD land, this isn't a thing because the system is built as a single thing in the first place; instead of having a long list of packages, each with its own download/configure/build steps, you just have 1. clone the (single) source, 2. configure with the standard tools, 3. build it all at once. I suggest reading about netbsd's build.sh ( https://www.netbsd.org/docs/guide/en/chap-build.html ) as a good example of something that's better than GNU/Linux's offering.
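From what I remember of that guide, the whole process is roughly the following; the flags are from memory, so check them against the linked docs:

    # cross-build a complete amd64 release from the single source tree, as an unprivileged user
    cd /usr/src
    ./build.sh -U -m amd64 -O ../obj -T ../tools release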