Depends on the distro: Slackware is boring, others not so much. With the churn around systemd and Wayland, Linux is far from boring.
On NetBSD, things cannot get any more boring. Plus, I now believe Linux is owned by Fortune 500 corporations, and to me it is slowly heading down the same path as Microsoft Windows.
NetBSD has long support timelines for their releases. Configuration (admin) has not changed much at all. I think pkgsrc is great; I wish it could become a standard across various OSs :) All you need to do is find supported hardware. ThinkPads tend to be good; NetBSD on my T430 has no problems at all.
The only NetBSD complaint I have is that I cannot yet figure out how to get WireGuard working correctly as a client. But wg is still considered experimental, so I'm hoping the issues are resolved in 10.1. I don't understand the existing docs at all, but then I've always had a block in my head when it comes to network admin.
And hey, more power to you. But the thing is, the world has moved on. Linux is stable now. Linux containerization and VM tech is good now. The storage technologies are cross-platform now (I've had zero data loss since using zfs). The moat has shrunk.
So then, what are we left with?
* BSD is less common, so there are fewer CVEs on it. Hackers aren't stupid; they target popular platforms. You get plenty of CVEs for Apple systems (which are BSD, btw).
* Of course, since BSD is less common, it's harder to find employees who are wizards with it.
* Everyone releases for Linux first, so on some things you either have to wait, or do it yourself (or stick it in a Linux VM, at which point...)
* Licensing bites people all the time, but it's hardly a catastrophic thing; merely the normal politics that occur when things get big enough, and enough money is involved. You win by paying good money to hire the better legal team.
So what true benefit is there to using BSD over Linux? Other than satisfying your own self or going with what you're already familiar with, not much.
I prefer BSD because to me it's more logical, it rarely changes, the documentation is first-class, and it's very stable (yes, more so than Linux). Plus I like its networking stack and PF.
I prefer Linux when I need to run something popular and want a community.
I prefer Solaris clones for storage.
I prefer macOS for doing music, graphics and every day work and for laptops.
It’s no problem jumping between them. BSD is my favorite though.
This is exactly why I migrated my systems to Linux. I still prefer BSD honestly; having "grown up" with SunOS and then moving to FreeBSD very early, it's where I'm most comfortable. But it got to the point where it was noticeably harder to keep my software up to date and functioning on BSD, when BSD is often an afterthought for many projects now, or only supported by third-party work.
I may not like it (in fact, I don't) but Linux is definitely the easier path for most things these days.
Mac OS X was essentially a continuation of NeXTSTEP, which is BSD with a novel new kernel. In fact, if you look into the research history of the Mach kernel at the core of XNU, it was intended as a novel kernel _for_ BSD. NeXT went and hired one of the key people behind Mach (Avie Tevanian), and he became one of the core systems people who designed NeXTSTEP as a full OS around Mach.
Early in the proliferation of the Unix family, member systems went in one of two directions -- they based their OS on upstream AT&T Unix, or they based it on Berkeley's BSD and added their own features on top. NeXT was one of the latter. Famously, the original SunOS also was.
While Sun would eventually work closely with AT&T to unify their codebase with upstream, NeXT made no such change. NeXTSTEP stayed BSD-based.
The other extant BSDs like FreeBSD and NetBSD were also based directly on the original BSD code, through 386BSD.
If I have my history correct, Apple would later bring in code improvements from both NetBSD and FreeBSD, including some kernel code, and newer parts of the FreeBSD userland, to replace their older NeXT userland which was based on now-outdated 4.3BSD code. I think this is where the confusion comes in. People assume macOS is only "technically" a Unix by way of having borrowed some code from NetBSD and FreeBSD. They don't realize that it's fully and truly a BSD and a Unix by way of having been built from NeXT and tracing its lineage directly through the original Berkeley Software Distribution. The code they borrowed replaced older code that was also BSD-derived.
XNU is a combination of a FreeBSD kernel (networking, filesystem, etc.) and a Mach kernel (scheduling, IPC, virtual memory, etc.).
FreeBSD is like a great grandparent, related but still very different.
In fact, it's been partially done for FreeBSD, https://github.com/dspinellis/unix-history-repo
We could in principle do something similar for Darwin (if we had enough of the historical code), which is the core of MacOS, which is based on NeXT, which was based on BSD with a new kernel. That makes MacOS every bit as much a member of the Unix/BSD family as FreeBSD is.
Sure, you have ls and df, but they behave similarly only on the surface.
Also, the article mentions Kubernetes a few times, which quite fairly has a reputation for massive complexity, but is again entirely optional, and a piece of software entirely separate from the operating system.
I agree with the basic point of achieving reliability by using the simplest technology available, but the focus on the operating system is, for me, misguided here, and at best a temporary fix. If BSD were to catch on for that reason, Kubernetes would be ported to BSD, and the same problem would arise there.
I worked for SUSE from 2017 to 2021. Because of that, I ran openSUSE on my work computer. Btrfs self-destructed on me, on 3 different PCs, about twice a year in that 4-year period.
Not myth. Not from t'Internet. Direct personal experience.
Btrfs `df` lies. You, and programs, can't get an accurate estimate of free space. OS snapshots fill the volume, volume corrupts, new install time. Over and over again.
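To make "lies" concrete: df and most programs simply ask for the statvfs counters of the mount point, and on btrfs that single free-space figure can't express the data/metadata chunk split or space still held by snapshots. A minimal Python sketch of the query those tools make (the btrfs interpretation in the comment is my reading of the behaviour, not an API guarantee):

```python
import os

# What df and most programs ask the kernel: the statvfs counters for a mount.
st = os.statvfs("/")
free_gib = st.f_bavail * st.f_frsize / 2**30
total_gib = st.f_blocks * st.f_frsize / 2**30
print(f"statvfs reports {free_gib:.1f} GiB free of {total_gib:.1f} GiB")

# On btrfs this single free-space number can't reflect the split between data
# and metadata chunks, or space still referenced by snapshots, so writes can
# start failing with ENOSPC while the figure above still looks comfortable.
```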
I do not trust Btrfs and since the Btrfs zealots are in denial and will not confront the very real problems, I don't think it will ever get fixed.
It's not like you'd need those snapshots for all eternity. By housekeeping I meant deleting them from time to time with easily clickable tools, which exist now and DO give an overview. You may have to 'rebalance' afterwards, which can go wrong if the housekeeping came too late, or something. OTOH the 'rebalancing' can be automated from the beginning.
I'm sure similar hazards (regarding common tools like df/du not being able to give an exact overview of remaining capacity) exist under ZFS, at least when you're using compression.
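One concrete way that shows up (a small sketch; nothing ZFS-specific assumed here, just the two size counters every Unix stat exposes): with compression, the apparent file size and the allocated size drift apart, which is exactly the gap between ls -l-style and du-style accounting.

```python
import os
import sys

# Apparent size (what ls -l shows) vs allocated size (what du counts).
# Under ZFS with compression, st_blocks * 512 is often much smaller than
# st_size, so "how much space is left" depends on which number you trust.
path = sys.argv[1] if len(sys.argv) > 1 else __file__
st = os.stat(path)
print(f"apparent size: {st.st_size} bytes")
print(f"on-disk size : {st.st_blocks * 512} bytes")
```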
I will clean up my own mess. If I take snapshots, it's my job to clean them up.
If the OS does its own then the OS can do the work and clean up its own mess.
More to the point, if the OS's developers thought this was a good idea, then complete the work, finish the job, track the space usage and never ever do operations needing lots of space without checking that space is available or making it available.
This is bad design and bad implementation. It is not my job to fix their omissions.
But so far I'm really enjoying my new hot technotoy, in combination with some other 'crazy' tools, like zram, profile-sync-daemon for the browser, a really 'riced' kernal...err kernel with all sorts of powerful patches, and even most parts of the userland compiled with optimizations to the limits of my CPU, even the browser!
ISTR you mentioned the crappy default partitioning suggestions from another OS in another thread, which seem inflexible because of the potential waste of space for different directories like /usr/var/serv/somecrap/whatnotelse/GO/HOME!, which really can't be known in advance for casual desktop use, and I concur.
But with BTRFS-subvolumes that shit doesn't matter anymore! Whee! :)
I'll wait and see, and will abuse the really unexpectedly well working combination of components and their versions and settings to the max, not having experienced hitches, glitches, or even crashes so far.
But anything which could get lost is backed up incrementally to elsewhere anyways, just in case.
My take is just that, in the 21st century, I do not expect a Linux distro in normal routine use to crash and corrupt its disk. Not _ever._ That was acceptable in the '90s when it was new, but not now.
For the SUSE folks to complain that "U R doin it wrong" doesn't wash.
E.g. for an OS that takes a single-digit number of gigabytes of disk space, a 32GB disk partition should be plenty and it should never fill that up.
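As a back-of-the-envelope illustration of how snapshots eat that headroom (the numbers below are hypothetical, purely for illustration, not openSUSE's actual install size or snapshot deltas):

```python
# Hypothetical numbers purely for illustration.
root_partition_gib = 32
base_install_gib = 8           # "a single-digit number of gigabytes"
delta_per_snapshot_gib = 1.5   # churn retained by each pre/post-update snapshot

headroom_gib = root_partition_gib - base_install_gib
snapshots_until_full = headroom_gib / delta_per_snapshot_gib
print(f"~{snapshots_until_full:.0f} retained snapshots and the volume is full")
```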
I note that recent releases of openSUSE disable Snapper if given a root volume of <= 20GB. Maybe that was due to me and my bug reports. I don't know. It's a rotten answer, though: "OK, this dude's weird usage breaks our snapshot system, so what we'll do is turn it off."
The correct answer is to fix the snapshot system. A better one is to fix the filesystem.
I didn't say perfect; I just said that when asking which FS to use, everyone recommends ZFS over Btrfs. Even if it's not perfect, it seems to have left a better impression than Btrfs.
I've also had a weird situation after that where a micro SD formatted with btrfs on my desktop PC wouldn't mount on a raspberry pi, and vice-versa the same micro SD formatted on the pi wouldn't mount on the desktop. This was apparently caused by a difference in the used block sizes, which were mutually incompatible.
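If the mismatch was page-size related -- which the symptom suggests, though that's my assumption -- btrfs has historically created filesystems with a sectorsize equal to the page size of the machine running mkfs, and kernels without subpage support refuse to mount a filesystem whose sectorsize differs from their own page size. A quick way to compare the two machines:

```python
import os

# btrfs's default sectorsize at mkfs time has historically followed the kernel
# page size, and older kernels refuse to mount a filesystem whose sectorsize
# differs from their own page size. Run this on both machines and compare.
print("kernel page size:", os.sysconf("SC_PAGE_SIZE"), "bytes")
```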
So I'll quote myself on this.
But also, my server is running a btrfs RAID 1 due to the flexibility for resizing, and that has been just fine for a few years now. It's not black and white, and with backups I'm not really worried.
[0] https://www.techradar.com/news/best-alternative-operating-sy...
And Ubuntu really was the value-add, turnkey solution for someone who needed a desktop system as "daily driver" without endless tinkering and custom fixes.
But by 2018, Ubuntu was making enough proprietary additions that I didn't need, and I began to notice Debian's maturity and feature parity, while Debian still had its reputation for being extremely stable. So some new installs were Debian. And in 2018, when I purchased a Lenovo notebook, some models were Ubuntu-certified, but I chose a Red Hat-certified one, which ultimately ran Fedora quite smoothly.
(There were no BSD-certified notebooks.)
In case something goes wrong, you might have to restart anyway, so you'd better exercise that process to know it works. Also, I'd rather get really good at dealing with the first day of uptime of a system than discover for the first time ever what the n-th day brings, for ever larger values of n.
Not every system has to be connected to the internet....imagine that ;)
> They required a dhcp, an internal DNS, an Apache + PHP server for some internal (and a couple of external) websites, a file server accessible via both NFS and Samba (as Windows PCs needed access), an internal SMTP connecting to an external relay to ensure faster email dispatches for employees given their unstable connectivity, and a few other nuances.
It's running basically everything. But even if it was exposed only internally, that's usually only one other issue or misconfiguration away from being fully exposed again.
I believe they're saying it would be irresponsible - in the current climate - to leave a system without upgrades and attention for so long. We can agree on that.
That really only reflects the "current climate"; which is one of colossal dereliction and reckless engineering amounting to a total abandonment of cybersecurity. Cybersecurity is presently such a circus because of endemic poor software engineering, a worship of expedience, convenience, efficiency and plain old greed. These are all the reasons that good engineering and big business do not intersect in modern times.
A couple of weeks ago Jen Easterly called out current industry practices as enabling cybercrime and harming society [0].
So not all unattended code is equal. I think there is a great deal of pride to be taken in building and using stable and reliable systems. That shows up in smaller groups, non-profits, and volunteer networks that are strongly focused on a smaller set of goals and who eschew giddy neophyte values in favour of a more sedate and responsible stance. But those are rather different flavours of "pride". I would characterise what the parent said as more an "absence of shame" for leaving what are evidently cheap-ass vulnerable systems that are "pwned out of the box" in a hostile environment.
The fact that nobody realised it had been neglected, because it Just Kept Working for close to a decade, is still a testament to the software running on it.
> The largest failure was with btrfs — after a reboot, a 50 TB filesystem (in mirror, for backups) simply stopped working.
RAID is not a backup.
But restoring 50TB of data from actual backups takes a lot of time.
I like BTRFS to a fair degree, but the fact that _any_ two drives failing in its "raid 10" configuration causes data loss is not obvious or intuitive.
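To make that concrete: in a classic RAID10 with fixed mirror pairs, only a failure of both drives in the same pair is fatal, whereas btrfs raid10 mirrors each chunk across an arbitrary pair of drives, so with enough chunks practically any two-drive failure hits some chunk's only two copies. A rough sketch of the difference (assuming fixed pairs 0+1, 2+3, ... for the classic case; the "effectively 100%" for btrfs is the point being illustrated, not a computed value):

```python
from itertools import combinations

def classic_raid10_fatal_fraction(n_drives: int) -> float:
    """Fraction of two-drive failures that lose data when mirrors are fixed
    pairs (drives 0+1, 2+3, ...), as in a conventional RAID10."""
    pairs = list(combinations(range(n_drives), 2))
    fatal = sum(1 for a, b in pairs if a // 2 == b // 2)  # both in the same mirror pair
    return fatal / len(pairs)

for n in (4, 8, 12):
    print(f"{n} drives: classic raid10 fatal in "
          f"{classic_raid10_fatal_fraction(n):.0%} of two-drive failures; "
          f"btrfs raid10 (chunk mirrors on arbitrary pairs): effectively 100%")
```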
I remember someone mentioning they did this with Linux back in 1994 or 1995. Not for a decade obviously but it had been running for at least a year with no reboots or needing maintenance
I inherited administration over a number of one-off critical linux systems back in ~2010 that had 6+ years of uptime. Spent a long time analysing their contents, and then building replacement stacks with redundancy alongside them, and carefully cutting traffic across...
At the end of the day, you need to be able to reboot/upgrade servers regularly (even if you don't in practice do it very often).
This time, however, the building had a fire go through the main office that took out a quarter of the facility. I finally convinced them to at least get the data onto one drive for a start.
First fire they'd had in 40 years of operating. I still can't convince them to switch to a cloud-based payroll/bookkeeping system, but hey, at least it's now backed up across two terminals, one of which is kept offsite, plus a third location on OneDrive in the cloud. It's been a fun few days, to say the least.
Astounded the drives survived whilst their chassis melted around them (Samsung SSDs and ASUS PN51-E1 mini PCs). They got insanely lucky.
See also https://skeptics.stackexchange.com/questions/32502/did-a-com... .
The Reg also has articles like https://www.theregister.com/2016/01/20/486_fleet_still_in_pr... 'Eighteen year old server trumped by functional 486 fleet!'
It's entirely possible that BSD is more stable and lends itself better to running for really long times uninterrupted. But Linux systems running for years really aren't unheard of, and by no means is one year the top of the range.
Whether you should do that is another question. Over the course of a decade, there could well be even kernel-level vulnerabilities discovered, let alone ones in other services running on top of it. You might have a system running without a reboot for years as long as you make sure to update (and restart) user space services as needed. But leaving an entire server unattended for years doesn't sound like a good idea generally.
That may not be as much of a concern if what the box is running is a limited set of services or functionality with little exposed surface. But that then comes more down to "what you're doing with it" rather than "which OS you're using".
The generally less conservative development culture around Linux leans more towards moving fast and breaking things, although generally while trying to avoid the latter. Perhaps that makes things like low-level OS vulnerabilities or whether the system still restarts cleanly after a decade more important in the Linux land, and what counts as prudent administration in Linux might be less of a concern in BSD.
But if you can have a BSD box running for a decade, with some particular set of services, in an internal network(?), and then compare that to someone else's report of a Linux box running for (at least) a year, in 1994 or 1995, probably running an entirely different workload, in a different environment (perhaps externally exposed?), and with no indication of why it may or may not have been restarted after that time, that's not really a fair comparison either.
My thought was more that maybe a lot of the issues people are having with Linux to push them to BSD is what we, developers collectively, have done to Linux over the last 2-3 decades.
Ah, yes, good point!
> My thought was more that maybe a lot of the issues people are having with Linux to push them to BSD is what we, developers collectively, have done to Linux over the last 2-3 decades.
Indeed so. Especially in the last decade or so: snap, Flatpak, Wayland, systemd, etc.
It's being made gratuitously more and more complex to meet the demands of the main commercial users -- running cloud servers -- even if this makes life much more complicated for individual users.
Which is why I advocate the BSDs as an alternative, but man, they are all forbidding and off-putting to beginners/novices, and some of them don't even realise why and how.
But come on…
All that's been said about security updates etc. aside (some of which can be mitigated with that fancy in-place kernel update stuff), if something hasn't rebooted in 10 years I'm going to be a bit nervous about what happens when it does reboot. If it's in an uptime-fetishist environment, chances are that it'll be rebooted at a time that's…inconvenient, to say the least. Are you SURE that nothing has changed in that time? Some people are! More so than others, at least. But that's extra work, and my bet is most places with these high-uptime machines aren't putting that work in, or think they are and are doing it poorly.
Mine is a Debian box. Upgraded from Debian 10 to 11 to 12. Running all kinds of things. I only use Debian packages alongside one or two Docker containers.
Linux can do boring just fine.
Maybe I'm biased because I had a worse experience getting FreeBSD and OpenBSD working on a desktop than with Gentoo, but I think modern Linux distros are just as customizable and just as stable, with a lot more community and professionals working on them than any BSD, and with a business in mind, that last detail is really important.
My server never crashes; my Linux desktop does.
I assume they don't put BSD on their clients' laptops.
You mean use a mainframe remotely ?
Or use MVS TK5 ;)
https://www.prince-webdesign.nl/index.php/software/mvs-3-8j-...
I've never used Solaris or illumos before, but I'm looking for something bullet-proof, idiot-proof and maintenance-free. The mainstream solution would probably be Proxmox and while I know how to administrate a Debian system, I don't trust myself as a sysadmin while acting in a personal capacity. Proxmox intrinsically just doesn't bring the peace of mind that I won't accidentally blow it up while away from my apartment.
Just because a solution isn't mainstream doesn't mean it's not worth taking a look at. Even if you don't end up selecting them, it brings a healthy perspective that you wouldn't have otherwise.
OmniOS I basically run as an appliance, and it has been trouble-free since the start.
illumos Distribution comparison:
This veers close to treating your servers as pets instead of cattle. Which is fine if you're small (99% of services are), but not great if you have thousands of servers and scale up and down routinely.
That said, I don’t feel like that quote actually represents BSD vs Linux at all. You can have easy deploys and long term maintenance on either.
It’s kind of a ridiculous thing to claim without very substantial proof.
I think it’s made up to justify installing BSD.
> Clients are often influenced by hype. A few years ago, it was "Linux is a toy." Now, it's "Why bhyve and not Proxmox?" They ask, "How can they sell FreeBSD? There's no AI, there's no Cloud, there's no Kubernetes, there's no blockchain – there's nothing!"
I am very confident that this is more 'fan fiction' than the author would like to admit. The sort of hypothetical that someone cooks up in their head to anger up the blood and to then self-soothe by thinking about how much better than everyone else they are.
Why does everything have to be so bloody religious? If you like boring tech, stop politicising it.
I made a semi-successful blockchain-based product 5 years ago. At two separate employers, I was urged to build a blockchain-based solution to... well, actually, the problem was not described in either case, only the use of blockchain.
I'll be over here running a mixture of Debian Stable and FreeBSD and wondering why everybody insists on getting so angry about these things.
Admittedly, Linux really isn't boring enough for me. By boring I mean I don't really want to notice that the OS exists; I want a distro that has most answers to any noob question on Google, all drivers for all hardware out of the box, no fancy package managers like snap or cool future technology like ZFS, no way to shoot myself in the foot however hard I try... But in fact almost all of the cool future technology comes to Linux from BSD, yet BSD has fewer drivers, fewer packages, and googling yields fewer results. Everything else is pretty much the same everywhere and always comes with caveats.