87 points by dazzawazza a year ago | 17 comments
  • HL33tibCe7 a year ago
    Switching your customers from Linux to BSD sounds like the opposite of the "boring" choice to me. The boring choice would be to leave them on their functioning Linux boxes. But I guess that wouldn't make an interesting conference talk.
    • mardifoufs a year ago
      The term "boring" just means "stuff I'm familiar with" 99% of the time in online technical conversations. That's how stuff like PHP gets bundled with "boring tech" but node or similar are somehow seen as a complicated mess by some. It's just familiarity disguised as sound technical analysis lol.
    • jmclnx a year ago
      >The boring choice would be to leave them on their functioning Linux boxes

      Depends upon the distro: Slackware is boring, others not so much. With the churn around systemd and Wayland, things on Linux are far from boring.

      On NetBSD, things cannot get any more boring. Plus, I now believe Linux is owned by Fortune 500 corporations, and to me it is slowly heading down the same path as Microsoft Windows.

      NetBSD has long support timelines for its releases. Configuration (admin) has not changed much at all. I think pkgsrc is great; I wish it could become a standard across various OSs :) All you need to do is find supported hardware. ThinkPads tend to be good; NetBSD on my T430 has no problems at all.

      The only NetBSD complaint I have is that I cannot yet figure out how to get WireGuard working correctly as a client. But wg is still considered experimental, so I'm hoping the issues are resolved in 10.1. The existing docs I do not understand at all, but I have always had a block in my head when it comes to network admin.
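      For what it's worth, a minimal client sketch using NetBSD's in-kernel wg(4) looks roughly like this. This is a configuration sketch, not a verified recipe: the wgconfig(8) invocations, key path, peer name, endpoint, and addresses are all assumptions to check against the current man pages.

```shell
# Hypothetical wg(4) client setup on NetBSD 10 -- placeholder keys/addresses.
ifconfig wg0 create
wg-keygen > /etc/wg/wg0.key && chmod 600 /etc/wg/wg0.key
wgconfig wg0 set private-key /etc/wg/wg0.key
# "server" is an arbitrary peer name; the public key and endpoint are placeholders.
wgconfig wg0 add peer server '<server-public-key>' \
    --endpoint=vpn.example.org:51820 --allowed-ips=0.0.0.0/0
ifconfig wg0 inet 10.8.0.2/24 up
```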

    • appendix-rock a year ago
      “Boring” in this instance obviously means some weird tech-culture virtue signal. The situations in which Linux isn’t at least ‘just fine’, and a BSD is better enough to justify SWITCHING, are rare enough that, well, I’m skeptical of the author’s reasoning.
  • kstenerud a year ago
    I've also read the original blog post that this article spawns from. The gist of it is: "I switched to BSD during Linux's bad period, and now I'm comfortable with it, so I'll stick with it".

    And hey, more power to you. But the thing is, the world has moved on. Linux is stable now. Linux containerization and VM tech is good now. The storage technologies are cross-platform now (I've had zero data loss since using zfs). The moat has shrunk.

    So then, what are we left with?

    * BSD is less common, so there are fewer CVEs on it. Hackers aren't stupid; they target popular platforms. You get plenty of CVEs for Apple systems (which are BSD, btw).

    * Of course, since BSD is less common, it's harder to find employees who are wizards with it.

    * Everyone releases for Linux first, so on some things you either have to wait, or do it yourself (or stick it in a Linux VM, at which point...)

    * Licensing bites people all the time, but it's hardly a catastrophic thing; merely the normal politics that occur when things get big enough, and enough money is involved. You win by paying good money to hire the better legal team.

    So what true benefit is there to using BSD over Linux? Other than satisfying your own self or going with what you're already familiar with, not much.

    • timc3 a year ago
      I have run Linux, BSD and other unix for a long time (since the 90s and the very first Slackware) and continue to this day.

      I prefer BSD because to me it's more logical, it rarely changes, the documentation is first class, and it's very stable (yes, more so than Linux). Plus I like its networking stack and PF.

      I prefer Linux when I need to run something popular and want a community.

      I prefer Solaris clones for storage.

      I prefer macOS for doing music, graphics and every day work and for laptops.

      It’s no problem jumping between them. BSD is my favorite though.

    • technothrasher a year ago
      > the world has moved on

      This is exactly why I migrated my systems to Linux. I still prefer BSD honestly; having "grown up" with SunOS and then moved to FreeBSD very early, it's where I'm most comfortable. But it got so that it was noticeably harder to keep my software up to date and functioning on BSD, when it is often an afterthought for many projects now, or only supported by third-party work.

      I may not like it (in fact, I don't) but Linux is definitely the easier path for most things these days.

    • parkcedar a year ago
      Mac isn’t really BSD, it’s a common misconception. It shares some of the userland code, but it’s a vastly different kernel (derived from the Mach microkernel). The userland has diverged quite significantly now too. Though, I guess it probably is closer to BSD than Linux
      • aeadio a year ago
        Not a misconception at all.

        Mac OS X was essentially a continuation of NeXTSTEP, which is BSD with a novel kernel. In fact, if you look into the research history of the Mach kernel at the core of XNU, it was intended as a novel kernel _for_ BSD. NeXT went and hired one of the key people behind Mach (Avie Tevanian), and he became one of the core systems guys who designed NeXTSTEP as a full OS around Mach.

        Early in the proliferation of the Unix family, member systems went in one of two directions -- they based their OS on upstream AT&T Unix, or they based it on Berkeley's BSD and added their own features on top. NeXT was one of the latter. Famously, the original SunOS also was.

        While Sun would eventually work closely with AT&T to unify their codebase with upstream, NeXT made no such change. NeXTSTEP stayed BSD-based.

        The other extant BSDs like FreeBSD and NetBSD were also based directly on the original BSD code, through 386BSD.

        If I have my history correct, Apple would later bring in code improvements from both NetBSD and FreeBSD, including some kernel code, and newer parts of the FreeBSD userland, to replace their older NeXT userland which was based on now-outdated 4.3BSD code. I think this is where the confusion comes in. People assume MacOS is only "technically" a Unix by way of having borrowed some code from NetBSD and FreeBSD. They don't realize that it's fully and truly a BSD and Unix by way of having been built from NeXT and tracing its lineage directly through the original Berkeley Software Distribution. That code they borrowed was replacing older code, also BSD-derived.

      • BSDobelix a year ago
        >but it’s a vastly different kernel (derived from the Mach microkernel).

        XNU is a combination of a FreeBSD kernel (networking, filesystem, etc.) and a Mach kernel (scheduling, IPC, virtual memory, etc.):

        https://en.wikipedia.org/wiki/XNU

        https://www.youtube.com/watch?v=-7GMHB3Plc8

        • parkcedar a year ago
          Yes, but since it was initially created I believe a lot of it has been rewritten. Eg, the filesystem. I suspect the memory system is different these days too, since macOS handles compressed memory quite differently (though not sure how that gets implemented under the hood).

          FreeBSD is like a great grandparent, related but still very different.

          • aeadio a year ago
            Every extant Unix has been rewritten since the original AT&T code, Ship of Theseus style. We still consider them members of the Unix family because they can trace their lineage directly. One could build a Git repo showing every code change from the original Unix release through the modern-day BSDs, if only we had granular commit info going back that far.

            In fact, it's been partially done for FreeBSD, https://github.com/dspinellis/unix-history-repo

            We could in principle do something similar for Darwin (if we had enough of the historical code), which is the core of MacOS, which is based on NeXT, which was based on BSD with a new kernel. That makes MacOS every bit as much a member of the Unix/BSD family as FreeBSD is.

      • sph a year ago
        No one really cares about the kernel internals, their origin, or their license. What is prominent is the user-space components (i.e. the shell and other binaries). macOS is closer to the other BSDs than Linux is, as Linux usually ships with the GNU libs and utilities.
        • parkcedar a year ago
          You hardly notice the binaries though, especially since gnu cmd line utils are so similar anyway. You’d expect the c libs to be similar, but they’re not really (about as different as Mac to Linux anyway).
          • sph a year ago
            GNU utils are not similar to BSD utils. My hardest time adjusting to macOS was learning that some command line options I was so used to didn't exist or didn't work the same way.

            Sure, you have ls and df, but they behave similarly only on the surface.
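            A concrete example of that surface-level similarity is in-place editing with sed, a well-known divergence between the GNU and BSD tools. A minimal sketch, shown with the GNU syntax (the BSD form is in the comment; the file path is just an illustration):

```shell
# GNU sed accepts -i with no argument; BSD/macOS sed requires a suffix
# argument after -i (an empty string for "no backup file").
printf 'hello\n' > /tmp/sed_demo.txt
sed -i 's/hello/world/' /tmp/sed_demo.txt        # GNU form
# sed -i '' 's/hello/world/' /tmp/sed_demo.txt   # BSD/macOS form; errors on GNU sed
cat /tmp/sed_demo.txt
```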

  • jl6 a year ago
    I feel that “it just works and you don’t need to maintain it” is less an OS feature, and more about what you are doing with it.
    • steeleduncan a year ago
      Yes, the most concrete issue the article mentions is losing data on Btrfs, but that is well known to be a flaky, semi-experimental filesystem; the operating system has very little to do with it. The equivalent would be running Debian on ext4.

      Also, the article mentions Kubernetes a few times, which quite fairly has a reputation for massive complexity, but is again entirely optional, and a piece of software entirely separate from the operating system.

      I agree with the basic point of achieving reliability by using the simplest technology available, but the focus on the operating system for me is misguided here, and at best a temporary fix. If BSD were to catch on for that reason, Kubernetes would be ported to BSD, and the same problem would arise there

      • nxicvyvy a year ago
        It's also a complaint that's 10 years out of date. Btrfs has been stable for fucking ages. That meme needs to die.
        • Ygg2 a year ago
          Whenever the topic of filesystems arises, I always hear anecdata that btrfs/ext4 lost data and ZFS was smooth sailing.
          • nxicvyvy a year ago
            Yes and not a single one of them can quote someone from this decade because it's a shitty internet myth that won't die.
            • lproven a year ago
              Hi. Article author here.

              I worked for SUSE from 2017 to 2021. Because of that, I ran openSUSE on my work computer. Btrfs self-destructed on me, on 3 different PCs, about twice a year in that 4-year period.

              Not myth. Not from t'Internet. Direct personal experience.

              Btrfs `df` lies. You, and programs, can't get an accurate estimate of free space. OS snapshots fill the volume, the volume corrupts, and it's new-install time. Over and over again.

              I do not trust Btrfs and since the Btrfs zealots are in denial and will not confront the very real problems, I don't think it will ever get fixed.
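              For anyone wanting to compare the numbers themselves, these are the relevant commands (they need a mounted btrfs volume, and root for full output, so they're shown here only as a fragment):

```shell
df -h /                   # the generic free-space estimate, the one that "lies"
btrfs filesystem df /     # per-profile allocation: data / metadata / system
btrfs filesystem usage /  # btrfs's own view of allocated vs. unallocated space
```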

              • LargoLasskhyfv a year ago
                Yah well. Do some housekeeping then? I mean if the Distro delivers automagically snapshotting in intervals, during installation of packages, or whatever they fancy?

                It's not like you'd need those for all eternity. By housekeeping I mean deleting them from time to time, with easily clickable tools, which exist now and DO give an overview. Maybe you have to 'rebalance' afterwards, which can go wrong if the housekeeping came too late, or something. OTOH the rebalancing can be automated from the beginning.

                I'm sure similar hazards (regarding common tools like df/du not being able to give an exact overview of remaining capacity) exist under ZFS, at least when you're using compression.

                • lproven a year ago
                  No. That is the simple answer. I refuse.

                  I will clean up my own mess. If I take snapshots, it's my job to clean them up.

                  If the OS does its own then the OS can do the work and clean up its own mess.

                  More to the point, if the OS's developers thought this was a good idea, then complete the work, finish the job, track the space usage and never ever do operations needing lots of space without checking that space is available or making it available.

                  This is bad design and bad implementation. It is not my job to fix their omissions.

                  • LargoLasskhyfv a year ago
                    Yah. Maybe (or even probably, given the history and all the (not even that anecdotal) evidence of all the gotchas and oopsies that have happened so far) I'll eat chalk.

                    But so far I'm really enjoying my new hot technotoy, in combination with some other 'crazy' tools, like zram, profile-sync-daemon for the browser, a really 'riced' kernal...err, kernel with all sorts of powerful patches, and even most parts of the userland compiled with optimizations to the limits of my CPU, even the browser!

                    ISTR you mentioned the crappy default partitioning suggestions from another OS in another thread, which seem inflexible because of the potential waste of space for different directories like /usr/var/serv/somecrap/whatnotelse/GO/HOME!, which really can't be known in advance for casual desktop use, and I concur.

                    But with BTRFS-subvolumes that shit doesn't matter anymore! Whee! :)

                    I'll wait and see, and will abuse the really unexpectedly well working combination of components and their versions and settings to the max, not having experienced hitches, glitches, or even crashes so far.

                    But anything which could get lost is backed up incrementally to elsewhere anyways, just in case.

                    • lproven a year ago
                      OK, that's perfectly fair. Enjoy! Seriously!

                      My take is just that, in the 21st century, I do not expect a Linux distro in normal routine use to crash and corrupt its disk. Not _ever._ That was acceptable in the '90s when it was new, but not now.

                      For the SUSE folks to complain that "U R doin it wrong" doesn't wash.

                      E.g. for an OS that takes a single-digit number of gigabytes of disk space, a 32GB disk partition should be plenty and it should never fill that up.

                      I note that recent releases of openSUSE disable Snapper if given a root volume of <= 20GB. Maybe that was due to me and my bug reports. I don't know. It's a rotten answer, though: "OK, this dude's weird usage breaks our snapshot system, so what we'll do is turn it off."

                      The correct answer is to fix the snapshot system. A better one is to fix the filesystem.

            • viraptor a year ago
              Also, everyone ignores the publicly visible ZFS repo issues: corruption https://github.com/openzfs/zfs/issues/16631 , crash/corruption https://github.com/openzfs/zfs/issues/16626 , crash https://github.com/openzfs/zfs/issues/16623 -- just from this week. One of those filesystems is likely more stable than the other, but the image of a perfect ZFS is tiring.
              • Ygg2 a year ago
                > One of those filesystems is likely more stable than the other, but the image of perfect zfs is tiring.

                I didn't say perfect. I just said that, when asking which FS to use, everyone recommends ZFS over Btrfs. Even if not perfect, it seems to have left a better impression than Btrfs.

            • matrss a year ago
              I've had btrfs lose my laptop's root filesystem; it just wouldn't mount anymore, for no apparent reason. This was ~6 or 7 years ago. Reading the fs with some rescue command worked fine, and the SSD continued to work for a few more years after reformatting.

              I've also had a weird situation after that where a micro SD formatted with btrfs on my desktop PC wouldn't mount on a raspberry pi, and vice-versa the same micro SD formatted on the pi wouldn't mount on the desktop. This was apparently caused by a difference in the used block sizes, which were mutually incompatible.

              So I'll quote myself on this.

              But also, my server is running a btrfs raid 1 due to the flexibility for resizing and that has been just fine for a few years now. It's not black and white and with backups I am not really worried.

            • BSDobelix a year ago
              Fill your btrfs with `dd if=/dev/urandom of=./testfile` as a normal user, then `rm ./testfile && sync`, then reboot. Six months ago I could brick btrfs with that "trick".
            • bauruine a year ago
              I have a broken (parent transid verify failed on logical) btrfs RAID5 here that I can't mount anymore, even with the recovery commands, and Google shows many results about it from less than a decade ago.
    • jimnotgym a year ago
      I agree. I have had several services run for a decade on a company LAN using Linux. They did boring things well. They will still work in 20 years if the hardware is still there.
      • nonrandomstring a year ago
        To be fair, I've had Debian boxes with uptimes of many years and service lives of a decade. A sarge install doing nothing but DNS (on metal, not a VM) from 2006 lasted until 2015. So it isn't just a Linux-vs-BSD thing; it's a mindset of simplicity and focus of function. But since about 2020 my love for Debian waned, what with systemd and ugly internal politics. Nowadays I'm inclined to look at whether a BSD or another operating system [0] might be a good fit for someone's needs. Basically, I find the most mature mindset exists wherever it's furthest away from the GAFAM/Bigtech values, which corrupt and over-complicate everything and try to suck you into an ongoing, unstable dependency relationship. Using "unfashionable" technology is pretty much always a win in my experience.

        [0] https://www.techradar.com/news/best-alternative-operating-sy...

        • AStonesThrow a year ago
          It's interesting, because around 2004-06 was when I struggled mightily with hardware issues, especially audio, and Chris Siebenmann was the one who urged me to "install Ubuntu and be done with it."

          And Ubuntu really was the value-add, turnkey solution for someone who needed a desktop system as "daily driver" without endless tinkering and custom fixes.

          But by 2018, Ubuntu was making enough proprietary additions that I didn't need, and I began to notice Debian's maturity and feature parity. Debian also still had a reputation for being extremely stable. So some new installs were Debian; and in 2018, when I purchased a Lenovo notebook, some models were Ubuntu-certified, but I chose a Red Hat-certified one, which ultimately ran Fedora quite smoothly.

          (There were no BSD-certified notebooks.)

  • squarefoot a year ago
    A server whose uptime reaches X years is a server that wasn't updated in X years. I'm proud of the ~15 months of uptime my old XigmaNAS (FreeBSD) box racked up years ago, interrupted only by a blackout (yep, I had no UPS), and I write this as a mostly-Linux user. But I'm not sure it's a good thing to aim for in production, or on anything connected to the Internet.
    • eru a year ago
      There's another reason to restart all your stuff (computers, server processes, etc) every so often:

      In case something goes wrong, you might have to restart anyway. So you had better exercise that process to know it works. Also, I'd rather get really good at dealing with the first day of uptime of a system than discover, for the first time ever, what the n-th day brings for ever larger values of n.

    • kleiba a year ago
      Why do updates necessarily require reboots though?
      • jiggunjer a year ago
        I suppose it's about kernel and driver packages mostly.
        • viraptor a year ago
          If you pay enough, you can also get live patches which don't require a restart. This applies to some other software too -- I've done some upstart live-patching in the past to save a cluster. But yeah, if you can afford to just restart part of your service like a normal person, your restarts will come from kernel updates.
    • HL33tibCe7 a year ago
      Yeah, the article links another post which talks about a machine with a 9 year uptime as if it's a good thing. 9 years without a NetBSD release update (or kernel upgrade) is not something to be proud of.
      • BSDobelix a year ago
        >9 years without a NetBSD release update (or kernel upgrade) is not something to be proud of.

        Not every system has to be connected to the internet... imagine that ;)

        • viraptor a year ago
          In the post describing that server:

          > They required a dhcp, an internal DNS, an Apache + PHP server for some internal (and a couple of external) websites, a file server accessible via both NFS and Samba (as Windows PCs needed access), an internal SMTP connecting to an external relay to ensure faster email dispatches for employees given their unstable connectivity, and a few other nuances.

          It's running basically everything. But even if it was exposed only internally, that's usually only one other issue or misconfiguration away from being fully exposed again.

          • BSDobelix a year ago
            I'm not talking about this case, but about the automatic assumption that every system is connected to a common network.
        • nonrandomstring a year ago
          Whether something is connected to the internet or not wouldn't be my objection to the parent's personal notion of "pride".

          I believe they're saying it would be irresponsible - in the current climate - to leave a system without upgrades and attention for so long. We can agree on that.

          That really only reflects the "current climate"; which is one of colossal dereliction and reckless engineering amounting to a total abandonment of cybersecurity. Cybersecurity is presently such a circus because of endemic poor software engineering, a worship of expedience, convenience, efficiency and plain old greed. These are all the reasons that good engineering and big business do not intersect in modern times.

          A couple of weeks ago Jen Easterly called out current industry practices as enabling cybercrime and harming society [0].

          So not all unattended code is equal. I think there is a great deal of pride to be taken in building and using stable and reliable systems. That shows up in smaller groups, non-profits, and volunteer networks that are strongly focused on a smaller set of goals and who eschew giddy neophyte values in favour of a more sedate and responsible stance. But those are rather different flavours of "pride". I would characterise what the parent said as more an "absence of shame" for leaving what are evidently cheap-ass vulnerable systems that are "pwned out of the box" in a hostile environment.

          [0] https://www.cisa.gov/about/leadership/jen-easterly

      • mst a year ago
        The fact that the system accidentally got neglected isn't, no.

        The fact that, despite being neglected, nobody realised because it Just Kept Working for close to a decade is still a testament to the software running on it.

  • 28304283409234 a year ago
    From the blog post linked in the article:

    > The largest failure was with btrfs — after a reboot, a 50 TB filesystem (in mirror, for backups) simply stopped working.

    RAID is not a backup.

    • timc3 a year ago
      It sounded to me more like the system was being used for backups, not that they expected RAID to be a backup.
    • georgyo a year ago
      Agreed, raid is not backup.

      But restoring 50TB of data from actual backups takes a lot of time.

      I like BTRFS to a fair degree, but the fact that _any_ two drives failing in its "raid 10" configuration causes data loss is not obvious or intuitive.
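
      The underlying reason is that btrfs "raid10" mirrors at the chunk level rather than binding drives into fixed mirror pairs, so with enough chunks, every pair of drives ends up holding both copies of some chunk. A toy allocator (hypothetical, not btrfs's real placement logic) makes the effect visible:

```python
import itertools
import random

def allocate_chunks(n_drives, n_chunks, seed=0):
    """Toy chunk-level mirroring: each chunk's two copies land on a
    random pair of drives (roughly how chunk allocation spreads copies)."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(n_drives), 2)) for _ in range(n_chunks)]

def fatal_pairs(placement, n_drives):
    """Drive pairs whose simultaneous failure destroys both copies of a chunk."""
    return {
        pair
        for pair in itertools.combinations(range(n_drives), 2)
        if any(set(copies) == set(pair) for copies in placement)
    }

# Chunk-level mirroring: with many chunks, essentially every drive pair is fatal.
chunked = fatal_pairs(allocate_chunks(4, 1000), 4)

# Classic RAID10 with fixed mirror pairs (0,1) and (2,3): only those 2 pairs are fatal.
fixed = fatal_pairs([(0, 1), (2, 3)] * 500, 4)

print(len(chunked), len(fixed))  # chunk-level: all 6 of 6 pairs; fixed: 2 of 6
```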

      • 28304283409234 a year ago
        If one has 50TB(!!) of mission-critical data, one should not store it on one machine running btrfs. That is just silly. No matter how many mirrors you throw at it.
  • hk1337 a year ago
    > One of Stefano Marinelli's NetBSD boxes sat quietly serving for a decade, because everyone forgot about it. This is how Unix is meant to be.

    I remember someone mentioning they did this with Linux back in 1994 or 1995. Not for a decade, obviously, but it had been running for at least a year with no reboots or maintenance.

    • swiftcoder a year ago
      The problem with a box that has been running the same critical service for years on end, is that nobody knows if it would actually boot back up successfully in the event of a reboot/crash/power cut/etc...

      I inherited administration over a number of one-off critical linux systems back in ~2010 that had 6+ years of uptime. Spent a long time analysing their contents, and then building replacement stacks with redundancy alongside them, and carefully cutting traffic across...

      At the end of the day, you need to be able to reboot/upgrade servers regularly (even if you don't in practice do it very often).

      • gtvwill a year ago
        I just spent the last 4 days recovering payroll systems and data from machines that have no backup. Turns out they were storing all the payroll data on an external HDD, in a folder called "dead hdd recovery" left over from the last time I recovered a dead system for them, about 4 years ago. They weren't meant to keep using that external HDD or folder, but they did. They also weren't interested in other forms of redundancy the last time the payroll machine died.

        This time, however, the building had a fire go through the main office that took out a quarter of the facility. I finally convinced them to at least get the data onto one drive for a start.

        First fire they'd had in 40 years of operating. I still can't convince them to switch to a cloud-based payroll/bookkeeping system, but hey, at least it's now backed up across two terminals, one kept offsite, plus a third copy on OneDrive in the cloud. It's been a fun few days to say the least.

        Astounded the drives survived whilst their chassis melted around them (Samsung SSDs and ASUS PN51-E1 mini PCs). They got insanely lucky.

      • eru a year ago
        Yes, you should semi-regularly restart everything (both OS and server processes), so you know you can bring them back up.
        • kemotep a year ago
          Even better: restore from the latest backups, run the two in parallel as you hand off the services, and then shut down the old one. You confirm that the backups work, and that deploying the service is still well documented and the automations work.
          • eru a year ago
            Yes, the only backups you can count on working are those you practice restoring from.
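
            At its smallest, such a restore drill can look like this sketch (hypothetical paths, with tar/diff as stand-ins for whatever backup tool is actually in use):

```shell
# Minimal restore drill: back up, restore to a scratch dir, verify byte-for-byte.
set -e
src=$(mktemp -d); scratch=$(mktemp -d)
echo "payroll data" > "$src/important.txt"

tar -C "$src" -czf "$scratch/backup.tar.gz" .            # take the backup
mkdir "$scratch/restore"
tar -C "$scratch/restore" -xzf "$scratch/backup.tar.gz"  # practice the restore
diff -r "$src" "$scratch/restore"                        # nonzero exit on mismatch
echo "restore verified"
```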
    • eesmith a year ago
      The classic urban legend is of a Novell server rediscovered four years after being walled in, https://www.theregister.com/2001/04/12/missing_novell_server... .

      See also https://skeptics.stackexchange.com/questions/32502/did-a-com... .

      The Reg also has articles like https://www.theregister.com/2016/01/20/486_fleet_still_in_pr... 'Eighteen year old server trumped by functional 486 fleet!'

    • lproven a year ago
      There's an order of magnitude difference between running for a year and a decade... That's a lot.
      • Delk a year ago
        A couple of years ago at work, we came across a Linux (virtual) server related to a customer project that had been running for some 2.5 or 3 years since the last reboot. Pretty much by accident. I think we rebooted it for good measure (applying security updates, making sure it still booted cleanly, etc.) but there was no immediate need for doing that just to keep it functional.

        It's entirely possible that BSD is more stable and lends itself better to running for really long times uninterrupted. But Linux systems running for years really aren't unheard of, and by no means is one year the top of the range.

        Whether you should do that is another question. Over the course of a decade, there could well be even kernel-level vulnerabilities discovered, let alone ones in other services running on top of it. You might have a system running without a reboot for years as long as you make sure to update (and restart) user space services as needed. But leaving an entire server unattended for years doesn't sound like a good idea generally.

        That may not be as much of a concern if what the box is running is a limited set of services or functionality with little exposed surface. But that then comes more down to "what you're doing with it" rather than "which OS you're using".

        The generally less conservative development culture around Linux leans more towards moving fast and breaking things, although generally while trying to avoid the latter. Perhaps that makes things like low-level OS vulnerabilities or whether the system still restarts cleanly after a decade more important in the Linux land, and what counts as prudent administration in Linux might be less of a concern in BSD.

        But if you take a BSD box running for a decade, with some particular set of services, in an internal network(?), and compare it to someone else's report of a Linux box running for (at least) a year, in 1994 or 1995 -- probably running an entirely different workload, in a different environment (perhaps externally exposed?), and with no indication of why it may or may not have been restarted after that time -- that's not really a fair comparison either.

      • hk1337 a year ago
        Well, yeah, but also at that time Linux had not even been around for half a decade.

        My thought was more that maybe a lot of the issues people are having with Linux to push them to BSD is what we, developers collectively, have done to Linux over the last 2-3 decades.

        • lproven a year ago
          > Well, yeah but also at this time Linux had not even been around half a decade.

          Ah, yes, good point!

          > My thought was more that maybe a lot of the issues people are having with Linux to push them to BSD is what we, developers collectively, have done to Linux over the last 2-3 decades.

          Indeed so. Especially in the last decade or so: snap, Flatpak, Wayland, systemd, etc.

          It's being made gratuitously more and more complex to meet the demands of the main commercial users -- running cloud servers -- even if this makes life much more complicated for individual users.

          Which is why I advocate the BSDs as an alternative, but man, they are all forbidding and off-putting to beginners/novices, and some of them don't even realise why and how.

      • appendix-rock a year ago
        Yep. Not rebooting for a decade is in most cases an irresponsible ‘advanced rookie’ move. I’m more than aware of the greybeard-era uptime fetishism. I’ve more than dabbled in it. I’ve typed /exec uptime into my IRC client more times than I dare admit.

        But come on…

        All that’s been said about security updates etc. aside (some of which can be mitigated with that fancy in-place kernel update stuff), if something hasn’t rebooted in 10 years I’m going to be a bit nervous about what happens when it does reboot. If it’s in an uptime-fetishist environment, chances are that it’ll be rebooted at a time that’s... inconvenient, to say the least. Are you SURE that nothing has changed in that time? Some people are! Moreso than others, at least. But that’s extra work, and my bet is most places with these high-uptime machines aren’t putting that work in, or think they are and are doing it poorly.

  • 28304283409234 a year ago
    All my colleagues' home-server setups are tiny Kubernetes clusters of various kinds, breaking down in various ways.

    Mine is a Debian box. Upgraded from Debian 10 to 11 to 12. Running all kinds of things. I only use Debian packages alongside one or two Docker containers.

    Linux can do boring just fine.

    • bionsystem a year ago
      Debian is incredible and I think there is a lot of mutual respect between both communities (Debian and FreeBSD; Debian is even the default choice for the Linux compatibility layer). I wish Debian GNU/kFreeBSD would be a thing...
      • hiAndrewQuinn a year ago
        Debian is the FreeBSD of the Linuxes, and FreeBSD is the Debian of the BSDs.
    • RALaBarge a year ago
      Proxmox runs on top of Debian by default, which is a fun and easy way to increase the layers of abstraction. All of my homelab stuff, my DNS server, and different things run on Proxmox, running on an old T430 on my bookshelf.
  • JohnFen a year ago
    I'm halfway through switching all of my personal machines from Linux to BSD for pretty much this exact reason. It's going very well so far.
    • bionsystem a year ago
      I wish I could do that, but I can't recall any employer that would have allowed it. One of my employers was around 90% Solaris though (10 and 11) over 5000+ hostnames (VMs, zones, and physical), and it required very little maintenance; a team of 3 seasoned ops was enough to do everything up to L2/L3.
    • amelius a year ago
      I'm stuck with Linux on an nVidia Jetson that is locked by the manufacturer to Ubuntu :(
  • jmrm a year ago
    After reading the article and the comments, I sincerely don't know why one would use BSD outside of writing server software that uses the `pledge()` and `jail()` system calls, like when people build [BCHS servers](https://learnbchs.org/index.html).

    Maybe I'm biased because I had a worse experience running FreeBSD and OpenBSD on a desktop than with Gentoo, but I think modern Linux distros are as customizable and as stable, with a lot more community and professionals working on them than any BSD; with a business in mind, that last detail is really important.
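
    For anyone unfamiliar with `pledge()`: it is OpenBSD's system call for dropping a process down to a declared set of capabilities, after which any out-of-bounds syscall kills the process. A minimal sketch — the `#ifdef` guard is only so the snippet compiles elsewhere; `pledge(2)` itself exists only on OpenBSD:

    ```c
    #include <stdio.h>
    #ifdef __OpenBSD__
    #include <unistd.h>
    #endif

    int main(void) {
    #ifdef __OpenBSD__
        /* Restrict this process to stdio and read-only filesystem access.
           Any other syscall from here on aborts the process. */
        if (pledge("stdio rpath", NULL) == -1) {
            perror("pledge");
            return 1;
        }
    #else
        /* pledge(2) is OpenBSD-only; this branch just documents the idea. */
        puts("pledge(2) unavailable on this platform");
    #endif
        puts("running with reduced privileges");
        return 0;
    }
    ```

    `jail(2)` on FreeBSD works at a different granularity (whole-system partitioning rather than per-process promises), but both are examples of the OS-level sandboxing the BCHS crowd builds on.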

  • BiteCode_dev a year ago
    Every company has Linux servers with crazy uptimes.

    My servers never crash; my Linux desktop does.

    I assume they don't put BSD on their clients' laptops.

  • lproven a year ago
    This is my article -- but it's also a dupe...

    https://news.ycombinator.com/item?id=41776849

  • boricj a year ago
    I'm currently building my homelab and I'm seriously considering SmartOS as a hypervisor.

    I've never used Solaris or illumos before, but I'm looking for something bullet-proof, idiot-proof and maintenance-free. The mainstream solution would probably be Proxmox, and while I know how to administer a Debian system, I don't trust myself as a sysadmin while acting in a personal capacity. Proxmox intrinsically just doesn't bring the peace of mind that I won't accidentally blow it up while away from my apartment.

    Just because a solution isn't mainstream doesn't mean it's not worth taking a look at. Even if you don't end up selecting them, it brings a healthy perspective that you wouldn't have otherwise.

    • timc3 a year ago
      I’ve run SmartOS, but mainly use OmniOS. As with all these, they are very good, but make sure to run them on supported hardware. They have good documentation and a very small community, and if you run into problems then debugging them is very possible – but not for the technically challenged.

      OmniOS I basically run as an appliance, and it has been trouble-free since the start.

  • cowsandmilk a year ago
    > easy deployment isn't as important as easy longterm maintenance and support

    This veers close to treating your servers as pets instead of cattle. Which is fine if you’re small (99% of services are), but not great if you have thousands of servers and scale up and down routinely.

    That said, I don’t feel like that quote actually represents BSD vs Linux at all. You can have easy deploys and long term maintenance on either.

  • le-mark a year ago
    Reminds me of a new CTO who, coming from a Windows Server background, demanded we reboot the servers (Linux) to prepare for the holiday rush (e-commerce site, mid-oughts). Our admins had to explain we don’t do that. He persisted, so the admins stayed late and then went for beers, not rebooting a thing. IIRC this was outside of a maintenance window, and it simply wasn’t necessary.
  • andrewstuart a year ago
    I just don’t buy the story that Linux is unreliable.

    It’s kind of a ridiculous thing to claim without very substantial proof.

    I think it’s made up to justify installing BSD.

    • timc3 a year ago
      It’s not inherently unreliable, but it can be a bit, depending on what you are doing. Debian stable on very mainstream server hardware is fine. Newer filesystems, cutting-edge kernels, interesting hardware – and your results may vary.
  • zicd a year ago
    Sounds fun, not boring to me
  • appendix-rock a year ago
    I don’t buy this article.

    > Clients are often influenced by hype. A few years ago, it was "Linux is a toy." Now, it's "Why bhyve and not Proxmox?" They ask, "How can they sell FreeBSD? There's no AI, there's no Cloud, there's no Kubernetes, there's no blockchain – there's nothing!"

    I am very confident that this is more ‘fan fiction’ than the author would like to admit. The sort of hypothetical that someone cooks up in their head to anger up the blood and then self-soothe by thinking about how much better than everyone else they are.

    Why does everything have to be so bloody religious? If you like boring tech, stop politicising it.

    • OsrsNeedsf2P a year ago
      You sure?

      I made a semi-successful blockchain-based product 5 years ago. At two separate employers, I was urged to build a blockchain-based solution to… well, actually, the problem was not described in either case, only the use of blockchain.

      • viraptor a year ago
        If the described people understand so little about tech, it doesn't even matter what they say. You can try to educate them or say something even more BS for them to agree with (blockchain 2 on kubershmetes).
    • mst a year ago
      The vast majority of the comments on this article are people being religious about Linux.

      I'll be over here running a mixture of Debian Stable and FreeBSD and wondering why everybody insists on getting so angry about these things.

    • krick a year ago
      Yeah, as if Linux has AI and Blockchain and these don't exist on BSD systems...

      Admittedly, Linux really isn't boring enough for me. By boring I mean I don't really want to notice that the OS exists: I want a distro that has most answers to any noob question on Google, all drivers for all hardware out of the box, no fancy package managers like snap or cool future technology like ZFS, and no way to shoot myself in the foot however hard I try... But in fact almost all cool future technology comes to Linux from BSD, yet BSD has fewer drivers, fewer packages, and googling yields fewer results. Everything else is pretty much the same everywhere and always comes with caveats.