562 points by zdw 8 days ago | 49 comments
  • adrian_b8 days ago
    Looking at the sparse documentation of openrsync gives me no confidence that it can be an acceptable substitute for rsync.

    In my opinion, any program that is supposed to copy files, but which is not able to make perfect copies, i.e. copies that do not lose any bit of data or metadata that was present in the original file, is just unusable garbage.

    Unfortunately, most copying programs available in UNIX-like operating systems (and also many archiving programs) do not make perfect file copies with their default options and many of them are never able to make perfect copies, regardless of what options are used.

    I have not looked recently at the scp command of ssh, but at least until a few years ago it was not possible to make perfect file copies with scp, especially when the copies were done between different operating systems and file systems. That is why I never use scp, but only rsync over ssh.

    Rsync is the only program that I have seen, which is able (with the right options) to make perfect file copies even between different operating systems and file systems (for instance between FreeBSD with UFS and Linux with XFS), preserving also metadata like extended file attributes, access control lists and high-precision file timestamps (some copying programs and archiving programs truncate high-precision timestamps).

    The current documentation of openrsync does not make any guarantee that it can make complete file copies, so by default I assume that it cannot, so for now it is a program that I consider useless.

    Besides rsync for copying, one of the few Linux archiving programs that can archive perfect file copies is bsdtar (when using the pax file format; the ancient tar and cpio file formats cannot store all modern file metadata).

    (FYI: I always alias rsync to '/usr/bin/rsync --archive --xattrs --acls --hard-links --progress --rsh="ssh -p XXX -l YYYYYYY"')

    (With the right CLI options, "cp" from coreutils can make perfect file copies, but only if it has been compiled with appropriate options; some Linux distributions compile coreutils with wrong options, e.g. without extended file attributes support, in which case "cp" makes only partial file copies, without giving any warnings or errors.)
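
    (A quick way to check whether a given cp or rsync binary really preserves extended attributes is something along these lines; the file names and the attribute name are just examples, and this assumes the Linux "attr" tools are installed:)

        touch /tmp/a
        setfattr -n user.test -v hello /tmp/a   # attach a user extended attribute
        cp --preserve=all /tmp/a /tmp/b         # or: rsync -aX /tmp/a /tmp/b
        getfattr -n user.test /tmp/b            # fails loudly if the attribute was silently dropped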

    • inglor8 days ago
      As a contrast to your take - I work for a backup company and I was really surprised to discover that most of our customers (big enterprises) really do not care whether 99% of the metadata is restored correctly and are fine with just restoring the data.

      (We restore everything super carefully but sometimes I feel like we're the only ones who care)

      • nolok8 days ago
        I'm willing to bet a decent number "don't care" until they do care because their permissions don't work or their time based script screws up or whatever else nobody thinks about when they're in panic mode about "I lost my data".
        • ExoticPearTree8 days ago
          In case of a complete disaster recovery, the fact that a script or two might fail is super OK. That's why after recovery there's always the cleanup phase where you fix stuff that broke during recovery.
      • mmcnl8 days ago
        They don't care because you care, so they never experienced the misfortune of not caring.
        • crabbone8 days ago
          Nah. Not really. A lot of the useful data out there doesn't need ACLs, precise dates (or any dates at all), etc.

          Also, a lot of application-specific data formats already don't care about the "extra" attributes available in various filesystems because those aren't universally supported and implement them themselves in the file format they operate on. For example, DICOMs, or password-protected PDFs or Zip archives etc.

          • treve7 days ago
            Extended attributes (and resource forks) are mostly a liability and anti-pattern because of their non portability. It would be a huge red flag to find something important in there other than cases of backing up entire OS images.
      • mbrumlow8 days ago
        I too have worked at a backup company and I can’t recall any of the customers caring or even knowing about the metadata.

        We would only care if the software our customers were running did. Big enterprise software suites were designed to run in hostile environments; as such, they mostly rely on their own data formats and don’t care about attributes from the filesystem beyond having access to the files.

      • mohas8 days ago
        I'm with you on this, I think that data is 99% of what is important and the rest can be recreated or improvised, and if in your system you rely too much on file metadata you need more engineering
        • buttercraft8 days ago
          > in your system you rely too much on file metadata you need more engineering

          Except sometimes it's a 3rd party's app whose data you have to restore, and you don't have control over their engineers.

        • ForHackernews8 days ago
          If that information drives operational processes then you can argue it is data, not metadata.
          • wyclif8 days ago
            The output of the command the OP mentions

              $ /usr/bin/rsync --version
            
            ...doesn't return anything referencing openrsync. I'm on Sequoia 15.3.1.
            • luckman2128 days ago
              The change was made in 15.4
              • unilynx8 days ago
                ugh. these kinds of changes should not be made in a minor release...
                • dcow8 days ago
                  Is it a breaking change?
      • m4637 days ago
        That's the kind of nonsense thinking that leads to folks like Apple removing critical features that "no one uses".

        Reminds me of that Yogi Berra quote: "nobody goes there anymore, it's too crowded".

        For example, many people don't even understand target disk mode on Apple hardware, but it has saved me countless hours over the years and made administering Apple systems a breeze. Ask people who've used target disk mode if they can imagine going without it.

        On another subject - it's worth mentioning that Time Machine is based on rsync.

        • treve7 days ago
          Apple is the wrong horse to back on for this sort of thing.
    • dcow8 days ago
      > The current documentation of openrsync does not make any guarantee that it can make complete file copies, so by default I assume that it cannot, so for now it is a program that I consider useless.

      Is it possible this is just a documentation style-tone mismatch? My default assumption would be that openrsync is simply a less restrictively licensed rsync, and I wouldn’t assume it works any differently. Have you verified your strong hypothesis, or are you just expressing skepticism? It’s hard to tell exactly.

      Edit: I read the openrsync readme. It says it’s compatible with rsync and points the reader to rsync’s docs. Unless extended file attributes, ACLs, and high resolution timestamps are optional at the protocol level, it must support everything modern rsync supports to be considered compatible, right? Or are you suggesting it lies and accepts the full protocol but just e.g. drops ACLs on the floor?

      • wkat42428 days ago
        From the article:

        > The openrsync command line tool is compatible with rsync, but as noted in the documentation openrsync accepts only a subset of rsync’s command line arguments.

        • dcow8 days ago
          Yes but that doesn't necessarily mean it is lacking the functionality to fully copy metadata. It could mean that openrsync has removed archaic and vestigial options to simplify implementation.
          • wkat42427 days ago
            Hmm those archaic and vestigial options are probably still a pillar in many usecases :)

            I'm still using the multi volume support in tar for example. Which was something that stems from the time when tar was used for tape archives (hence the name tar) on actual tapes. Without that I'd be really screwed because I use a box full of harddrives as backup "tapes" (which works surprisingly well I must say, I needed a small restore only a week or two ago and it really saved my bacon). But I bet 99.9% of tar users have no idea it can even do that.
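
            For anyone curious, the multi-volume mode looks roughly like this (GNU tar from memory, so check the man page; the size and the paths are made up):

                tar --create --multi-volume --tape-length=100G --file=/mnt/backupdisk/backup.tar /home   # prompts for the next "volume" when full
                tar --extract --multi-volume --file=/mnt/backupdisk/backup.tar                           # feed the volumes back in order when prompted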

            Rsync is another one of those swiss army knives that people use for a lot more stuff than you might expect. Especially the remote capabilities are amazing.

            The problem is that when you clone something but don't provide full compatibility, you're putting your users through much headscratching and frustration. It would be better to not name it after the original then so it's clear it's something different.

            • dcow7 days ago
              It says it’s fully compatible which I take at face value. I guess I’m curious if there’s a real problem here and openrsync is missing support for 0.1% of use cases, or if it’s just pessimistic speculation.
            • sbuk7 days ago
              • wkat42427 days ago
                Well, yes. I do really hate opinionated software (e.g. Apple, or GNOME). I do tend to find the weird niches that work for me. I'm currently using KDE and I've totally worked it over. Which is great because I can be much more productive if I'm not constantly fighting against the UI. But yeah such tools with millions of niche features are great for me.

                The software packages I really value the most are the ones where a situation causes something really weird I need to do, and I read in the documentation to try to find some workaround, and then I discover that it already has exactly that feature that I need hidden in there somewhere. It's like the developer read my mind :) There's been very few packages that I truly cherished (and very few in this day and age, software in the early PC days was often more powerful IMO).

                One of them was SP (SK Packet Radio), where this happened several times. That was truly amazing software, there was so much it could do and it all worked on an 8088 together with a TSR-based softmodem (connected to a radio not a phone line). Wow. Even the insanest stuff that popped into my head I could make happen with just some settings.

                • chupasaurus7 days ago
                  • wkat42426 days ago
                    That's a really decent discussion though, from both sides. The option was seriously considered.

                    It's a world of difference to Gnome devs that will just shut down everything.

                    PS I do think dolphin is the weakest link in the KDE experience though. But they have made some really good improvements with KDE 6. Like the typable crumb trail.

                    It's a bit of a contrast though with macOS, where the Finder is one of the least opinionated parts of the OS (and thus, for me, one of the best). I think that's more of a historical thing though; Apple's vision seems to be more centered around moving file management into the domain of individual apps, like on iOS. That's another thing I don't like, but I think iOS has loosened that somewhat, as they had to concede it was necessary to make the iPad more of a productivity device (it still really isn't one though)

      • kt515 days ago
        > it must support everything modern rsync supports to be considered compatible, right?

        Apple's openrsync says that it is compatible with rsync protocol versions 27 - 29, which predate rsync 3.X. I don't consider it modern.

    • WhyNotHugo8 days ago
      OpenRsync is from the OpenBSD project. This is typically an indicator of good quality and a good focus on security. However, in this case, even the official website indicates:

      > We are still working on it... so please wait.

      • SoftTalker8 days ago
        OpenBSD often takes an approach of removing rarely-used or archaic functionality to achieve simplicity in code or configuration or improved security. They gutted a lot of openssl when they made libressl. Their OpenSMTPD is vastly simpler than something like postfix or sendmail.

        openrsync is very likely good code, but that doesn't mean it replicates every feature of the rsync utility.

      • onetom8 days ago
        OpenBSD removed ACL support though, iirc
        • SoftTalker7 days ago
          OpenBSD's filesystem doesn't have them. It just has normal unix permission bits.
    • graemep8 days ago
      This is a licensing issue for Apple, and only a small proportion of their users will care about this, and those users will just install rsync.
      • adrian_b8 days ago
        You are right, but I have written my comment precisely to make those users aware of this problem.

        I consider this a very serious problem, because most naive users will assume automatically that when they give a file copy command they obtain a perfect duplicate of the original file.

        It is surprising for them to discover that this is frequently not true.

      • sneak8 days ago
        And the rsync that has historically come with macOS was always way out of date, so we end up installing a newer one anyway. This doesn’t change much.
    • nix0n7 days ago
      I was able to find (in source-code form) the list of what arguments openrsync accepts[0].

      Of the options you use in your alias, --xattrs, --acls, and --hard-links are all missing.

      [0]https://github.com/kristapsdz/openrsync/blob/a257c0f495af2b5...

    • karel-3d8 days ago
      The rsync version Apple currently ships is from 2006. It predates the iPhone.
      • adestefan8 days ago
        That's the last GPLv2-only version of rsync.
        • _fat_santa8 days ago
          Honest question: what prevents Apple from using software that's licensed under GPL v3 vs v2?
          • blokey8 days ago
            The risk is that Apple code-signs all the executables they ship, and that someone could try to use GPLv3 either to force Apple to hand over their signing keys so people can run their own versions (the anti-Tivoization clauses), or to restrict Apple from suing someone for patent infringement because they've shipped GPLv3 software.

            Valid or not in anyone else's opinion, it doesn't really matter: the risk that someone will attempt to use a court to enforce one of these tends to mean companies don't want to even go near it.

            Working in a bank, we won't touch anything GPLv3, even to build our software/services or mobile app, because we don't want to even open that Pandora's box.

            We don't have to find out if a court would try to force us to release our signing keys if we don't use or ship any code that contains language that could in some ways be phrased to do that.

            • imglorp8 days ago
              Why do orgs like this bank sometimes also ban GPL3 for internal use, even if it's not part of the product?
              • blokey7 days ago
                For the same reason we spent £1.8m "licensing" iText PDF for Java..... And removing it with extreme prejudice immediately afterwards.

                We had a very keen developer upgrade all the libraries in our codebase as a "reducing technical debt" task that they decided to undertake themselves.

                They couldn't get something working and posted a stack trace to ask for help..... Some enterprising salesperson at iText saw it, emailed them offering to help, and asked what they were running; the developer effectively told them they were running version 5, which they hadn't even checked (or possibly didn't understand) is relicensed under AGPL or a commercial license.

                The legal threats from iText and the resulting fallout means we now do not allow developers access to the internet from their machines, even via a proxy, they have a separate RDP machine for that.

                And they can only pull in libraries that are scanned via jFrog xRay and ensure the license of said library is "acceptable".

                On the plus side, means we're doing something about supply-chain vulnerabilities.

              • antonvs7 days ago
                There's a risk that someone uses such a library the wrong way. A big part of the goal of legal compliance and security at large enterprises is to protect staff from doing dumb things that could have bad consequences, and one of the easiest ways to do that is to ban things that are particularly prone to that. It's a blunt weapon, but a more targeted one requires much more work and care.
          • creatonez5 days ago
            Nothing prevents it. All they would have to do is loosen the DRM just enough to make just the GPLv3 stuff modifiable. It would be incredibly trivial for Apple to start shipping GPLv3 software, but they are stubborn.
          • dcow8 days ago
            GPLv3 I believe includes language that could be construed to cover your entire software distribution. IOW, shipping a GPLv3 thing with the OS puts Apple at a very minor risk that a court could decide that everything distributed with rsync must also be able to be compiled by the end user.
    • scrapheap8 days ago
      What do you mean by perfect copies here? Do you mean the file content itself or are you also including the filesystem attributes related to the file in your definition?
      • adrian_b8 days ago
        A file consists of data and various metadata, e.g. file name, timestamps, access rights, user-defined file attributes.

        By default, a file copy should include everything that is contained in the original file. Sometimes the destination file system cannot store all the original metadata, but in such cases a file copying utility must give a warning that some file metadata has been lost, e.g. like when copying to a FAT file system or to a tmpfs file system as implemented by older Linux kernels. (Many file copy or archiving utilities fail to warn the user when metadata cannot be preserved.)

        Sometimes you may no longer need some of the file metadata, but the user should be the one who chooses to lose some information; it should not be the default behavior, especially when this unexpected behavior is not advertised anywhere in the documentation.

        The origin of the problem is that the old UNIX file systems did not support many kinds of modern file metadata, i.e. they did not have access control lists or extended file attributes and the file timestamps had a very low resolution.

        When the file systems were modernized (XFS was the first Linux file system supporting such features, then slowly the other file systems were modernized as well), most UNIX utilities were not updated until many years later, and even then the additional features remained disabled by default.

        Copying like rsync, between different computers, creates additional problems, because even if e.g. both Windows and Linux have extended file attributes, access control lists and high-resolution file timestamps, the APIs used for accessing file metadata differ between operating systems, so a utility like rsync must contain code able to handle all such APIs, otherwise it will not be able to preserve all file metadata.
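
        (For concreteness, on a typical Linux system the metadata I am talking about is spread over several interfaces; the file name is a placeholder and the "acl"/"attr" packages are assumed to be installed:)

            stat somefile              # mode, owner, size, high-precision timestamps
            getfacl somefile           # POSIX access control lists
            getfattr -d -m - somefile  # extended attributes in all namespaces, not only user.*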

        • scrapheap8 days ago
          But what you're referring to here are the attributes that the file system stores about the file, not the file itself. By default I wouldn't expect a copy of a file to have identical file system attributes, just an identical content for the file. I would expect some of the file system attributes to be copied, but not all of them.

          Take the file owner, for example: if I take a copy of a file, then by default I should be the owner of that file, as it's my copy of the file and not the original file owner's copy.

          An alternative way of looking at it is that if I have created a file on my local machine that's owned by root and has the setuid bit set in its file permissions, then there's no way that I should be able to copy that file up to a server with my normal user account and have those attributes still set on the copy.

          • bmacho8 days ago
            > But what you're referring to here are the attributes that the file system stores about the file, not the file itself.

            Yes. Sometimes you need that additional information too. And if you do, then rsync is your tool. If you only need the data stored in the file, then drag & drop suffices.

          • LoganDark8 days ago
            "File" means an entry in the file system, and so includes the metadata. It is not only the data.

            When you copy a file you will be the owner because the new copy is your copy. Other attributes, however, like the modification date, will remain the same. It's not as if you wrote the contents of the file anew, especially not for copy-on-write architectures like Apple's APFS.

            • scrapheap8 days ago
              So you also would expect some of the file system attributes to be copied, but not all of them. :D
              • LoganDark8 days ago
                I expect all of them to be copied except for specifically the owner and group. Created date, modified date, ACLs, extended attributes, eeeverything else.

                My expectations are more specific than "not all of them", so please don't misrepresent them.

                • scrapheap8 days ago
                  Out of interest, why wouldn't you expect the created timestamp for a file that you've created by copying another file to be the point in time which the copy was made? After all, before that moment the file didn't exist, and after that moment it did.
                  • brulard8 days ago
                    For some context you may want the new file creation time, but if I copy a folder of some backups for example, I don't want every file to have date set for today. I'll lose the possibility to filter files based on creation date, which is very useful for such use case. I don't remember that I would ever need a copy to have creation date reset.
                  • Galanwe7 days ago
                    Most tools that sync files (in contrast to mere copies) need a way to know which files need to be copied, and which can be skipped. The expensive way is to perform a checksum, but most sync tools rely on the creation or modified date unless told otherwise.

                    Now say Alice and Bob have the same copy of file F, Bob modifies it first which gets stored at timestamp T, then Alice modifies her copy at time T+1.

                    Bob syncs his files on a filer, its timestamp gets reset to now, which is say T+2. Then Alice does the same, but her file does not get copied, since the remote timestamp T+2 is newer than her local timestamp T+1.
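
                    With rsync specifically, the difference looks roughly like this (host and paths are placeholders):

                        rsync -r  src/ host:dest/   # default quick check: size + mtime, which breaks if mtimes were reset
                        rsync -rt src/ host:dest/   # -t preserves mtimes, so the quick check stays meaningful
                        rsync -rc src/ host:dest/   # -c compares checksums instead: correct but expensive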

                  • LoganDark8 days ago
                    macOS has "date added" for this, which is the date the file was added to its containing folder. It's not the exact same as the date created that you're talking about, though.

                    I honestly don't have a strong preference either way on this. I don't use date created except for misbehaving media downloaders that think the file modified date is a good place to put the video publication date. I'm sure there's a flag somewhere that I don't care enough to find.

          • bayindirh8 days ago
            As a counterpoint, many daemons or programs (e.g.: sshd, ssh, slurm, munge to name a few) expect their files to have specific users, groups and modes for security and behavioral guarantees, and flat out refuse to run if these requirements are not met.

            When installing these things from archives or moving/distributing relevant files to large fleets, I expect the file contents and all metadata incl. datestamps to be carried the way I want, because all of that data is useful for me and the application which uses the file.

            If the user doing the copying has no right to copy the file exactly, I either expect a loud warning or an error depending on the situation.
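
            A familiar example of what I mean (user name and paths are placeholders): OpenSSH refuses keys and config files that end up with the wrong owner or mode after a sloppy copy, so a restore has to put these back exactly:

                chown -R someuser:someuser /home/someuser/.ssh
                chmod 700 /home/someuser/.ssh
                chmod 600 /home/someuser/.ssh/id_ed25519 /home/someuser/.ssh/authorized_keys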

            • op00to8 days ago
              Should the SELinux context of a file always be copied from the source when moving or copying it? Or should it typically inherit the context defined by policy for the destination directory structure?

              For example, copying a file from a user's home directory (perhaps user_home_t) into /var/www/html/ usually requires it to get the httpd_sys_content_t context (or similar) to be served by the webserver correctly and securely. Blindly copying the original user_home_t context would likely prevent the webserver from accessing the file.

              Doesn't this suggest that some metadata, specifically the SELinux context, often shouldn't be copied verbatim from the source but rather be determined by the destination and the system's security policy?

              • bayindirh8 days ago
                What if the tool accessing the file is malicious, and can copy the file, but can't change the context of said file? SELinux shall be strict in its behavior even if it's a detriment to user convenience.

                SELinux contexts shall be sticky, and need to be manually (re)set after copying.

                This is the default behavior, BTW. SELinux contexts are not (re)set during copy operations in most cases, from my experience. You need to change/fix the context manually.

                • op00to7 days ago
                  I think when I cp a file it takes on the context of the directory or whatever the default context for that path is supposed to be, and when I mv, it retains the original context.
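
                  Either way, if I remember the tools right, checking and falling back to the policy default looks like this (path reused from the example above):

                      ls -Z /var/www/html/index.html          # show the file's current SELinux context
                      matchpathcon /var/www/html/index.html   # what policy says it should be at that path
                      restorecon -v /var/www/html/index.html  # relabel the file to the policy default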
            • prmoustache8 days ago
              That is not what most file copying tools do by default. They usually only do that when you specify it and for good reasons.

              When foo copies a file from user bar and puts it in his homedir, the last thing he wants is for it to be owned by the foo user.

              Your expectations are irrealistics.

              • bayindirh8 days ago
                > That is not what most file copying tools do by default.

                Yes, and that's OK.

                > When foo copies a file from user bar and puts it in his homedir, the last thing he wants is for it to be owned by the foo user.

                It depends.

                > Your expectations are irrealistics (sic).

                No, rsync can do this (try -avSHAX) and tar does this by default, and we're talking about rsync here.
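
                Spelled out, since the flag soup is cryptic:

                    rsync -avSHAX src/ dest/
                    # -a  archive: recurse and keep permissions, owner/group, mtimes, symlinks, devices
                    # -v  verbose
                    # -S  handle sparse files efficiently
                    # -H  preserve hard links
                    # -A  preserve ACLs (implies permissions)
                    # -X  preserve extended attributes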

                • prmoustache8 days ago
                  > rsync can do this (try -avSHAX)

                  That is exactly what I am saying: rsync does not do this by default either, you have to tell it to via optional parameters.

                  • bayindirh8 days ago
                    The thing is, if you’re knowledgeable enough to use rsync over cp, you already know the relevant flags to do that.
          • mannyv7 days ago
            The execute bit is an attribute that the FS stores about the file, and isn't technically part of the file itself.

            Strip all the execute attributes out of your *nix system and see what happens.

        • prmoustache8 days ago
          The cp command does copy the file data but not the metadata. There is a reason we have come up with 2 words to distinguish them.

          Rsync only copies the metadata when you specifically ask it to anyway. I haven't had a look at the openrsync man page, but I would assume it is the same in the case of the latter.

          • adrian_b7 days ago
            Nope.

            Openrsync lacks the options of rsync for making exact copies.

            Moreover, the OpenBSD file systems are unable to store all metadata that can accompany files in Linux filesystems or Windows filesystems, so that is the likely reason for removing the rsync options.

            I also doubt that the developers of a utility for OpenBSD are interested in taking care to preserve file metadata when copying to/from Windows, because the metadata access API is not portable, so a complete "rsync" utility must include specific code paths at least for Windows, for Linux and for FreeBSD. I do not know if the API of MacOS is also specific to it, or whether it is compatible with anything else.

      • fhars8 days ago
        It means that if you copy a file from NTFS to ext4, ext4 will magically sprout support for alternate data streams.
        • johnisgood8 days ago
          And all files from NTFS have +x. :|
    • nickelpro7 days ago
      I actively do not want this in a file copy utility; relying on extended file attributes is a massive anti-pattern. If you care about timestamps, they go in the file format itself. If you care about permissions, those belong in the provisioning and access systems in front of the file: the web application or other API that is providing the access.

      I expect file attributes of the target to be what I say they should be, not copied over from wherever the content happened to live before.

      • shwouchk7 days ago
        you only ever look at files through a web application or other api?
        • nickelpro7 days ago
          Of course not, but I don't rely on the extended file attributes for anything important such that they need to be replicated during copies
    • naikrovek8 days ago
      I think Apple's choice here is less about functionality and more about the allergy that companies have to AGPL and GPLv3. To be clear this is an allergy that the GPL authors intended to create for this very reason.

      I personally dislike the GPL because I think my modifications should belong to me. I spent the effort on them, and I don't think any license which requires me to forfeit my effort is worth spending time with. Corporations agree and this is why the code licensed with GPLv2 is sparingly accepted and code licensed with GPLv3 is outright rejected by most large companies.

      • thisislife28 days ago
        What effort is "forfeited" (if you are talking about open source code)? If you use any GPL software, any modifications you make belong to you until you choose to distribute the software with your modifications. Modifying GPL software for personal use doesn't mean that you are obliged to make your modifications open source as well. Moreover, GPL also means that you can never be denied access to the source code of GPL software that is publicly distributed. This is because GPL protects a user's "right to repair".

        For example, consider the case of software distributed under a permissive license like MIT or BSD. If you modify and re-distribute it, anyone else can further modify the software that you patched and improved, but they are not obliged to release the new source code to you. In such a scenario, you are willingly "forfeiting" your effort. With GPL, at least the right to access future source code (of publicly distributed GPL software) cannot be denied - not just to you, but to any user of the software.

    • LeFantome8 days ago
      Is this why bsdtar is popular, even on Linux systems that otherwise use GNU utils? I have often wondered why bsdtar is chosen. You see it in the AUR for example.
      • adrian_b7 days ago
        Bsdtar certainly has additional features over GNU tar.

        I switched to bsdtar many years ago precisely because I discovered that, at least at that time, it was the only Linux utility that could make exact archives for the backup of my filesystems.

        I make extensive use of extended file attributes. For instance any file on my filesystems has a hash stored in an extended attribute for detecting errors/modifications in its content and for deduplication (the hash is updated whenever the file is modified intentionally).

        When I make backups, I always store at least two copies on different media and it is essential that the file hashes in extended file attributes are preserved by the archiving/backup program, so that I will be able to detect corrupted files if I try to restore them. If some file is corrupted, I can retrieve it from the other backup copy. This has saved me several times with archives stored for many years, because neither HDDs nor any other available archival media are currently reliable enough to trust them for long-term storage without errors.

        Like I have said, for modern backups one must use the pax file format. GNU tar, and also other "tar" programs have made some custom non-standard extensions to the standard "tar" file format, in order to be able to store some things not allowed in standard tar files, but those workarounds are inferior to what can be done in the more recent "pax" file format.
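
        (As a rough illustration of the workflow; "user.sha256" is just the attribute name I happen to use, the paths are made up, and the bsdtar options are from memory, so check the man page:)

            setfattr -n user.sha256 -v "$(sha256sum myfile | cut -d' ' -f1)" myfile   # store the content hash as an xattr
            bsdtar --format pax --xattrs --acls -cf backup.tar /data                  # pax format keeps the extended metadata
            getfattr -n user.sha256 --only-values myfile; echo                        # after a restore, compare this...
            sha256sum myfile                                                          # ...against the actual content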

  • thrdbndndn8 days ago
    As a relatively new Linux user, I often find the "versioning" of bundled system utilities also to be a bit of a mess, for lack of a better word.

    A classic example, at least from my experience, is `unzip`. On two of my servers (one running Debian and the other an older Ubuntu), neither of their bundled `unzip` versions can handle AES-256 encrypted ZIP files. But apparently, according to some Stack Overflow posts, some distributions have updated theirs to support it.

    So here is what I ran into:

    1. I couldn't easily find an "updated" version of `unzip`, even though I assume it exists and is open source.

    2. To make things more confusing, they all claim to be "version 6.00", even though they obviously behave differently.

    3. Even if I did find the right version, I'm not sure if replacing the system-bundled one is safe or a good idea.

    So the end result is that some developer out there (probably volunteering their time) added a great feature to a widely used utility, and yet I still can’t use it. So in a sense, being a core system utility makes `unzip` harder to update than if it were just a third-party tool.

    I get that it's probably just as bad if not worse on Windows or macOS when it comes to system utilities. But I honestly expected Linux to handle this kind of thing better.

    (Please feel free to correct me if I’ve misunderstood anything or if there’s a better way to approach this.)

    • adwf8 days ago
      In the specific case here, 7z is your friend for all zips and compressed files in general, not sure I've ever used unzip on Linux.
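
      For the AES-256 case you ran into, something like this should do it (assuming p7zip is installed; the archive name is made up):

          7z l -slt encrypted.zip   # list entries with technical details, including the encryption method
          7z x encrypted.zip        # extract; prompts for the password and handles AES-256 entries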

      Related to that, the Unix philosophy of simple tools that do one job and do it well, also applies here a bit. More typical workflow would be a utility to tarball something, then another utility to gzip it, then finally another to encrypt it. Leading to file extensions like .tar.gz.pgp, all from piping commands together.
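
      Roughly, with gpg standing in for the pgp step and the file names made up:

          tar -cf - mydir | gzip | gpg --symmetric -o backup.tar.gz.gpg   # pack, compress, encrypt
          gpg --decrypt backup.tar.gz.gpg | gunzip | tar -xf -            # and back again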

      As for versioning, I'm not entirely sure why your Debian and Ubuntu installs both claim version 6.00, but that's not typical. If this is for a personal machine, I might recommend switching to a rolling release distro like Arch or Manjaro, which at least give up-to-date packages on a consistent basis, tracking the upstream version. However, this does come with its own set of maintenance issues and an increased expectation of managing it all yourself.

      My usual bugbear complaint about Linux (or rather OSS) versioning is that people are far too reluctant to declare v1.00 of their library. Leading to major useful libraries and programs being embedded in the ecosystem, but only reaching something like v0.2 or v0.68 and staying that way for years on end, which can be confusing for people just starting out in the Linux world. They are usually very stable and almost feature complete, but because they aren't finished to perfection according to the original design, people hold off on that final v1 declaration.

      • Squossifrage8 days ago
        Info-Zip Unzip 6.00 was released in 2009 and has not been updated since. Most Linux distros (and Apple) just ship that 15-plus-year-old code with their own patches on top to fix bugs and improve compatibility with still-maintained but non-free (or less-free) competing implementations. Unfortunately, while the Info-Zip license is pretty liberal when it comes to redistribution and patching, it makes it hard to fork the project; furthermore, anyone who wanted to do so would face the difficult decision of either dropping or trying to continue to support dozens of legacy platforms. Therefore, nobody has stepped up to take charge and unify the many wildly disparate mini-forks.
      • DonHopkins8 days ago
        The "Unix Philosophy" is a bankrupt romanticized after the fact rationalization to make up excuses and justifications for ridiculous ancient vestigial historic baggage like the lack of shared libraries and decent scripting languages, where you had to shell out THREE heavyweight processes -- "[" and "expr" and a sub-shell -- with an inexplicable flurry of punctuation [ "$(expr 1 + 1)" -eq 2 ] just to test if 1 + 1 = 2, even though the processor has single cycle instructions to add two numbers and test for equality.
        • chubot8 days ago
          ??? This complaint seems more than 20 years too late

          Arithmetic is built into POSIX shell, and it's universally implemented. The following works in basically every shell, and starts 0 new processes, not 2:

              $ bash -c '[ $((1 + 1)) = 2 ]; echo $?'
              0
              $ zsh -c '[ $((1 + 1)) = 2 ]; echo $?'
              0
              $ busybox ash -c '[ $((1 + 1)) = 2 ]; echo $?'
              0
          
          YSH (part of https://oils.pub/ ) has a more familiar C- or JavaScript-like syntax:

              $ ysh -c 'if (1 + 1 === 2) { echo hi }'
              hi
          
          It also has structured data types like Python or JS:

              $ echo '{"foo": 42}' > test.json
              $ ysh
              ysh-0.28$ json read < test.json
              ysh-0.28$ echo "next = $[_reply.foo + 1]"
              next = 43
          
          and floats, etc.

              $ echo "q = $[_reply.foo / 5]"
              q = 8.4
          
          https://oils.pub/release/latest/doc/ysh-tour.html (It's probably more useful for scripting now, but it's also an interactive shell)
          • DonHopkins8 days ago
            20 years doesn't even get you back to the last century, it's more like 48 years since 1977 when Bourne wrote sh. As one of the authors of the Unix Haters Handbook, published relatively recently in 1994, and someone who's used many versions of Unix since the 1980's, of course I'm fully aware that those problems are a hell of a lot more than 20 years old, and that's the whole point: we're still suffering from their "vestigial historic baggage", arcane syntax and semantics originally intended to fork processes and pipe text to solve trivial tasks instead of using shared libraries and machine instructions to perform simple math operations, and people are still trying to justify all that claptrap as the "Unix Philosophy".

            Care to explain to me how all the problems of X-Windows have been solved so it's no longer valid to criticize the fallout from its legacy vestigial historic baggage we still suffer from even today? How many decades ago did they first promise the Year of the Linux Desktop?

            The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.

            https://medium.com/@donhopkins/the-x-windows-disaster-128d39...

            Why it took THREE processes and a shitload of context switches and punctuation that we are still stuck with to simply test if 1 + 1 = 2 in classic Unix [TM]:

              [ "$(expr 1 + 1)" -eq 2 ]
            
            Breakdown:

              expr 1 + 1
            
            An external program used to perform arithmetic.

              $(...) (Command substitution)
            
            Runs expr in a subshell to capture its output.

              [ ... ]
            
            In early shells, [ (aka test) was also an external binary.

            It took THREE separate processes because:

            Unix lacked built-in arithmetic.

            The shell couldn't do math.

            Even conditionals ([) were external.

            Everything was glued together with fragile text and subprocesses.

            All of this just to evaluate a single arithmetic expression by ping-ponging in and out of user and kernel space so many times -- despite the CPU being able to do it in a single cycle.

            That’s exactly the kind of historical inefficiency the "Unix Philosophy" retroactively romanticizes.

            • op00to8 days ago
              > The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.

              This gave me a big laugh, I love the UNIX-haters Handbook despite loving UNIXy systems. Thank you for decades of enjoyment and learning, especially in my late-90s impressionable youth.

            • chubot8 days ago
              I'm aware it used to be that way, but it's long been fixed

              It's fine to hate Unix, but you should update your examples :)

              • pjmlp8 days ago
                It has been fixed on Linux distributions, which aren't UNIX.

                And even that, only bash, because I wouldn't consider the state of X Windows and Wayland fixed in any way, shape, or form.

                • jillyboel8 days ago
                  UNIX is dead, no one cares anymore. It's just Linux now. Your examples and complaints are both outdated and not in good faith.

                  For all the weirdos smashing that downvote button: How about you name me some UNIX distros you have run in the past year? Other than Linux, OpenBSD (~0.1% market share btw) and ostensibly MacOS (which we all know has dropped any pretense of caring to be UNIX-like many years ago), that is.

                  • int_19h7 days ago
                    macOS is literally certified Unix.
                    • jillyboel7 days ago
                      Sure, but it's not actually. Everyone knows that everything you do for macos needs macos specific exceptions.
                      • DonHopkins6 days ago
                        Obviously and obliviously spoken by someone who never used a Real Unix System back in the day that EVERY Unix system needed MANY specific exceptions.
                        • jillyboel6 days ago
                          We're talking about the current day
                          • DonHopkins6 days ago
                            You have no idea what we're talking about. Are you not even old enough to have seen Jurassic Park?

                            https://www.youtube.com/watch?v=JOeY07qKU9c

                            macOS is absolutely Unix, and a lot more like mainstream Unix than many of the other vastly different Unix systems of the past and present, so exactly when did the definition of Unix suddenly tighten up so much that it somehow excludes macOS? And how does your arbitrary gatekeeping and delusional denial of the ubiquity and popularity of macOS, and ignorance of the Unix 03 certification, the embedded, real time, and automotive space, and many other Unix operating systems you've never heard of or used, suddenly change the actual definition of Unix that the rest of the world uses?

                            Have you ever even attended or presented at a Usenix conference? Or worked for a company like UniPress who ports cross platform software to many extremely different Unix systems? Maybe then you'd be more qualified to singlehandedly change the definition of the word, and erase Unix 03 certification from existence, and shut down all the computers and devices running it, but you're not. Who do you think you are, one of Musk's DOGE script kiddies? Because you sound as overconfident and factually incorrect as one.

                            https://en.wikipedia.org/wiki/No_true_Scotsman

                            >The "no true Scotsman" fallacy is committed when the arguer satisfies the following conditions:

                            >1) not publicly retreating from the initial, falsified a posteriori assertion: CHECK

                            >2) offering a modified assertion that definitionally excludes a targeted unwanted counterexample: DOUBLE CHECK

                            >3) using rhetoric to signal the modification: TRIPLE CHECK

                            macOS, AIX, HP-UX, Solaris (still technically certified), Inspur K-UX, EulerOS, etc.

                            POSIX-compliant and Unix-alike OSes (e.g., FreeBSD, QNX, etc.) are very active in many common domains (networking, firewalls, embedded, industrial).

                            Mission-critical infrastructure, telco, financial systems, military/spacecraft, automotive, and embedded still widely use non-Linux Unix or Unix-like systems.

                            QNX in cars, AIX in banks, Illumos in storage, RTEMS in space systems.

                            • jillyboel5 days ago
                              keep those ad hominems coming kiddo

                              have fun pretending anything you said still matters in 2025

                              • DonHopkins4 days ago
                                You have no clue what you're talking about, you're completely incapable and afraid to respond to any of my points, and you've been just making shit up and throwing around random buzzwords you don't understand for quite some time now, incoherently unable to complete a sentence, like you're on ketamine. Nobody's falling for any of it. All you've done is make ad hominem attacks, no true scotsman defenses, move the goalposts, then hypocritically accuse other people of doing exactly what you just did: textbook psychological projection. Every single leaf of this argument is you unable to take the L, counter any of the valid arguments other people have made, and implicitly admitting defeat that you can't defend anything you said or counter anything anyone else has.

                                macOS is certified Unix, widely used and extremely popular, and there's absolutely nothing you can do or say that will change that fact, and everyone knows it.

                      • int_19h7 days ago
                        Yes, because apps are written for Linux first, and Linux isn't Unix.
                        • jillyboel6 days ago
                          sem_init would like a word
                          • int_19h5 days ago
                            It doesn't get one, on account of not being a mandatory part of SUS.
                  • pjmlp7 days ago
                    AIX and Solaris, Oxide uses Illumos, a Solaris fork.

                    Orbis OS on the PlayStation.

                    On embedded space, with various levels of POSIX compliancy without being Linux distributions, QNX, vxWorks, INTEGRITY, NuttX, FreeRTOS, Zephyr, RTEMS.

                    • jillyboel7 days ago
                      Thanks for proving my point
              • DonHopkins8 days ago
                I'll update my examples when your examples of how it's been fixed don't use the same arcane syntax and semantics as the 48 year old Bourne shell. That's the whole point, which you're still missing.

                > $ bash -c '[ $((1 + 1)) = 2 ]; echo $?'

                Not even Perl uses that much arcane punctuation to test if 1 + 1 = 2. As if [] isn't enough, you've got to throw in two more levels of (()), plus enough grawlix profanity for a Popeye comic strip. And people complain Lisp has too many parens. Sheez.

                https://en.wikipedia.org/wiki/Grawlix

                There's even an emoji for Unix shell syntax:

                https://www.compart.com/en/unicode/U+1F92C

            • wazoox8 days ago
              I love "the Unix Haters Handbook", just as I love "Worse is Better", but this ship has sailed 30 years ago as you mentioned. Your "old man yelling at clouds" rant reminds me of Bjarne Stroustrup's quip, "there are two type of languages, those everyone complains about and those nobody uses". I mean run your nice, coherent, logical LISP machine or Plan9 system of whatever is that you prefer, but let us enjoy our imperfect tools and their philosophy :)
              • DonHopkins8 days ago
                The Unix philosophy really comes down to: "I have a hammer, and everything is a nail."

                ESR's claptrap book The Art of Unix Programming turns Unix into philosophy-as-dogma, where flaws are reframed as virtues. His book romanticizes history and ignores inconvenient truths. He's a self-appointed and self-aggrandizing PR spokesperson, not a designer, and definitely not a hacker, and he overstates and over-idealizes the Unix way, as well as his own skills and contributions. Plus he's an insufferable unrepentant racist bigot.

                Don't let historical accident become sacred design. Don’t confuse an ancient workaround with elegant philosophy. We can, and should, do better.

                Philosophies need scrutiny, not reverence.

                Tools should evolve, not stagnate.

                And sometimes, yelling at clouds stirs the winds of change.

                https://en.wikipedia.org/wiki/Unix_philosophy#Criticism

                >In a 1981 article entitled "The truth about Unix: The user interface is horrid" published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering, he focused on how end-users comprehend and form a personal cognitive model of systems—or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy.

                Donald A. Norman: The truth about Unix: The user interface is horrid:

                http://www.ceri.memphis.edu/people/smalley/ESCI7205_misc_fil...

                >In the podcast On the Metal, game developer Jonathan Blow criticised UNIX philosophy as being outdated. He argued that tying together modular tools results in very inefficient programs. He says that UNIX philosophy suffers from similar problems to microservices: without overall supervision, big architectures end up ineffective and inefficient.

                On the Metal: Jonathan Blow:

                https://archive.org/details/on-the-metal-jonathan-blow

                >Well, the Unix philosophy for example it has been inherited by Windows to some degree even though it's a different operating system, right? The Unix philosophy of you have all these small programs that you put together in two like Waves, I think is wrong. It's wrong for today and it was also picked up by Plan Nine as well and so -

                >It's micro services, micro services are an expression of Unix philosophy, so the Unix philosophy, I've got a complicated relationship with Unix philosophy. Jess, I imagine you do too, where it's like, I love it, I love a pipeline, I love it when I want to do something that is ad hoc, that is not designed to be permanent because it allows me- and you were getting inside this earlier about Rust for video games and why maybe it's not a fit in terms of that ability to prototype quickly, Unix philosophy great for ad hoc prototyping.

                >[...] All this Unix stuff, it's the sort of the same thing, except instead of libraries or crates, you just have programs, and then you have like your other program that calls out to the other programs and pipes them around, which is, as far from strongly typed as you can get. It’s like your data coming in a stream on a pipe. Other things about Unix that seemed cool, well, in the last point there is just to say- we've got two levels of redundancy that are doing the same thing. Why? Get rid of that. Do that do the one that works and then if you want a looser version of that, maybe you can have a version of a language that just doesn't type check and use that for your crappy spell. There it is.

                >[...] It went too far. That's levels of redundancy that where one of the levels is not very sound, but adds a great deal of complexity. Maybe we should put those together. Another thing about Unix that like- this is maybe getting more picky but one of the cool philosophical things was like, file descriptors, hey, this thing could be a file on disk or I could be talking over the network, isn't it so totally badass, that those are both the same thing? In a nerd kind of way, like, sure, that's great but actually, when I'm writing software, I need to know whether I'm talking over the network or to a file. I'm going to do very different things in both of those cases. I would actually like them to be different things, because I want to know what things that I could do to one that I'm not allowed to do to another, and so forth.

                >Yes, and I am of such mixed mind. Because it's like, it is a powerful abstraction when it works and when it breaks, it breaks badly.

                • skydhash8 days ago
                  No tool is perfect. The unix philosophy is a philosophy, not a dogma. It serves well in some use cases. And in the other use cases, you’re perfectly fine to put the whole domain in a single program. The hammer has been there for millennia, but once we invented the screw, we had to invent the screwdriver.
                  • int_19h7 days ago
                    The point is that Unix philosophy is mostly a retroactive justification of why things are the way they are, and not really a coherent philosophy that drove the design of those things, even though it is now often represented as such.
                    • flavio817 days ago
                      Correct. The Unix philosophy is based on hagiography.
                    • skydhash7 days ago
                      That's the point of philosophy: to give a coherent, empirical explanation on why that thing is good or bad.
                  • sbuk8 days ago
                    "The unix philosophy is a philosophy, not a dogma"

                    Shame that it is dogmatically followed by a very loud and vocal minority.

                • wazoox7 days ago
                  > And sometimes, yelling at clouds stirs the winds of change.

                  > "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." George Bernard Shaw.

                  Man, I'm with you, but I'll put my efforts elsewhere :)

                • michaelsbradley7 days ago
                  What’s that famous Bangles song?
            • flavio817 days ago
              >As one of the authors of the Unix Haters Handbook

              One day, one remote day, mankind will attain Reason and will award you a Nobel Prize for speaking out loud against retrograde computing.

              I salute you, hero, veteran.

            • exe348 days ago
              [flagged]
              • eesmith8 days ago
                Based on the account name, bio, and internal evidence you should assume this is Don Hopkins. His Wikipedia entry at https://en.wikipedia.org/wiki/Don_Hopkins includes:

                > He inspired Richard Stallman, who described him as a "very imaginative fellow", to use the term copyleft. ... He ported the SimCity computer game to several versions of Unix and developed a multi player version of SimCity for X11, did much of the core programming of The Sims, ... He is also known for having written a chapter "The X-Windows Disaster" on X Window System in the book The UNIX-HATERS Handbook.

                I hope this experience helps you realize that jumping immediately to contempt can easily backfire.

                • exe348 days ago
                  Nice, I'll put some bandaids on the stump that used to be my foot :-D
        • verandaguy8 days ago

              > TWO heavyweight processes
          
          If you're going to emphasize that it's two processes, at least make sure it's actually two processes. `[` is a shell builtin.

              > `eval` being heavy
          
          If you want a more lightweight option, `calc` is available and generally better-suited.

              > inexplicable flurry of punctuation
          
          It's very explicable. It's actually exceptionally well-documented. Shell scripting isn't syntactically easy, which is an artifact of its time plus standardization. The bourne shell dates back to 1979, and POSIX has made backwards-compatibility a priority between editions.

          In this case:

          - `[` and `]` delimit a test expression

          - `"..."` ensures that the result of an expansion is always treated as a single-token string rather than being split into multiple tokens on spaces, which is the default behaviour (and an artifact of sh and bash's basic type system)

          - `$(...)` denotes that the expression between the parens gets run in a subshell

          - `-eq` is used for numerical comparison since POSIX shells default to string comparison using the normal `=` equals sign (which is, again, a limitation of the type system and a practical compromise)
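
          Putting those pieces together, the construct under discussion boils down to something like this (a minimal POSIX-sh sketch; the exact snippet being criticized may differ):

              # command substitution runs in a subshell; its quoted output is compared numerically
              if [ "$(expr 1 + 1)" -eq 2 ]; then
                  echo "arithmetic works"
              fi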

              > even though the processor has single cycle instructions to add two numbers and test for equality
          
          I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their domain.

          If I wanted to test if one plus one equals two at a multi-terabit-per-second bandwidth I'd write a C program for it that forces AVX512 use via inline assembly, but at that point I think I'd have lost the plot a bit.

          • DonHopkins8 days ago
            I was quite clear that this is HISTORICAL baggage whose syntax and semantics we're still suffering from. I corrected it from TWO to THREE and wrote a step by step description of why it was three processes in the other comment. That's the whole point: it was originally a terrible design, but we're still stuck with the syntactic and semantic consequences even today, in the name of "backwards compatibility".

            > they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance

            Even now you're bending over backwards to make ridiculous rationalizations for the bankrupt "Unix Philosophy". And you're just making my point for me. Does the Unix Philosophy say that the shell should be designed to be slow and inefficient and syntactically byzantine on purpose, or are you just making excuses? Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.

            • mmcgaha8 days ago
              When my son was six he found a girly magazine at a friends house and was sneaking away to look at it. When my wife caught him she told him the magazine was bad and he should not be looking at it. His simple reply was "But I like it Mom."

              Even if Unix is bad, I like it.

            • verandaguy8 days ago
              I actually didn't mention the Unix philosophy once in my comment, I just explained why the shell snippet you posted is the way it is. As far as I can tell, nobody in this thread's making long-winded ideological arguments about the Unix philosophy except you.

              I think it's a perfectly reasonable assessment to think of shell scripts as a glue layer between more complex software. It does a few things well, including abstracting away stuff like pipelining software, navigating file systems, dispatching batch jobs, and exposing the same interface to scripts as you'd use to navigate a command line as a human, interactively.

                  > Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
              
              This is the opinion of the vast majority of sysadmins, devops people, and other shell-adjacent working professionals I've encountered during my career. None of them, including myself when I'm wearing a sysadmin hat, deny the shortcomings of bash and friends, but none of us have found anything as stable or ubiquitous that fits this domain remotely as well.

              I also reject the idea that faster or more full-featured alternatives lack footguns, pre-loaded or otherwise.

              - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.

              - C++ offers some improvements, but, being a near superset of C, it still has the footguns of its predecessor, to say nothing of the STL and the bloat issues caused by it.

              - Rust improves upon C++ by miles, but the borrow checker can bite you in nontrivial ways, the type system can be obtuse under some circumstances, cargo can introduce issues in the form of competing dependency versions, and build times can be very slow. Mutable global state is also, by design, difficult to work with.

              - Python offers ergonomic and speed improvements over POSIX shells in some cases, and a better type system than anything in POSIX shells, but it can't compete with most serious compiled languages for speed. It's also starting to have a serious feature bloat issue.

              Pick your poison. The reality is that all tools will suck if you use them wrong enough, and most tools are designed to serve a specific domain well. Even general-purpose programming languages like the ones I mentioned have specializations -- you can use C to build an MVC website, yes, but there are better tools out there for most real-world applications in that domain. You can write an optimizing compiler in Ruby, but if you do that, you should reevaluate what life choices led you to do that.

              Bash and co. are fine as shell languages. Their syntax is obtuse but it's everywhere, which means that it's worth learning, cause a bash script that works on one host should, within reason, work on almost any other *nix host (plus or minus things like relying on a specific host's directory structure or some such). I'd argue the biggest hurdle when learning are the difference between pure POSIX shell scripting idioms and bashisms, which are themselves very widely available, but that's a separate topic.

              • pjmlp8 days ago
                C was already limited by 1960's standards when compared to PL/I, NEWP and JOVIAL, 1970's standards when compared to Mesa and Modula-2, .....

                It got lucky riding the UNIX adoption wave, an OS that got adopted over the others thanks to having its source available for little more than the symbolic price of a tape copy, plus a book commenting on its source code. Had it been available as a commercial AT&T product at VMS, MVS, et al. price points, no one would be talking about the UNIX philosophy.

              • johnisgood8 days ago
                > - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.

                That is a feature, not a bug. Add your own bounds checks if you want them, or use Ada or other languages that add a lot of fluff (Ada has options to disable the added bounds checks, FWIW).

                I am fine with Bash too (and I use shellcheck all the time), but I try to aim to be POSIX-compliant by default. Additionally, sometimes I just end up using Perl or Lua (LuaJIT).

                • verandaguy8 days ago
                  I never said it wasn't a feature. There was a time, and there are still certain specific domains, where bit bashing the way C lets you is a big benefit to have. But bug or not, I think it's reasonable to call these limitations as far as general-purpose programming goes.

                  My argument was that C puts the onus on the user to work within those limitations. Implementing your own bounds checks, doing shared memory management, all that stuff, is extra work that you either have to do yourself or know and trust a library enough to use it, and in either case carry around the weight of having to know that nonstandard stuff.

                  • johnisgood7 days ago
                    You add bounds checks where you want them.

                    The warnings of GCC and Clang can help you spot a lot of possible bugs, too, FWIW.

            • wpm8 days ago
              We’re stuck with plenty of non-optimal stuff because of path dependency and historical baggage. So what? Propose something better. Show that the benefits of following the happy path of historical baggage don’t outweigh the outrageously “arcane” and byzantine syntax of…double quotes, brackets, dollar signs, and other symbols that pretty much every other language uses too.
          • DonHopkins8 days ago
            >I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their domain.

            DDT is a hell of a lot older than the Bourne shell, is not interpreted, does have full, efficient access to the machine instructions and operating system, and it even features a built-in PDP-10 assembler and disassembler, and lets you use inline assembly in your login file to customize it, like I described here:

            https://news.ycombinator.com/item?id=43609418

            And even the lowly Windows PowerShell is much more recent, and blows Bourne shell out of the water along so many dimensions, by being VASTLY more interoperable, powerful, usable, learnable, maintainable, efficient, and flexible, with a much better syntax, as I described here:

            https://news.ycombinator.com/item?id=43609571

            >When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self-imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.

            >It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.

        • whatnow373738 days ago
          Shell != Unix (philosophy) as I’m sure you are aware. The unix philosophy is having a shell and being able to replace it, not its particular idiosyncrasies at any moment in time.

          This is like bashing Windows for the look of its buttons.

        • eesmith8 days ago
          I realized the hype for the Unix Philosophy was overblown around 1993 when I learned Perl and almost immediately stopped using a dozen different command-line tools.
          • whatnow373738 days ago
            I realized the hype for composing $thing$s was overblown around 1993 when I learned I could just have "A Grand Unified $thing$" and almost immediately stopped using a dozen different $thing$s.

            Then, a decade or two later, I realized the Grand Unified $thing$ was itself composed, but not by me so I had no control over it. Then I thought to myself, how great would it be if we decompose this Grand Unified $thing$ into many reusable $thing$s? That way we can be optimally productive by not being dependent on the idiosyncrasies of Grand Unified $thing$.

            And so it was written and so it was done. We built many a $thing$ and the world was good, excellent even. But then one of the Ancients realized we could increase our productivity dramatically if we would compose our many $thing$s into one Grand Unified $thing$ so we wouldn't have to learn to use all these different $thing$s.

            And so it was written and so it was done. Thus goes the story of the Ancients and their initiation of the most holy of cycles.

            • eesmith8 days ago
              I don't really get the joke. Given the context and timing, I'm guessing it's a Perl6/Raku commentary, with Larry Wall as the Ancient?
              • whatnow373738 days ago
                There is a world outside of Perl. There really is.

                It's a general observation of how we are infatuated with composability, then tire of it and unify, and then learn to love it again because the unifications grow stale and weird, of which Perl is an excellent example.

                • eesmith8 days ago
                  I switched to Python in 1998, and I haven't gone back to the Unix philosophy of decomposition into small command-line tools which interoperate via text and pipes, nor the COM/DCOM/CORBA approach, nor microservices, nor even Erlang processes, so I'm really not the target audience for your joke.
          • anthk8 days ago
            Ken Thompson and the Unix folks agree with you. The point is... Perl was a solution to the bloat of the Unix (BSD/GNU) tools of its time.

            When you have a look at Plan 9 (now 9front) with rc as a shell, awk, and the power of rio/acme scripting and namespaces, along with aux/listen... Perl feels bloated, and with the same terse syntax as sh-derived shells.

            • eesmith8 days ago
              I've been using Python almost full time since 1998 so, to misquote Dijkstra, I am mentally mutilated beyond regeneration.
            • bmacho8 days ago
              Is plan9 awk different?
              • anthk7 days ago
                Not much; what makes AWK shine is the I/O in Plan 9: it's trivial to spawn sockets (literally from the command line), either plain text or encrypted. Also, rc is much simpler than Bash.
        • a-french-anon8 days ago
          I don't see what crusty implementation details have to do with a philosophy. In fact, UNIX itself is a poor implementation of the "UNIX" philosophy, which is why Plan 9 exists.

          The idea of small composable tools doing one thing and doing it well may have been mostly an ideal (and now pretty niche), but I don't think it was purely invented after the fact. Just crippled by the "worse is better".

        • pjmlp8 days ago
          The "Unix Philosophy" is some cargo cult among FOSS folks that never used commercial UNIX systems; since Xenix I haven't used any that doesn't have endless options on its man pages.
          • anthk8 days ago
            Well, if we went by your "Windows philosophy", and forgot that NT is a VMS rehash, we would still be using the crappy W9x designs with DOS crap back and forth.

            Even RISC OS seems to do better, even though it doesn't have memory protection either (I think it doesn't; I didn't try it for more than a few days).

            • pjmlp8 days ago
              Thing is, there is no "Windows philosophy" cargo cult, and I don't worship OSes nor languages; all have their pluses and minuses, use any of them when the situation calls for it, and it is a disservice to oneself to identify with technology stacks like football club memberships given at birth.
              • anthk8 days ago
                Neither am I solely a Unix user; I have RISC OS Open (Apache 2.0?) on an RPi to experiment with something beyond Unix/C.

                But Windows is too heavyweight; from 8 onwards it has been a disaster. And the NT kernel plus Explorer can be really slim (look at ReactOS, or XP, or a debloated W7).

                The problem is that Apple and MS (and Red Hat) are just selling shiny turds wasting tons of cycles to do trivial tasks.

                Worse, you can't slim down your install so that it behaves like a sane system on 1 GB of RAM.

                I can watch 720p@30FPS videos on an N270 netbook with MPV. That's something even native players for WXP can't do well enough, even with low-level DirectDraw calls.

                The post-XP Windows philosophy among Red Hat and Apple is: bloat and crap up our OSes with unnecessary services and XML cruft (and interpreted languages such as JS and C#) for the desktop until hardware vendors idolize us, so the average user has to buy new hardware to do the same tasks over and over.

                Security? Why the fuck does Gnome 3 need JS in the first place? Where's Vala, which could shine here: Mutter could get a big boost and memory leaks could be a thing of the past.

                • neonsunset8 days ago
                  C# is a compiled language at all levels (source into bytecode, then bytecode into machine code, either JIT or AOT). V8 has JIT compilation for hot paths. As a result, JS is significantly faster than interpreted languages like Python, Ruby and Erlang/Elixir/Gleam.
                  • anthk7 days ago
                    C and Vala curbstomp JS in speed.
                    • neonsunset7 days ago
                      C is an unserious and unusable language when alternatives are available to you. Vala is a toy to write Gnome applications.
                      • anthk7 days ago
                        No one under GTK/Gnome uses plain C; they use GLib as a wrapper. Plain ANSI C might be 'unusable' for modern UI needs, but, as I said, just have a look at WebKitGTK4. GLib everywhere, and WebKitSettings are a breeze to set up.

                        Vala is a toy because Miguel de Icaza went full MS with C# since Ximian. If Vala had more support from Red Hat, Gnome 4 could support Vala as its main language. JS? Lua and LuaJIT would be a better choice for Mutter scripting. If you have a look at how Luakit and Vimb behave, the difference is almost nil.

                • skydhash8 days ago
                  While I'm not bothered by Gnome UI design choices, I was surprised by the choice of JS for the implementation.
            • DonHopkins8 days ago
              Even an operating system as brain damaged as Windows still has PowerShell, which lets you easily and efficiently perform all kinds of operations, dynamically link in libraries ("cmdlets") and call them directly, call functions with typed non-string parameters, pipe live OBJECTS between code running in the SAME address space without copying and context switching and serializing and piping and deserializing everything as text.

              PowerShell even has a hosting api that lets you embed it inside other applications -- try doing that with bash. At least you can do that with python!

              When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self-imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.

              It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.

              https://en.wikipedia.org/wiki/PowerShell

              >Pipeline

              >PowerShell implements the concept of a pipeline, which enables piping the output of one cmdlet to another cmdlet as input. As with Unix pipelines, PowerShell pipelines can construct complex commands, using the | operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages execute within the PowerShell runtime rather than as a set of processes coordinated by the operating system. Additionally, structured .NET objects, rather than byte streams, are passed from one stage to the next. Using objects and executing stages within the PowerShell runtime eliminates the need to serialize data structures, or to extract them by explicitly parsing text output.[47] An object can also encapsulate certain functions that work on the contained data, which become available to the recipient command for use.[48][49] For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to the Out-Default cmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.[50][51]

              >Because all PowerShell objects are .NET objects, they share a .ToString() method, which retrieves the text representation of the data in an object. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintain backward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.[52][53][54]

              > Hosting

              >One can also use PowerShell embedded in a management application, which uses the PowerShell runtime to implement the management functionality. For this, PowerShell provides a managed hosting API. Via the APIs, the application can instantiate a runspace (one instantiation of the PowerShell runtime), which runs in the application's process and is exposed as a Runspace object.[12] The state of the runspace is encased in a SessionState object. When the runspace is created, the Windows PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates the SessionState object accordingly. The Runspace then must be opened for either synchronous processing or asynchronous processing. After that it can be used to execute commands. [...]

              • anthk8 days ago
                9front is the truest expression of the Unix philosophy since Unix v6, and it does it much better. Proper devices and network connections as files, plus namespaces and aux/listen and friends. It makes AWK better than Perl, and rc is much simpler, without the bullshit of sh. You only have functions, not aliases, and the syntax is much saner.

                On PowerShell/C#: TCL/Tk might not be as powerful, but it works under Windows XP with IronTCL, unlike MS's own newest C# implementations (>= 4.5). Double irony there. TCL can help to write some useful software, such as a Gopher/Gemini client with embedded TLS support. And the resource usage will still be far lower.

                On embedding, TCL wins here, hands down. It's everywhere.

                And JimTCL can run under a potato.

                • pjmlp8 days ago
                  If we forget that the authors moved on to Inferno and Limbo, while redoing all the Plan 9 decisions they had to roll back, like Alef as the main userspace language.
                  • anthk8 days ago
                    9front lives today, and Inferno can run on top of it perfectly well.

                    Golang is almost a byproduct of Limbo, and Go owes a lot of its legacy to Plan 9/9front's C compilers too.

              • anthk8 days ago
                >Because all PowerShell objects are .NET objects, they share a .ToString() method,

                Congrats, PSH, you did what TCL did ~30 years ago, but worse. With TCL everything is a string, even numbers. Yes, it sucks that you need to [eval] math operations, but well, the advantages outnumber the quirks.

                If you come from Lisp, you will be right at home. Use the l* functions as you would with Lisp lists, but without juggling car, cdr, caar, cddr and so on.

                And there's Expect which is utterly underrated.

                Yes, I hate upvar sometimes, but with namespaces you can almost avoid that issue.

                On TCL being used for serious stuff... if people have been using Excel with millions of rows for COVID patients and census data, TCL/Tk with SQLite would outperform those by a huge margin.

                • DonHopkins7 days ago
                  PowerShell is the opposite of TCL and bash. You pass objects directly, NOT strings. I have no idea what you're trying to say. And yes I've written and shipped and open sourced shitloads of TCL/Tk.

                  https://github.com/SimHacker/micropolis/blob/master/micropol...

                  • anthk7 days ago
                    Objects are not my thing; they are just good for Inform 6, since a Z-machine game maps really well onto OOP: a text adventure based on events tied to attributes is an ideal fit.

                    With strings and lists the complexity goes away.

                    • DonHopkins6 days ago
                      Now you're making even less sense than before, with incoherent grammar and random buzzwords, which is an impressive leap. I don't think "your thing", whatever that is, has any bearing on this conversation. Are you an LLM?

                      I played the original Zork on MIT-DM, and read the original source code written in MDL, which is essentially Lisp with angled brackets and data types, and it's neither object- nor text-oriented, so I have no idea what point you're trying to make about its descendant ZIL, because it makes no sense and has no bearing on this discussion.

                      https://news.ycombinator.com/item?id=31846457

                      https://news.ycombinator.com/item?id=42771971

                      You're arguing with a well-vetted, factually correct, evidence-based Wikipedia page, so if you disagree, go try to edit it, and see how long your hallucinations and vandalism last without citations to reality or coherent sentences.

                      At least my code doesn't shit its pants when you pass it a filename with a space in it.

                      • anthk5 days ago
                        I am not an LLM. I am talking about Inform 6, an OOP language born in the '90s, with which people created games far more powerful than the Infocom ones. Inform 6 maps pretty well to MDL. Both compile to Z-machine games, but Inform 6 is far easier.

                        On games, have a look at Anchorhead, Spider and Web, Curses, Jigsaw... in these kinds of games OOP makes a ton of sense.

                        https://www.ifwiki.org/Inform_6

                        Get the Inform Beginner's Guide and you'll see an approach far easier than MDL and ZIL.

                        • DonHopkins4 days ago
                          Wow it's really sad that you're not an LLM. That would have been a great excuse. Too bad you've been superseded and displaced by computers. My condolences.
      • setopt8 days ago
        > Related to that, the Unix philosophy of simple tools that do one job and do it well, also applies here a bit. More typical workflow would be a utility to tarball something, then another utility to gzip it, then finally another to encrypt it. Leading to file extensions like .tar.gz.pgp, all from piping commands together.

        I do this for my own files, but half of the time I zip something, it’s to send it to a Windows user, in which case zip is king.

        • issafram7 days ago
          FYI, the latest version of Windows 11 supports natively opening 7-Zip files.
      • aragilar8 days ago
        The issue in this case is upstream is dead, so there are random patches. Same thing happened to screen for a bit.
      • tecleandor8 days ago
        Was there any problem with 7z some years ago? I feel like I've been actively avoiding it because I have the feeling I read something bad about it, but I can't remember what. I could have mixed it up with something else; that sometimes happens to me.
        • oblio8 days ago
          Hard to say for sure, did SourceForge put malware in their installers many millennia ago?
          • tecleandor8 days ago
            Ah, I think I might remember a couple of RCEs they had... [0]

            So for Windows use I then started to recommend a fork called NanaZip [1] that enabled some Windows security features (CFG, CET, Package Integrity Check...) and added support for additional formats that other forks already had [2] [3].

            ---

              0: https://en.wikipedia.org/wiki/7-Zip#Security
              1: https://github.com/M2Team/NanaZip
              2: https://github.com/mcmilk/7-Zip-zstd
              3: https://github.com/myfreeer/7z-build-nsis
            • jcotton427 days ago
              NanaZip also works with the Windows 11 File Explorer context menu changes.
      • pxc8 days ago
        I came here to make the same recommendation. Just use p7zip for everything; no need to learn a bunch of different compression tools.
        • setopt8 days ago
          If you use `atool`, there is no need to use different tools either – it wraps all the different compression tools behind a single interface (`apack`, `aunpack`, `als`) and chooses the right one based on file extensions.
          • pxc8 days ago
            I'll check this out. I actually don't love p7zip's CLI.
          • Sander_Marechal8 days ago
            There's also `unp`, the universal unpacker.
    • cogman108 days ago
      Debian and Ubuntu tend to want to lock the versions of system tools to the version of the OS.

      Debian tends to have long release cycles, but is very stable. Everything will work perfectly together on stable (in fact, testing tends to be almost as stable as other OSes' releases).

      Ubuntu is basically Debian with "but what if we released more frequently?".

      If you want the latest tools, then you'll have to settle for a less stable OS (sort of). Nix and Arch come to mind. Neither are super user friendly.

      If you want stable and the latest tools, Gentoo is the way to go. However, it's even more intimidating than Arch.

      If you want stability and simplicity, then the other way to go is sacrificing disk space. Docker/podman, flatpak, appcontainers, and snap are all contenders in this field.

      Windows and Mac both have the same problem. Windows solved this by basically just shipping old versions of libraries and dynamically linking them in based on what app is running.

      • chrismorgan8 days ago
        I find it funny calling Arch “less stable”, because I’m inclined to find it more stable, for my purposes, skills and attitudes.

        I’ve administered at least one each of: Ubuntu server (set up by another; the rest were by me), Ubuntu desktop at least ten years ago, Arch desktop, Arch server.

        The Arch machines get very occasional breakages, generally either very obvious, or signposted well. I did have real trouble once, but that was connected with cutting corners while updating a laptop that had been switched off for two years. (I’ve updated by more than a year at least two other times, with no problems beyond having to update the keyring package manually before doing the rest. The specific corners I cut this one time led to the post-upgrade hooks not running, and I simply forgot to trigger them manually in order to redo the initcpio image, because I was in a hurry. Due to boot process changes, maybe it was zstd stuff, can’t remember, it wouldn’t boot until I fixed it via booting from a USB drive and chrooting into it and running the hooks.)

        Now Ubuntu… within a distro release it’s no trouble, except that you’re more likely to need to add external package sources, which will cause trouble later. I feel like Ubuntu release upgrades have caused a lot more pain than Arch ever did. Partly that may be due to differences in the sorts of packages that are installed on the machines, and partly it may be due to having used third-party repositories and/or PPAs, but there were reasons why those things had to be added, whether because software or OS were too old or too new, and none of them would have been needed under Arch (maybe a few AUR packages, but ones where there would have been no trouble). You could say that I saw more trouble from Ubuntu because I was using it wrong, but… it wouldn’t have been suitable without so “using it wrong”.

      • odo12428 days ago
        Fedora strikes a pretty good tradeoff on the “is user friendly” and “has latest tools regardless of system version” balance, I would say.
        • rurban8 days ago
          Exactly. Much more stable and much more up to date than Debian derivatives. But far fewer packages, too.
      • thayne8 days ago
        "stable" as used to describe debian (and Ubuntu) means "does not change", which includes adding new functionality.
        • damentz8 days ago
          Correct, another way of looking at it is from a programming angle. If Debian fixes a bug that breaks your tool, then Debian is unstable. Therefore, to maintain stability, Debian must not fix bugs unless they threaten security.

          The term "stable" is the most polluted term in Linux; it's not something to be proud of. Similar to how high uptime was once considered a virtue: now it just means your system has probably been pwned at some point.

      • jjayj8 days ago
        The other option here is "pick an OS and when necessary install newer packages from source."

        We've been doing this for a long time at my current workplace (for dev containers) and haven't run into any problems.

    • tame39028 days ago
      unzip is a special case: upstream development has basically stopped. The last release was in 2009[0] (that's version 6.0). Since then multiple issues have been discovered and it lacks some features. So everybody patches the hell out of that release[1]. The end result is that you have very different executables with the same version number.

      [0]: https://infozip.sourceforge.net/UnZip.html

      [1]: here the build recipe from Arch, where you can see the number of patches that are applied: https://gitlab.archlinux.org/archlinux/packaging/packages/un...

      • blueflow8 days ago
        I maintain a huge number of git mirrors of git repositories and I have some overview of activity there. Many open source projects have stopped activity and/or do not make any new releases. Like syslinux, which seems to be in a similar situation to unzip. And some projects like Quagga went completely AWOL and don't even have a functional git remote.

        So unzip is not really that special; it's a more general problem of waning interest.

        • tame39028 days ago
          I wasn't trying to imply that unzip is the only one.

          But the way I learned that unzip is unmaintained was pretty horrible. I found an old zip file I created ages ago on Windows. Extracting it on Arch caused no problem. But on FreeBSD, filenames containing non-ASCII characters were not decoded correctly. Well, they probably use different projects for unzip, I thought; this happens. Wrong: they use the same upstream, but each decided to apply different patches to add features. And some of the patches address nasty bugs.

          For something as basic as unzip, my experience as a user is that when a tool has this many issues, it either gets removed completely or it gets forked. The most reliable way I found to unzip a zip archive consists of a few lines of Python.
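
          Something along these lines, for instance (a sketch using Python's standard zipfile module, driven from the shell; filename handling still depends on whether the archive sets the UTF-8 flag):

              # extract archive.zip into the current directory via Python's zipfile module
              python3 -c 'import sys, zipfile; zipfile.ZipFile(sys.argv[1]).extractall()' archive.zip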

          • blueflow8 days ago
            I think you got unlucky with unzip because you noticed. Distributions heavily patching software is the norm rather than the exception.

            As an example, look how Debian patches the Linux kernel: https://udd.debian.org/patches.cgi?src=linux&version=6.12.21... . And the kernel is a very active project.

            Funnily, this makes recording the version number for an SBOM pretty useless.

            • tame39028 days ago
              I agree completely. I also know that distros patch packages.

              But for unzip the situation is particularly bad because it has no maintainer. Normally, you would raise feature requests for basic functionality upstream and once added, the maintainer would cut a new release. So software with the same version number generally, but not always, behaves similarly across distros.

              But for unzip, because upstream is unmaintained, distro maintainers started to add features while keeping the version number. So in the end you end up with different behavior for what looks like the same release.

        • erinnh8 days ago
          Quagga got forked though and is actively being developed.

          FRRouting is the fork.

    • soraminazuki8 days ago
      Distros are independent projects, so that's to be expected IMO. Though some level of interoperability is nice, diverse options being available is good.

      That said, most distros have bsdtar in their repositories so you might want to use that instead. The package might be called libarchive depending on the distro. It can extract pretty much any format with a simple `bsdtar xf path/to/file`. AES is also supported for zips.

      macOS includes it by default and Windows too IIRC, in case you're forced to become a paying Microsoft product^Wuser.

    • __MatrixMan__8 days ago
      It is a mess. My suggestion is to just rely on the built-in stuff as little as possible.

      Everything I do gets a git repo and a flake.nix, and direnv activates the environment declared in the flake when I cd to that dir. If I write a script that uses grep, I add the script to the repo and I add pkgs.gnugrep to the flake.nix (also part of the repo).

      This way, it's the declared version that gets used, not the system version. Later, when I hop from MacOS to Linux, or vice versa, or to WSL, the flake declares the same version of grep, so the script calls the same version of grep, again avoiding whatever the system has lying around.
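
      The glue is small; a rough sketch of the direnv side (assuming nix-direnv is installed, which provides the `use flake` directive):

          # .envrc at the repo root; direnv then loads the flake's dev shell on cd
          echo "use flake" > .envrc
          direnv allow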

      It's a flow that I rather like, although many would describe nix as unfriendly to beginners, so I'm reluctant to outright recommend it. The important part is: declare your dependencies somehow and use only declared dependencies.

      Nix is one way to do that, but there's also docker, or you could stick with a particular language ecosystem. python, nodejs, go, rust... they all have ways to bundle and invoke dependencies so you don't have to rely on the system being a certain way and be surprised when it isn't.

      A nice side effect of doing this is that when you update your dependencies to newer versions, that ends up in a commit, so if everything breaks you can just check out the old commit and use that instead. And these repos, they don't have to be for software projects--they can just be for "all the tools I need when I'm doing XYZ". I have one for a patio I'm building.

      • Spivak7 days ago
        This is the way, system packages are for the system. Everything you need lives in .local or in your case /nix. The amount of tooling headaches I've had to deal with is pretty close to zero now that I don't depend on a platform that by design is shifting sand.
    • NoboruWataya8 days ago
      I use Arch on my personal laptop daily but have Debian installed on a VPS, and this is one aspect of Debian that bugs me (though I totally understand why they do it). I am so used to having the latest version of everything available to me very quickly on Arch, I am quite commonly stung when I try to do something on my VPS only to find that the tools in the Debian repos are a few versions behind and don't yet have the features I have been happily using on Arch. It's particularly frustrating when I have been working on a project on my personal laptop and then try to deploy it on my VPS only to find that all of the dependencies are several versions behind and don't work.

      Again, not a criticism of Debian, just a friction I noticed moving between a "bleeding edge" and more stable distro regularly.

      • everfrustrated8 days ago
        If you want the latest version of everything, you are looking for Debian Unstable.
    • ciupicri8 days ago
      Some Linux distributions patch their unzip package, e.g. Fedora https://src.fedoraproject.org/rpms/unzip/tree/f42
    • procaryote8 days ago
      Compressing and encrypting as separate operations would bypass this issue.

      A symmetrically encrypted foo.zip.gpg or foo.tgz.gpg would work in a lot more places than a bleeding edge zip version. Also you get better tested and audited encryption code
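
      For example, a minimal sketch (gpg's `-c`/`--symmetric` prompts for a passphrase; paths are illustrative):

          # compress, then symmetrically encrypt; reverse the pipeline to restore
          tar -czf - ~/important | gpg --symmetric --output backup.tgz.gpg
          gpg --decrypt backup.tgz.gpg | tar -xzf -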

    • wmf8 days ago
      If I want to mess around with something without endangering the system I put it in ~/bin. You could compile unzip from source and rename it something like ~/bin/newunzip. If it doesn't work just delete it.
    • MisterTea8 days ago
      > As a relatively new Linux user,

      You need to understand that you are now in Unix land, which means you compose this pipeline using programs that perform each step of the process. So when creating an encrypted backup you would use: `tar -c /home/foo | gzip | aescrypt >backup.tgz.aes` or something to that effect. This lets you use whatever compression program in the pipe.

      Breaking this composability leads to the kind of problem you are complaining about. It also removes the ability to split this pipeline across machines, which would let you distribute the compute cost.
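
      For instance, a hedged sketch of pushing the encryption step to another host (reusing the command names from the example above; exact flags vary by tool):

          # compress locally, encrypt and store on a remote machine
          tar -c /home/foo | gzip | ssh backuphost 'aescrypt >backup.tgz.aes'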

    • dazzawazza8 days ago
      One of the many reasons I switched to FreeBSD over 20 years ago. Kernel and userspace developed together. No surprises, just consistent productivity.
    • lukan8 days ago
      No idea, I feel your confusion. I just use 7z and it has handled my zip needs so far (there are always a million ways to do anything on Linux).

      But I assume, you should be able to update unzip without issues. And if no critical service depends on it, just update and see.

      • DonHopkins8 days ago
        [flagged]
        • jdwithit8 days ago
          Settle down, Beavis. Not everyone is running Linux in a 24/7 production environment. I hear some people even fart around with it at home for fun.

          I've been in pager rotations for most of the last 20 years so I'm sympathetic to that. If some genius symlinked unzip to 7z with no testing in production and caused an incident I'd be real mad. But uh I don't think that's remotely what OP was suggesting here.

          • lukan8 days ago
            Indeed. Which is why I said:

            "if no critical service depends on it, just update and see"

            It did not sound like OP was running a hospital infrastructure. And I never did either, nor intend to. I try to have a linux that does what I want on my computer. 7z was helpful to me, so I shared it, that's it.

        • eli8 days ago
          Why the hostile comments?
          • donnachangstein8 days ago
            This guy has been ranting and raving here for longer than I can remember, or than I've thought to have an account, so I assume he is HN royalty and that's why it's tolerated. That said, it doesn't really bother me if I understand the circumstances.
            • sbuk8 days ago
              “This guy” is Don Hopkins who, amongst a long list of achievements in the field of computer science specializing in human-computer interaction and computer graphics, is one of the authors of the UNIX-Haters Handbook - specifically the extremely prescient Chapter 7, "The X-Windows Disaster", published when Linux was in its infancy. You don't have to like what he is saying, but he has decades of experience and research behind what he says. Know where your field came from. The longer you can look back, the farther you can look forward - sadly something a vocal minority of the community refuses to do.

              https://www.donhopkins.com/home/resume.html

              • eli8 days ago
                Neat. But to be clear: my complaint was the rude and dismissive attitude, not the substance.
                • sbuk8 days ago
                  When you've been saying the same thing for the last 40 years, and seeing the same responses, more often than not made by people who don't understand where this all comes from, and which do not really counter what you're saying, you'd be rude and dismissive too - especially with the dogma that surrounds the "UNIX philosophy", which, in case you aren't aware, wasn't actually put forward by anyone heavily involved with UNIX development. Some empathy with the protagonist would help.

                  But I get your point...

                  • lukan8 days ago
                    I once got a mail back from dang, about why my account got restricted. (Limited posts per day)

                    "I made plenty of good comments, but this one thread counts as flame war and he has to go by the worst, not by the best comments"

                    I thought about replying with some Don Hopkins comments that were way worse than what was here, and he is clearly not restricted. But I didn't, as I don't do kindergarten; I just took some time off from HN.

                    But the standards definitely aren't applied equally.

                    So I respect Don Hopkins for his knowledge and experience, but not his style of communication sometimes.

    • aragilar8 days ago
      unzip 6.0 is from 2009 (see the manpage or https://infozip.sourceforge.net/UnZip.html). I suspect there are patches floating around (so YMMV as to which patches are applied), or someone has aliased/symlinked some other implementation as "unzip" (like Apple has done here, though unlike unzip, rsync is maintained).

      Try using atool (which wraps the various options for different archives and should hopefully fix your problem) or the tools provided by https://libzip.org/documentation/.

      Practically, what you're hitting is the problem when upstream is dead, and there is no coordination between different distros to centrally take over maintenance.

    • sneak8 days ago
      I feel there is an opportunity for a modern Go or Rust utility that does compression/decompression in a zillion different formats with a subcommand interface: “z gzip -d” or “z zstd -9” or “z zip -d” or “z cpio -d” or similar.

      Maybe I’ll write it.
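
      Even a thin shell dispatcher sketches the interface (the `z` name and mappings here are hypothetical, not an existing tool; a real implementation would also normalize flags across formats):

          #!/bin/sh
          # z: one front-end that routes a format subcommand to the underlying tool
          cmd="$1"; shift
          case "$cmd" in
              gzip) exec gzip "$@" ;;
              zstd) exec zstd "$@" ;;
              zip)  exec zip "$@" ;;
              cpio) exec cpio "$@" ;;
              *)    echo "z: unsupported format: $cmd" >&2; exit 1 ;;
          esac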

    • neckro238 days ago
      It is even worse on MacOS, because Apple bundles the BSD versions of common Unix utilities instead of the (generally more featureful) GNU versions. So good luck writing a Bash script that works on both MacOS and Linux...
      • pjmlp8 days ago
        Do like in the good old days of portable UNIX scripts: write a POSIX sh script instead, or use Perl or Python.
      • everfrustrated8 days ago
        The first thing anyone doing dev on MacOS should do is install brew. The second is to use brew to install the coreutils and bash packages to get a Linux-compatible GNU environment.
      • SoftTalker7 days ago
        Also because the vast majority of MacOS users never open a terminal. Unix utilities are something they don't even know they have, and they don't care what versions they are.

        Anyone using MacOS as a unix platform is installing updated tooling with brew or similar.

      • pathartl8 days ago
        Just use PowerShell!

        Half sarcastic with that one

        • papichulo20238 days ago
          I used to think like this, but PS is kinda slow. Nowadays Bun seems to be the best option, IMO.
      • petre8 days ago
        Just use zsh on MacOS.
        • ElectricalUnion8 days ago
          Using zsh will not change the fact that the other, non-shell POSIX utilities still lack the useful GNU extensions.

          Also, zsh is not installed by default on most distros.

          • petre8 days ago
            MacOS userspace was forked from FreeBSD; that's why it bundles non-GNU extensions. Also, the FreeBSD userspace has since then incorporated GNUisms.

            Why they went with Bash 2 as the default shell is beyond me. I always switched to and used Zsh, which had a more recent version. Now I'm also using it on Linux and FreeBSD, because I want a consistent shell.

            • Squossifrage8 days ago
              The macOS userspace was never forked from FreeBSD or any other BSD. If anything, it was forked from NeXTSTEP. In actual practice, it is a collection of individual components taken from a variety of sources. When development of Mac OS X began in 1999, most command-line tools and a large part of libc were derived from either NetBSD or OpenBSD via NeXTSTEP. Over the years, there has been a shift toward FreeBSD. Apple maintain a collection of GitHub repositories of their Open Source components where you can see the evolution from one release to the next. Most of them have XML metadata indicating the origin of each individual component.
            • wkat42428 days ago
              Apple no longer ships bash 2. They moved to zsh also a few years ago.

              The reason was the same as here: bash moved to GPL v3.

              • Squossifrage6 days ago
                Incorrect. They default to zsh for interactive use, but their /bin/sh is bash 2. They also ship a copy of dash, but it's not sufficiently POSIX-conforming to replace bash.
                • wkat42424 days ago
                  Huh weird, I remember many years ago getting a notification in the terminal that bash would be deprecated so I assumed that would have happened by now. I no longer use macs so I wasn't up to date, sorry.
          • everfrustrated8 days ago
            I would argue that POSIX is long dead. The real standard is Linux (GNU) compatibility and has been for a while now.
            • bentley8 days ago
              As an OpenBSD developer who frequently fixes portability issues in external software, this doesn’t match my experience. Upstream developers are typically happy to merge patches to improve POSIX compliance; often the result is simpler than their existing kludges attempting to support desired platforms like MacOS, Alpine/Musl, Android, Dash-as-sh, and various BSDs. It turns out a lot of people find value in relying on an agreed‐upon behavior that’s explicitly documented, rather than “this seems to work at the moment on the two or three distros I’ve tested.”
    • mistrial98 days ago
      Forthright point of view, and more power to that. However, in this case the weight falls on one small bit there: the same version number. There is information missing somewhere, somehow.
  • duskwuff8 days ago
    On one hand, it's a little annoying that openrsync doesn't support some features that rsync does.

    On the other hand, it's great that there are multiple independent implementations of rsync now. It means that it's actually being treated as a protocol, not just a piece of software.

    • varenc8 days ago
      I'm excited about this too. It becoming more like a protocol makes me optimistic we'll see binary diff API endpoints based on the rsync algorithm.

      Fun fact: Dropbox internally used rsync binary diffs to quickly upload small changes to large files. I assume they still do. But their public API endpoints don't offer this, and a small change to a large file means the whole file must be re-uploaded.

      • zmj8 days ago
        I implemented rsync's binary diff/patch in .NET several years ago: https://github.com/zmj/rsync-delta

        It's a decent protocol, but it has shortcomings. I'd expect most future use cases for that kind of thing to reach for a content-defined chunking algorithm tuned towards their common file formats and sizes.

      • andrewflnr8 days ago
        > binary diff API points based on the rsync algorithm

        Now that's an idea I never considered. Nice.

        • nine_k8 days ago
          Now consider applying it to git. How about clean semantic diffs to your .xlsx files? To your .PNG files?
          • andrewflnr8 days ago
            ...that's rather a different question, I think. Rsync doesn't claim to use a semantic diff.
    • secure8 days ago
      Indeed! Have a look at http://github.com/stapelberg/rsync-over-grpc/, where I demonstrate how to use the rsync protocol (specifically, my https://github.com/gokrazy/rsync implementation) over gRPC.

      Very handy if SSH+rsync is locked down in your corporate environment, but building services with gRPC isn’t :)

    • chungy8 days ago
      The website says "We are still working on it... so please wait."

      rsync has a lot of features, surely this will take a good amount of time.

    • drob5188 days ago
      librsync, anyone?
      • edoceo8 days ago
        LGPL
        • mattl8 days ago
          librsync is distributed under the GNU LGPL v2.1

          I can see no reason why Apple wouldn't be fine with that.

        • DrillShopper8 days ago
          Maybe Apple should stop leeching off Free Software then
          • p_ing8 days ago
            BSD license allows/intends for this. The basic netutils in Windows come from BSD.
          • simondotau8 days ago
            I don’t even know what you mean by leeching. Clearly you don’t like it so you should add a clause disallowing it into whatever software you write.

            If someone else wants to release their software under a more permissive license, that’s none of your damn business.

    • candiddevmike8 days ago
      How does this mean rsync is a protocol?
      • somat8 days ago
        It was always a protocol; however, it is never good when a protocol is defined by its only implementation.

        My understanding is that this is the whole reason for the existence of openrsync. The people working on the RPKI standards wanted to use rsync for one type of transfer, and the standards body (IETF?) balked, concerned that the rsync protocol had only one implementation. So the OpenBSD folks, specifically Kristaps Dzonsons, stepped up and wrote a second implementation. It does not do everything rsync does, but it interoperates enough for the RPKI project.

        https://man.openbsd.org/rpki-client

        • superkuh8 days ago
          >however it is never good when the protocol is defined by it's only implementation

          One counter-example to this is desktop GUI environments. You want one single strong reference implementation there, for stability and consistent expectations of what will run. Pretty much everything that runs on the eleventh X protocol will work with X.org's X11 everywhere. Whereas the core Wayland protocol is not feature-complete and the reference implementation, Weston, is weak. So every Wayland compositor implements what should be core Wayland protocol features with its own choice of third-party lib or custom code. Like libei vs libinput vs no support at all (Weston) for normal keyboard/mouse features. Software that works on one Wayland won't work on others.

          My point here is that strong single reference implementations prevent fragmentation. And sometimes that's important. This is not one of those cases and I'm glad to see more rsync protocol implementations.

        • josephg8 days ago
          > it is never good when the protocol is defined by it's only implementation

          I don't know that I'd go that far. The benefit of having only one implementation of a protocol is that the protocol can evolve much faster. You don't have to have committee meetings to tweak how it works. And as a first pass, the more iterations you make of something, the better the result.

          Rsync is mature enough to benefit from multiple implementations. But I'm glad it had some time to iterate on the protocol first.

          • throw0101d8 days ago
            > The benefit of having only one implementation of a protocol is that the protocol can evolve much faster.

            Or you design the protocol to allow non-standard extensions, like with SSH, so you can have foo@example.com implemented by one product (and others can look for it if useful), and bar@example.org by another product. And if enough folks find the feature(s) useful, they can be standardized, with tweaks that fix issues found through operational experience.

            Lots of IETF standards mention an "x-" prefix for private / proprietary extensions.

            • josephg8 days ago
              Sure; but there's a limit of how much you can sensibly do with an extension mechanism. You can't - for example - change a text based protocol into a binary protocol using an extension mechanism. If you're in control of both client and server, you can change everything.
          • ycombinatrix8 days ago
            e.g. Wireguard
      • bombela8 days ago
        Think ssh, http etc
  • watersb8 days ago
    Patches to mainline rsync added support for extended attributes, particularly for supporting macOS metadata.

    Bombich "Carbon Copy Cloner" is a GUI app that wraps it.

    https://support.bombich.com/hc/en-us/articles/20686446501143...

    I started following Mike Bombich from his posts on macOS Server sysadmin boards; see

    https://web.archive.org/web/20140707182312/http://static.afp...

    Nathaniel Gray created a testing tool to verify the fidelity of backups; files with multiple streams, extended attributes and ACLs, all the good stuff... Backup Bouncer:

    https://github.com/n8gray/Backup-Bouncer

    See also this SwiftUI app that wraps rsync, RsyncX.

    https://github.com/rsyncOSX/RsyncOSX

    We used to really care about this stuff, back when we were still running software from "Classic" macOS on top of our new UNIX systems.

    https://web.archive.org/web/20161022012615/http://blog.plast...

    • doctorpangloss8 days ago
      The problem with rsync is that it is ridiculously slow.

      IFileOperation (Windows) and FileManager (macOS) will do the most performant copy supported by the underlying FS.

      Enabling CRC checks is a checkbox in SMB and ReFS - rsync's content matching step is redundant to a modern SMB share on a modern Windows Server. Windows to Windows, IFileOperation will be like 1.5-8x faster throughput with lower CPU usage than rsync, and maybe 1.2-3x faster than doing a file copy using vanilla golang.

      And if you don't care about the operating systems that actually use all the complex filesystem metadata, if you only care about Linux, then you only need openrsync or simpler programs.

  • jeroenhd8 days ago
    So, anyone got a good resource on why Apple is so afraid of GPLv3? Surely this shouldn't be a problem as long as they statically compile the executables?
    • ninkendo8 days ago
      GPL3 closes what was considered a loophole, where device makers would ship a product derived from GPL’d code, and release the source, but provide no ability for users to actually compile and run that source on the device (this was called “tivo-ization” at the time, because TiVo did it.)

      So for iOS, it’s pretty obvious why they don’t use gplv3… because it would violate the terms.

      For macOS they could certainly get away with shipping gplv3 code, but they do a lot of code sharing between iOS and macOS (and watchOS/tvOS/visionOS/etc) and it doesn’t make much sense to build on a gplv3 foundation for just one of these operating systems and not the others. So it’s simpler to just not use it at all.

      It also means they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

      • mappu8 days ago
        > this was called “tivo-ization” at the time, because TiVo did it.

        It's not widely known but what TiVo actually did was something different than this, and both RMS and the SFC believe that both the GPLv2 and GPLv3 allow what TiVo actually did. Some discussion and further links via https://lwn.net/Articles/858905/

        • imcritic8 days ago
          I'm just curious: do you have that link bookmarked?
          • skydhash8 days ago
            Sometimes you just remember the article and the right keywords can get it easily from a search engine.
      • duskwuff8 days ago
        Current versions of macOS use a signed system volume [1], much like iOS - under a standard system configuration, the user can't replace system executables or other files, even as root. Unlike iOS, the user can disable SSV, but I'm not certain that's sufficient for GPLv3 - and I can't imagine Apple feels comfortable with that ambiguity.

        [1]: https://support.apple.com/guide/security/signed-system-volum...

        • ezfe8 days ago
          By the GNU website it would be sufficient. The website says:

          > GPLv3 stops tivoization by requiring the distributor to provide you with whatever information or data is necessary to install modified software on the device

          By my reading of this, there is not a requirement that the operating system is unlocked, but the device. Being able to install an alternate operating system should meet the requirement to "install modified software on the device."

          > This may be as simple as a set of instructions, or it may include special data such as cryptographic keys or information about how to bypass an integrity check in the hardware.

          As you've mentioned with disabling SSV, and as Asahi Linux has shown, Apple Silicon hardware can run 3rd party operating systems without any problems.

          • WD-428 days ago
            The hardware might be open for now but you can imagine Apple would like to keep the possibility of closing it off on the table, thus the allergy to gplv3.

            Edit: "without any problems" is definitely a stretch.

            • ezfe7 days ago
              Those problems are development challenges. The system is fully set up to allow it, even if Apple doesn't hand hold them through.
              • WD-427 days ago
                I feel like there is a wide gap between "hand holding" and holding the specs locked up in Cupertino, never to see the light of day. An M-generation Mac is not set up to allow running software in the same way as, say, any x86 machine.
          • rtpg8 days ago
            I also imagine that quite simply saying "look you can compile this binary as an alternative and run it on the machine" would fit the requirements, even if it doesn't entirely capture the spirit of anti-tivoisation
            • philistine8 days ago
              Still doesn't change the fact that Darwin is the basis for iOS, tvOS, watchOS etc.

              Can't install Asahi Linux on those!

        • chongli8 days ago
          Sure, though there's little point in replacing executables such as rsync when you can install your own version (perhaps through a package manager and package repository / database such as Homebrew [1] or MacPorts [2]) and use the PATH environment variable to decide which version of the executable you'd like to use in which context.

          [1] https://brew.sh

          [2] https://www.macports.org

          • __float8 days ago
            This might be true for the most part as an end user, but from a licensing perspective regarding the original binaries, this is irrelevant.

            You must be able to modify and change the code, not merely append to the PATH:

            > Tivoization: Some companies have created various different kinds of devices that run GPLed software, and then rigged the hardware so that they can change the software that's running, but you cannot.

            from https://www.gnu.org/licenses/quick-guide-gplv3.en.html

            • duskwuff8 days ago
              I'd advise looking at the actual language of the GPL, not the FSF's (non-binding) statements about what they intended it to mean. The relevant text is at the end of section 6 of https://www.gnu.org/licenses/gpl-3.0.txt - search for the words "Installation Information". I am not a lawyer, but my reading of the text suggests that:

              1) The so-called anti-Tivoization clauses are scoped to "consumer products". Don't ask me why, but the language is very deliberately constructed to limit these terms to products "which are normally used for personal, family, or household purposes" - if you're building hardware for commercial or industrial use, none of this applies.

              2) These clauses are also scoped to object code which is conveyed "as part of a transaction" in which the user purchases or rents a consumer product which the code is intended for use with. The intent was to limit this to software which was incorporated in the device; however, it accidentally ends up applying to any consumer transaction where the user purchases (e.g.) both a computer and a piece of software which includes GPLv3 code - regardless of who's selling them. So, in practice, this actually applies to any GPLv3 software, regardless of whether it's part of a device's firmware or not.

              3) The end result of these clauses is to require that any software distributed under these conditions (which is to say, any GPLv3 software) be distributed with "Installation Information". It's somewhat ambiguous what precisely this encompasses, but it's quite possible that, if Apple distributed GPLv3 software, some of their internal software signing keys and/or build processes would be considered part of that Installation Information.

            • chongli8 days ago
              My claim is entirely from the end user perspective. We should not really care which tool Apple includes for their licensing purposes. If we have a dependency on a particular tool then we have the ability to install and use it ourselves. The signed system volume does not interfere with our ability to do that.
          • kuschku8 days ago
            I'm not sure that'd qualify, as many tools shipped with the system would continue to use the preinstalled version, not yours.
        • jillyboel8 days ago
          Ew, how hostile.
        • troyvit8 days ago
          > Current versions of macOS use a signed system volume

          Sometimes I feel like I'm deluding myself with the small inconveniences I put myself through only using Linux, but finding out about stuff like this wipes that away.

      • pabs38 days ago
        TiVo didn't do that, they broke their proprietary software when it ran on a modified version of the GPLed Linux kernel.

        Also, GPLv2 requires the ability to modify and reinstall, just like GPLv3.

        https://sfconservancy.org/blog/2021/mar/25/install-gplv2/ https://sfconservancy.org/blog/2021/jul/23/tivoization-and-t...

        Neither GPLv2 nor GPLv3 prevent what TiVo actually did.

        https://events19.linuxfoundation.org/wp-content/uploads/2017...

      • harry88 days ago
        > So for iOS, it’s pretty obvious why they don’t use gplv3… because it would violate the terms.

        Apple is using "openrsync" because they want to close the code more than the rsync license lets them.

        • mattl8 days ago
          I’m not sure they care about rsync’s code, they probably just don’t want to maintain an old fork of rsync under GPLv2.
          • Aachen8 days ago
            Why should they? Apple isn't the maintainer of rsync or other third party software last I checked
            • mattl8 days ago
              This is what they have been doing for years since rsync was released under the GPLv3.
      • jitl8 days ago
        > It also means they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

        how does locking down macOS have anything to do w/ GPL compliance? Apple is free to do whatever BS with the OS they ship in terms of terminal access, user permission level, etc regardless of GPL of any code on the device. I could ship a GPLv3 system tomorrow that disallows user root access and as long as I make the OS source freely available and redistributable, it's fine.

        • ninkendo8 days ago
          If you make a device which uses GPL’d code, and provide all the covered source code you used, but prevent users from putting any modified code on the device, you are in violation of GPLv3, but not GPLv2. That means this sentence:

          > I could ship a GPLv3 system tomorrow that disallows user root access and as long as I make the OS source freely available and redistributable, it's fine.

          Is not true for gpl3. It’s called the “tivo-ization” loophole, and it’s one of the principal reasons the GPL3 was made in the first place. I think you’re just wrong.

          (Note: I’m not claiming Apple would be in violation for shipping e.g. a GPLv3 bash on macOS, today, only that they would be in violation for doing that on iOS today, or if in the future they locked down macOS in the same way that iOS was, then for macOS too.)

        • 8 days ago
          undefined
        • 8 days ago
          undefined
        • NewsaHackO8 days ago
          [flagged]
      • p0w3n3d8 days ago
        > they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

        That's actually quite scary what you wrote there.

        That's also even more scary to me, as I am really watchful for such restrictions which can IMO happen in current OSes any time now ...

        • kijiki7 days ago
          This is really easy, just use Linux.
          • p0w3n3d7 days ago
            Easy unless web services start requiring you to use TPM or other things that limit your possibilities further
      • KerrAvon8 days ago
        No, this doesn't quite scan, because there's no reason they couldn't ship a current version of `bash` or any number of other GPL3 things. Aurornis is probably closest to the mark: it is legally ambiguous, and Apple probably does not want to be a test case for GPL3 compliance.
        • ninkendo8 days ago
          If they shipped a gpl3 version of bash on iOS, they would be in violation. This isn’t really a question: gpl3 requires you to not only provide the source if you use it in a product, but the ability to modify it and run your modified version. Which iOS doesn’t let you do.

          Now, macOS would be fine in shipping a gpl3 bash. But not iOS. (Yes, iOS has bash. Or at least they used to, they may be all on zsh now, I’m not sure.)

          So, the question becomes to Apple, do we ship different bash versions for different devices, and treat macOS as being different, and have to worry about only using newer bash features on macOS? Or do we keep the same old version on all platforms, and just eschew the new bash everywhere? It’s a pretty simple decision IMO, especially because users can just use brew on macOS and put their own bash on there if they want.

          Others are pointing out that gpl3 is less tested in court and that lawyers are just more uncertain/afraid of gpl3 than gpl2, especially with respect to patents… but I don’t think these are mutually exclusive. It’s clear that they can’t ship gpl3 on 4 out of their 5 operating systems. macOS is an outlier, and from an engineering standpoint it’s a lot simpler to just keep them all the same than it is to ship different scripts/etc for different platforms. It can be both reasons.

      • Someone8 days ago
        > For macOS they could certainly get away with shipping gplv3 code

        Even limiting that to “in the USA” I would never say certainly for a license for which so little jurisprudence exists.

        Once you add in multiple countries, it doesn’t get clearer.

        And yes, that applies to GPLv2, too, but that ship has sailed. I also don’t see them add much new GPLv2 licensed software.

        For GPLv3, they also may be concerned about patents. If, to support some MacOS feature, they change a GPLv3 licensed program that uses one of their patents, GPLv3 gives others the rights to use those patents in versions of the tool that run on other platforms.

      • jeroenhd8 days ago
        For iOS that makes sense, I suppose, but does Apple really ship the rsync binary to iOS?

        I suppose the way they prevent you from replacing system files could violate the GPLv3 clause, but still, it seems silly.

    • Aurornis8 days ago
      My perspective on GPL and related licenses changed a lot after working with lawyers on the topic. Some of the things I thought to be completely safe were not as definitive to the lawyers.

      I don’t know Apple’s reasoning, but I know that choosing non-GPL licenses when available was one of the guiding principles given to us by corporate lawyers at another company.

      • cosmic_cheese8 days ago
        A lot of it is indeed the legal murkiness.

        On the engineering level, other licenses likely get selected because it’s easy. You don’t need to consult the legal department to know how to comply with licenses like MIT, BSD, etc, so you just pull the thing in, make any required attributions, and continue on with your day. It’s a lot less friction, which is extremely attractive.

        • KerrAvon8 days ago
          Yes, although even for the more liberal licenses you actually still want legal review at a sufficiently large company to ensure that your engineering read of the license is accurate. What if someone changed the wording slightly in some way that turns out to be legally significant, etc.
          • cosmic_cheese8 days ago
            That might apply in a handful of cases, but the vast majority will check out when a quick diff against a reference license file shows that the only changes are party names.
            • KerrAvon8 days ago
              I think it's very unlikely to happen, in general. I'm just saying a large corporation will want to check every time because they cannot really afford to do otherwise.
              • arccy8 days ago
                you don't have to be a large corporation; there are a bunch of automated tools that help you check your dependencies' licenses and flag anything non-standard.
        • butchlugrod8 days ago
          I work at a large corporation, but one that only has 6% of Apple’s annual revenue. Even the emails we send to end users get a review from the legal team prior to us hitting send.

          Yeah, there are some assumptions which can be made about licenses and their suitability for our purposes, but no serious organization is touching that code until there has been a full audit of those license terms and the origin of every commit to the repository.

        • pjmlp8 days ago
          The kind of places I usually work for, you do need to consult with legal regardless of the license.

          And to prevent your scenario, usually CI/CD systems are gapped to internal repos, unless dependencies are validated and uploaded into those repos, the build is going to break.

      • giantrobot8 days ago
        This was basically the justification I was told when I was at Apple. The GPLv3 is too viral for the liking of Apple's legal department. They do not want to be the test case for the license.
        • quotemstr8 days ago
          The funny thing is that the rest of the world has moved on and is no longer afraid of the GPLv3. The reality that people aren't, as Apple's legal people predicted, being legally obliterated hasn't changed Apple legal's stance. Doomsday cults actually get stronger when doomsday fails to arrive.
          • kmeisthax8 days ago
            The reason why doomsday never came is that the GPLv3 bomb was never dropped. Linux, Android, and busybox all rejected v3, because it's basically a ban on embedded development[0], and that's all the FOSS most embedded developers care about using.

            Likewise, if you don't do any embedded, you don't need to worry about v3, it's functionally identical to v2 except the compliance story is slightly easier (you don't immediately lose your license if you fuck up a source release).

            There's very few companies that have their fingers in both the embedded and desktop markets; those are the ones that need to worry about GPLv3 doomsday. AFAIK that's only Apple and Microsoft[1], both of which have very hostile attitudes towards v3 as a result.

            [0] To be clear, when you hear "embedded development", think "TiVoization". The business model of embedded development is putting your proprietary software in a box to sell. GPLv3 wants to make it so that if you do that, you can't stop someone from modifying the OS around the software by making the software detect that and break. But that also makes it significantly harder to defend your business model. Remember: the embedded landscape is chock full of very evil DRM schemes, many of which would break trivially if the app had to support running on arbitrarily modified OSes or with arbitrarily modified libraries.

            [1] Microsoft controls the signing keys for UEFI, and while they are willing to sign stuff to let Linux boot, they will not sign GRUB because that's GPLv3 and they worry signing any v3 software will obligate them to release their signing keys.

          • hnfong8 days ago
            The rest of the world has moved on and is no longer using GPLv3.

            In the early 2000s all the miscellaneous small projects on sourceforge used GPLv2 (v3 was not out yet).

            These days you'll be hard pressed to find any new projects using GPLv3, except the ones with close ties to the GNU or FSF.

            The GPL is getting more irrelevant and more easy to avoid. That's why nobody is afraid of GPLv3 any more.

            • rs1868 days ago
              Exactly. I am surprised this isn't talked about more.

              The web stack is such an example. Almost everything you use -- chrome, webpack, electron, babel, React etc all adopted the permissive license.

              Not quite so for other areas, but I can count on one hand the number of GPLv3 licenses I have seen in new projects.

              • skydhash8 days ago
                Most of those projects are from corporate settings and were created for corporate projects.
          • giantrobot8 days ago
            Most organizations don't have many billions of dollars at stake. I doubt you'll find many Fortune 500 companies with a flippant attitude towards the GPLv3. You don't even see the GPLv3 used much by the "we love Open Source" crowd. Most externally released FOSS is under non-viral Open Source licenses.

            No big company wants to spend a million(s) dollars defending themselves from an NPE with an East Texas mailbox in a frivolous licensing suit. Worst case is a judge deciding the license infects their proprietary code because they're built on the same cluster.

            The rest of the world has hardly moved on. I've heard of multiple companies with the same GPLv3 policy as Apple for largely the same reasons.

          • arccy8 days ago
            I think the rest of the world is very much moving in Apple's direction: look at what Ubuntu is doing, and any big open source project with more than a single corporate backer (i.e. not just using open source as a marketing channel) isn't using GPL.
            • pama8 days ago
              Not sure what you mean about ubuntu… there is tons of GPL there. https://ubuntu.com/legal/open-source-licences?release=jammy
              • anonfordays8 days ago
                Replacing GPL coreutils with Rust reimplementations. The conspiracy theorists say that's the reason behind the huge RiiR push. There's effectively zero GPL'ed Rust software.
                • quotemstr8 days ago
                  It makes me sad to realize that it was possible that the GPL was necessary to bootstrap free software culture and that we no longer need it now that we've won.
                  • johannes12343218 days ago
                    Is there a win?

                    One large side of the industry is turning to managed services. They run free/libre software, but build lock-in at a higher level and avoid giving direct access.

                    On the other market, the desktop, free/libre software won, as with Android and the free/libre parts of macOS/iOS.

                    However, they don't do that to benefit free/libre software in any way, but to get software cheap or even for free.

                    Given how much this flows in only one direction, there isn't a win.

                  • pabs38 days ago
                    We definitely have not won, locked-down consumer device vendors like Apple are the prime example of how we lost.
                  • pjmlp8 days ago
                    MIT and BSD predates it, and GPL only had a go at it for two reasons:

                    1 - Sun decided to innovate by splitting UNIX into user and developer SKUs, thus making the until-then irrelevant GCC interesting to many organisations not willing to pay for the UNIX development SDK.

                    2 - AT&T tried to get control back over UNIX's destiny, and made BSD's future uncertain

              • 8 days ago
                undefined
            • applied_heat8 days ago
              What is Ubuntu doing?
              • WD-428 days ago
                There was an Ubuntu engineer recently talking about using the rust coreutils which are bsd licensed instead of the old gpl ones. But he made it clear it was more about “rust is better” than anything to do with the license.
      • palata8 days ago
        > but I know that choosing non-GPL licenses when available was one of the guiding principles

        Sure, but in this case Apple has chosen, for 20 years, to not go with GPLv3 when there was no alternative.

        • sbuk8 days ago
          You could also say the same of the Linux kernel too. After all, they have chosen, for 20 years, to not go with GPLv3…
          • palata8 days ago
            It's different. You are talking about the Linux kernel changing their licence to GPLv3. We were talking about macOS shipping a GPLv3 program.
          • stephen_g8 days ago
            Which is a fair choice, since so much of Linux development and driver development is driven by commercial interests - there would very likely be a fork from the last GPLv2 commit which all the vendors would switch to...
      • jillesvangurp8 days ago
        I've had similar training back in the day. This was when my employer (Nokia) was making Linux based phones and they needed to educate their engineers on what was and wasn't legally dodgy to stay out of trouble. Gplv2 was OK with permission (with appropriate measures to limit its effect). Particularly with Java, you had to be aware of the so-called classpath exception Sun added to make sure things like dynamic linking of jar files would not get you into trouble. Permissive licenses like Apache 2.0, MIT, and BSD were not considered a problem. GPLv3 was simply a hard no. You'd get no permission to use it, contribute to it, etc.

        Apple, Nokia, and many other large companies employ lawyers who advise them to steer clear of things like GPLv3. The history of that particular license is that it tried to tighten a few things relative to GPLv2, which had unintentionally allowed things like commercial Linux distributions mixing closed and open source. That's why Android exists and is Linux based, for example; that could not have happened without the loopholes in GPLv2. In a way that was a happy accident, and definitely not what the authors of that license had in mind when they wrote the GPL.

        It's this intention that is the problem. GPLv3 might fail to live up to its intentions in some respects because of untested (in court), ambiguous clauses, etc. like its predecessor. But the intention is clearly against the notion of mixing proprietary and OSS code. Which, like it or not, is what a lot of big companies do for a living. So, Apple is respecting licenses like this by keeping anything tainted by it at arms length and just not dealing with it.

        • pjmlp7 days ago
          As someone on the Networks side, I had the pleasure to write multiple Excel files with all the dependencies of our product listing all the relevant facts for every single jar file.
      • ants_everywhere8 days ago
        I'm curious if you remember any of the specifics.

        At a big company I worked for, GPL licenses were strictly forbidden. But I got the vibe that was more about not wanting to wind up in a giant court case because of engineers not being careful in how they combined code.

        I'd be super curious if there are explicit intentional acts that people generally think are okay under GPL but where lawyers feel the risk is too high.

        • squiggleblaz8 days ago
          Linking against GPL code on a backend server which is never distributed - neither in code or binary form. (Because what might happen tomorrow? Maybe now you want to allow enterprise on prem.)
      • ndiddy8 days ago
        > Some of the things I thought to be completely safe were not as definitive to the lawyers.

        Can you elaborate?

    • ndegruchy8 days ago
      In all likelihood they just don't want to broach the idea of having to fight (and potentially lose) the GPL3 in court. Given the case history on the GPL2, it seems like more work than it's worth. They can just replace the parts that are "problematic" in their eyes and avoid a whole class of issues.
    • gtsop7 days ago
      Because they are clearly against users' rights, the only thing GPL is trying to protect. The exact technicalities I believe are not very important (yet they might be interesting).
    • Arnt8 days ago
      Apple doesn't say. IMO you should not trust other people's statements about Apple's reasoning.
    • m4638 days ago
      "Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software." -- https://www.gnu.org/licenses/gpl-3.0.en.html
    • 8 days ago
      undefined
    • toast08 days ago
      They're respecting the terms of the license.

      Especially when a piece of software changes from GPLv2 to GPLv3: it's effectively asking Apple to stop updating it, and they do as asked.

    • pjmlp8 days ago
      Not only Apple, everyone.

      I've never worked at any company that allows GPLv3 dependencies, and even GPLv2 ones aren't welcome unless validated by the legal team first.

      • jeroenhd8 days ago
        But Apple did ship GPLv2. They're shipping open source tools with infectious licenses like they always did.

        This isn't like the normal "take someone else's work for free but don't give anything back" approach most companies follow when they decide to avoid GPL code.

    • quotemstr8 days ago
      Companies develop idiosyncratic cultures and either learn to live with them or die. Apple's learned to live with a legal culture deathly afraid of the GPLv3. Some influential director or someone made a decision 20 years ago and the GPLv3 superstition became self perpetuating, reality be damned. Outside incentives never became strong enough to override it.

      Every company has its stupid superstitions.

    • WD-428 days ago
      Probably because they are working towards a future where they don’t have to worry about releasing source code for anything, while being free to make any modifications they want. They just need time to code around all the FOSS they’ve leeched off of the last couple decades or wait for BSD licensed projects like this pop up to do that work for them.
    • banqjls8 days ago
      The TiVo clause.
      • rbanffy8 days ago
        It wouldn’t apply to the kernel. Also, a lot of the command line tools are not distributed as part of the OS.
        • jhasse8 days ago
          [flagged]
          • mistrial98 days ago
            from the point of view of the GPL side of the aisle, yes, agree they are evil. Shareholders who want returns are on the other side of the aisle, so to speak, and definitely see "risk" and "no" when it comes to anything close to GPL. OK, no problem, except that the code that Apple Computer profits mightily from substantially originates in the former.

            recall the John Gilmore camp handing out "don't tread on me Apple" buttons 35 years ago.. it has been going on that long. Apple knows very well what they are doing.

          • Eggpants8 days ago
            lol. You must be a hoot at parties.
  • secure8 days ago
    I looked at openrsync when I was writing my own https://github.com/gokrazy/rsync implementation (in Go!) and it’s good code :)

    It’s a shame that openrsync is not 100% compatible with rsync — I noticed that Apple was starting to switch to openrsync because my own tests broke on macOS 15.

  • Symbiote8 days ago
    > openrsync is written as part of the rpki-client(1) project, an RPKI validator for OpenBSD. openrsync was funded by NetNod, IIS.SE, SUNET and 6connect.

    Could anyone suggest why these organizations would want to fund this development?

    https://github.com/kristapsdz/openrsync?tab=readme-ov-file#p...

    • jimsmart8 days ago
      This comment explains the reason for its existence quite well:

      https://news.ycombinator.com/item?id=43605846

      Companies fund things because they're useful or necessary. My guess is that some of the companies listed might use BSD — and perhaps wanted/needed an implementation of rsync that was not GPL3 licensed.

      And/or they simply have an interest in funding Open Source projects / development.

      • Squossifrage8 days ago
        Three out of four aren't even companies. SUNET is the Swedish NREN, NetNod is a non-profit that manages Internet infrastructure services (like DNS and NTP) in Sweden, IIS is the non-profit that manages the Swedish TLDs.
        • jimsmart8 days ago
          Feel free to substitute my use of the word "company", with "company / organisation / foundation". Plus others I'm surely forgetting.

          I meant 'company' in the sense of a legal entity, probably paying some kind of tax, probably having to register/file their accounts every year. Here in the UK, all of these various different types of 'companies' all have to register with Companies House, and file tax returns to HMRC. 'Company' is the overarching legal term here.

          — But sure, my bad: the post I was replying to actually used a term that is arguably better, 'organisations'. And I should have used that.

          But my point still stands, whether a private limited company, or a non-profit of some kind, or an organisation, or a foundation, or a charity, or whatever — they're all legal entities of some kind — and they're all able to fund anything they please, if they see value in it.

          - NetNod is actually a private limited company according to Wikipedia [1]. Corporate identity number: 556534-0014.

          - Swedish Internet Foundation, formerly IIS, have corporate identity number: 802405-0190 (on their website [2])

          - Sunet is a department of the Swedish Research Council, and uses the Swedish Research Council’s corporate identity number 2021005208, according to their website [3]

          So they are all registered with the Swedish Companies Registration Office. Which I assume is their equivalent of Companies House here in the UK.

          Maybe if you still think that they're not 'companies' — of some kind — then perhaps take it up with the Swedish Companies Registration Office! ;)

          [1] https://en.wikipedia.org/wiki/Netnod

          [2] https://internetstiftelsen.se/en/

          [3] https://www.sunet.se/en/contact

  • emmelaich8 days ago
    For a while (up to and including Sequoia 15.3), both rsync_samba and rsync_openrsync were available, via /var/select/rsync or the env variable CHOSEN_RSYNC.

    One particular annoyance of openrsync is that it claimed to support the /./ magic path element for --relative. I sent a bug report to Apple for this about a month ago.

    rsync_samba is gone as of Sequoia 15.4.

    I've installed rsync from homebrew.
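
    For anyone still on a release that ships both, a quick way to see which implementation is in play (a sketch based only on the mechanism described above; I haven't verified the exact value the selector expects, so the CHOSEN_RSYNC setting below is an assumption):

        # What does the default rsync report itself as?
        rsync --version

        # The selector file mentioned above, if present
        ls -l /var/select/rsync

        # Assumed: override per-invocation using one of the binary names above
        CHOSEN_RSYNC=rsync_samba rsync --version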

  • 0x08 days ago
    I recently ran into an issue with this: building an iOS .ipa from the command line with xcodebuild apparently ends up shelling out to rsync to copy some files between local directories. Because I had Homebrew rsync earlier in $PATH, it would end up running Homebrew rsync, but xcodebuild passed an openrsync-only command line argument ("--extended-attributes") that Homebrew rsync doesn't understand, so it would exit with a failure.
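
    One possible workaround, sketched rather than tested: make sure xcodebuild resolves Apple's /usr/bin/rsync instead of Homebrew's for that one invocation (the scheme name and archive path below are placeholders):

        # Put the system paths ahead of Homebrew just for this build
        # ("MyApp" is a placeholder scheme name)
        PATH="/usr/bin:/bin:/usr/sbin:/sbin:$PATH" \
            xcodebuild archive -scheme MyApp -archivePath build/MyApp.xcarchive
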
  • abotsis8 days ago
    I continue to be happy that Apple continues to enhance and embrace the POSIX side of macOS rather than gradually stripping it away in some kind of attempt to make it more like iOS.
  • fmajid8 days ago
    Just like they replaced bash with zsh. Most Big Tech firms are allergic to GPL3.
    • 7e8 days ago
      GPLv3 is a legal landmine. In fact, GPL itself is wildly unpopular compared to more permissive licenses. The FSF is getting what it deserves here. Open source predates the FSF and will remain long after the FSF is dead.
      • anthk8 days ago
        Thanks to the FSF we have cheap Unix clones with easy installs. Even Android should thank the FSF for its existence.
      • mcstafford8 days ago
        Whose popularity do you champion, and what sorts of motive bring deservedness into the discussion?
      • wanderingmind8 days ago
        Can you show examples of impactful open software that predates fsf and stallman?
        • donnachangstein8 days ago
          BSD predates the Stallman Utilities (kernel sold separately) by about a decade.*

          * in "shared source" form

          • hollerith8 days ago
            The BSD releases did not form a complete OS and were not runnable except in combination with source code from ATT Unix, which was emphatically proprietary software. The first release of BSD that was unequivocally legal for anyone to acquire and run without getting ATT's permission was 4.4BSD-Lite in June 1994. (Yes, organizations did create OSes from BSD Networking Release 2 (Net/2) released in June 1991, but legal uncertainty hung around them for years.)

            In contrast, by 1984, Stallman had already formed a close working relationship with a competent lawyer (Eben Moglen) to devise a legal strategy to maximize the probability that everyone will continue to enjoy a list of freedoms (chosen by Stallman) around any software put under the GPL.

            • mustache_kimono8 days ago
              > The BSD releases did not form a complete OS and were not runnable except in combination with source code from ATT Unix, which was emphatically proprietary software.

              Is that the measure: a complete OS? When exactly did GNU ship a complete OS?

              IMHO none of the above is relevant to the question of which was first. IMHO neither was first. IBM, among others, were shipping source code with their products, until they didn't. OSS is and was a reaction to an object-code-only model. And there were seeds at Berkeley and MIT.

              And Stallman isn't strictly responsible for the MIT strain. As Keith Packard said in his "A Political History of X", the X11 project chose not to use the GPL license, because Stallman was simply too annoying.

              • hollerith8 days ago
                >Is that the measure: a complete OS?

                The fact that BSD was incomplete is relevant because it illustrates the fact that the only people who could run BSD were shops that had a source-code license for the proprietary AT&T Unix.

                • mustache_kimono8 days ago
                  So... >> When exactly did GNU ship a complete OS?

                  > the only people who could run BSD were shops that had a source-code license for the proprietary AT&T Unix.

                  So -- finally! -- that's the measure of OSS? It must run on non-proprietary systems? Not simply the source code? OSS that runs on Windows or MacOS or VMS is not actually OSS?

                  You figure that Linux is the first non-proprietary system in 1991? Not 4.3BSD released in 1989?

                  I think you can understand my and others' reluctance to state definitively that Stallman was first, when by a dozen different metrics he wasn't. I'm still trying to understand what he was supposedly first at. First to find a lawyer?

                  Linux is important. GNU is important. BSD is important. And they remain important. I don't think any of them are made more important by distinguishing only one and not the others. Like -- as much as it pains me to say it, because of how I loathe Stallman and the FSF, GCC was more than important to the entire ecosystem for years. Until LLVM, it was required. Etc, etc.

                  • dagw8 days ago
                    >> When exactly did GNU ship a complete OS?

                    I want to say around 2006 or 2007 was the first time a 'normal' *nix hacker could install and boot[0] a complete GNU OS[1] and get something resembling work done (ie edit and compile C code in vi or emacs). (yes I know the question was rhetorical)

                    [0] without having to do a bunch of bootstrapping steps and other hackery

                    [1] Technically 'shipped' by Debian rather than GNU/FSF

              • anthk8 days ago
                GNU+Linux was good enough. Meanwhile, BSD in the early '90s was rotting until the 4.4BSD forks arose.
                • mustache_kimono8 days ago
                  > GNU+Linux was good enough. Meanwhilke, BSD in early 90's was rotting until the BSD 4.4 forks arise.

                  Was this necessary? What exactly are we arguing about again?

            • donnachangstein8 days ago
              > but legal uncertainty hung around them for years.

              I mean if we're going to split hairs and play this game, SCO claimed ownership to alleged Unix code in Linux which wasn't initially resolved until 2008 or so (and further continued for another decade). That never stopped anybody.

              • hollerith8 days ago
                Yes, but not having a copy of the source code for ATT Unix stopped everyone from using BSD or any system based on BSD till 1991. Again, before then BSD was very far from being a complete OS.

                So BSD has severe shortcomings as an answer to the question that started this thread, namely, "Can you show examples of impactful open software that predates fsf and stallman?"

          • ndiddy8 days ago
            The first time any BSD code was made publicly available was Networking Release 1 (just contained the networking stack) in 1989, or around 5 years after Stallman started the GNU project. It took until Networking Release 2 in 1991 for the code for a runnable BSD operating system to be made publicly available. Prior to that, BSD was based on proprietary UNIX source code, and anyone who wanted to run it had to purchase a source code license from AT&T.
            • donnachangstein8 days ago
              > or around 5 years after Stallman started the GNU project.

              So 5 years after he started with an empty repo and some political ramblings?

              GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

              There is a lot of deliberate ignorance of public domain code being posted on BBSes at the time. I'm not discounting anything Richard did but let's not rewrite history here.

              • ndiddy8 days ago
                > So 5 years after he started with an empty repo and some political ramblings?

                Or around 4 years after the first public GNU Emacs release, 4 years after the first public GNU Bison release, 3 years after the first public GDB release, and 2 years after the first public GCC release.

                > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

                Correct, just like how the initial public BSD release was just the networking stack (worthless on its own).

                > There is a lot of deliberate ignorance of public domain code being posted on BBSes at the time.

                Not sure where you got that from. Nobody claims that Stallman was the first one to come up with publicly releasing source code. I will say that a lot of the "public domain" software from back then lacks the uniformity you see from later movements like free software or open source. Some of it isn't even public domain, and has a license like "this is copyright me, any modified copies must have my copyright statement preserved, this software may not be used for commercial purposes".

              • squiggleblaz8 days ago
                > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

                People were installing GNU onto existing Unix systems because GNU was better than what those systems shipped with. Maybe they did that with components of BSD Net/1 - no one has ever told me they did but it probably happened - but that was definitively post GNU.

                Anyway, I'm not sure if this matters so much to the debate. Stallman was reacting to a change. He rambled politically and wrote some code to back it up because he used to be able to do things, and now he could only do them if he would write some code and win some allies.

              • pessimizer8 days ago
                > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

                Whether or not GNU had an OS or would ever have an OS has nothing to do with anything, though. What are you trying to illustrate? Those "pieces and components" are some of the most used pieces of software in history.

                • mistrial98 days ago
                  agree - portability across *nix was the point, not a complete product
        • emmelaich8 days ago
          Sharing of software utilities (typically via tape) used to be very common in every user group from the start (1960s). It was just the culture, and expected, especially among IBM mainframe and DEC VMS users.

          Of course the answer to your question depends on the definition of 'open source' and 'impactful'.

      • handsclean8 days ago
        “Pesticide wildly unpopular with pests.”
        • gtsop7 days ago
          Which is why I dislike non gplv3 open source software so much. It allows the pests to live on.
      • man48 days ago
        [dead]
      • donnachangstein8 days ago
        [flagged]
  • tiffanyh8 days ago
    Looks like OpenBSD maintains openrsync.

    https://github.com/kristapsdz/openrsync

    • emchammer8 days ago
      Apple could do worse than importing tools from the OpenBSD Project. Now there are several more commands that would be helpful...
  • DeathArrow8 days ago
    I see most people are discussing BSD VS GPL v3 and some wonder why on earth do we need more than one implementation of the rsync protocol.

    My view is that having more than one choice is good. It is good for both people and companies that we have BSD and Linux. It is good we have both BSD and GPL.

    Sometimes, having too many choices is bad because it leads to fragmentation, creates support and technical issues and leads to analysis paralysis and procrastination. But it's not the case here.

  • steeeeeve7 days ago
    This is one of those decisions that confuses you when you're on a mac and things just don't work how you'd expect.
  • pjmlp8 days ago
    As I keep saying, GNU/Linux had a lucky moment in sidestepping the whole issue of AT&T trying to get control back over UNIX; had this not taken place, everyone would be using classical UNIXes, with some BSD code running on them.

    You see the same on embedded as well, all new kids on the block as embedded FOSS OSes or bare metal libraries, are either Apache or MIT licensed.

  • shmerl8 days ago
    To not like GPLv3 one has to be a DRM proponent. That checks out for Apple.

    In practice though authors of GPLv3 see it as a clarification of GPLv2, i.e. they should have the same practical intent.

    • shagie8 days ago
      Linus’s views on GPLv3 would be something to watch https://youtu.be/PaKIZ7gJlRU?si=263GyZd9YaPu4-PC
      • shmerl8 days ago
        I agree that it could be a separate license, but that doesn't really contradict the point that GPLv2 was intended to prevent DRM scenarios that in practice violate the basic idea of being able to run the changes. It's a natural thing to want, even if Linus doesn't find it important.
  • firecall8 days ago
    I feel compelled to comment just to note the vintage WordPress Theme being used!

    Worth a click just to see how we used to live!

  • chasil8 days ago
    "...Apple decided that while it could comply with the terms of GPLv2 license with regards to rsync 2.x, it could not comply with the terms of GPLv3 license with regards to rsync 3.x."

    This is due to the software patent terms that appeared in GPLv3.

    https://news.ycombinator.com/item?id=21645618

    https://lobste.rs/s/8lbh1k

    https://archive.ph/AeMTz

    • IAmLiterallyAB8 days ago
      I think it's actually the anti-Tivoization stuff that they take issue with
      • chasil8 days ago
        The GPL wiki lists software patents prior to the Tivo clauses, and links to this:

        "Early drafts of version 3 of the GNU General Public License (GPLv3) contained several patent retaliation clauses that varied in scope, some of which were later removed due to concerns about their efficacy. The final published version of GPLv3 contains a patent retaliation clause similar to those in the Apache License and Mozilla Public License, which terminates rights granted by the license in response to litigation alleging patent infringement in the software."

        https://en.wikipedia.org/wiki/Software_patents_and_free_soft...

        That would be a significant factor in avoiding GPLv3.

  • linsomniac8 days ago
    Am I the only one that has had some hard to pin down problems with rsync? I'm excited about this because I'd love to have an alternative implementation. In particular, "rsync --compress" over SSH seems to have a rare and hard to track down issue. I've used rsync for decades doing hundreds of nightly system backups, and maybe once a month one of them goes out to lunch (IIRC it just hangs). The rarity of it makes it hard to isolate or come up with a reproduction. Removing the "--compress" resolves it. Anyone else ever come across something like that?
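
    For anyone else hitting this, the mitigations I'd try look roughly like the following (sketches only; they sidestep the hang rather than explain it, and the host and paths are placeholders):

        # Let ssh handle compression instead of rsync's --compress
        rsync -a -e "ssh -C" /src/ backuphost:/backups/src/

        # Or keep --compress but make a stalled transfer fail instead of hanging forever
        rsync -a --compress --timeout=600 /src/ backuphost:/backups/src/
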
  • INTPenis8 days ago
    This hardly matters as any power user will keep their own toolset maintained from brew I guess.

    And relying on open source CLI tools in Macintosh for helping end users is not a good idea.

    What this signals to me most of all is "oh we can't steal from GNU anymore so we'll steal from openbsd".

    Because even if it is a negligible part of the appeal of Macintosh computers, they still make an effort to ship these tools with their OS, and they make a lot of money doing it.

  • kt515 days ago
    It looks like there are quite a few glitches in Apple's openrsync. Deletion of two files resulted in four deletion entries.

        rsync -av --delete --dry-run a/ b/
    
        Transfer starting: 2 files
        folder1/file2: deleting
        folder1/file1: deleting
        ./folder1/file2: deleting
        ./folder1/file1: deleting
    
    Also, if a directory has extended attributes, `rsync -av --extended-attributes` resulted in `._` folders in the destination directory.

    It is not an acceptable replacement for the original rsync.
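
    For anyone who wants to reproduce the comparison, running the same dry-run through each binary side by side makes the difference easy to see (a sketch; the Homebrew prefix shown is the Apple Silicon default and may differ on Intel Macs):

        # Apple's bundled openrsync
        /usr/bin/rsync -av --delete --dry-run a/ b/

        # Upstream rsync from Homebrew
        /opt/homebrew/bin/rsync -av --delete --dry-run a/ b/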

  • DeborahEmeni_7 days ago
    Apple's shift away from GPL has been showing for years, so this doesn't surprise me. But dropping rsync 2.6.9 this late without full 3.x support feels like a step backwards, especially for anyone relying on metadata preservation. I get the licensing angle, but the lack of feature parity in openrsync might catch a lot of people off guard.
  • wewewedxfgdf8 days ago
    Good - MacOS had an old and crappy version of rsync
    • procaryote8 days ago
      They should just have updated to the recent version of rsync. Their GPL fears are overblown
    • dima558 days ago
      [flagged]
      • modeless8 days ago
        It's really not. The hardware is so good that I put up with it, but it is a bad OS in so many ways. My dream laptop would be a MacBook with a normal keyboard layout and running a well supported version of Linux.
        • wenc8 days ago
          macOS is a BSD.

          If you're used to Linux (I am) it feels there are lots of quality of life changes, but I realized it's because I'm used to Linux.

          The OS itself is fine.

          • modeless8 days ago
            The problem is not the kernel. It's the anti-user hostility to open source (GPL3 utilities e.g. modern Bash and rsync etc) and open standards (e.g. OpenGL and Vulkan) that stem from an over-active legal department. And the GUI that's stuck in the past. The top menu hasn't made sense since screens got bigger than the original Macintosh. The Dock has always sucked. Window management is primitive and saddled with interminable animations. And then there are the random unconfigurable things like the stupid camera gesture recognition popup or the inability to use natural scrolling on touchpads without reversing the mouse wheel too. MacOS needs an overhaul.
          • echelon8 days ago
            [flagged]
            • sbuk8 days ago
              > The GUI / UX is horrible.

              Coming from someone who would "prefer to be in Gnome or KDE", this is a hard criticism to take seriously. KDE is obviously designed by committee, and therefore its UI elements are all over the place, and Gnome is just horrible.

              > Finder is annoying as hell. The icons / layouts do not snap to resizing, proper navigation requires arcane keyboard shortcuts, it's difficult to open new instances in the expected way, tabs suck, navigation sucks. Finder is made for non-power users.

              Those "arcane keyboard shortcuts" have been around for 40 years. Or is it the Emacs keybinding that you don't like? Opening a new instance of a Finder window is a Cmd N away. New tab? Cmd T - I do not understand what is hard about that. Oh, and tabs are a feature of the system, and native apps (some non-native too) automatically implement them. It seems to me that someone hasn't RTFM'ed. Some "power user" you seem to be...

              The rest is rant that macOS isn't $MyFavouriteDistro and that is the only way computers should work.

              > I do not want to "define" terms with the shitty built in dictionary tool, yet that option eats up context window space in every tool.

              Oh no! A whole line in a context menu! The horror.

              • echelon8 days ago
                Everything you've said is an opinion except for one thing: you're telling the user they're wrong. Classic Apple behavior.
                • sbuk7 days ago
                  What makes my opinion any less valid than yours? Classic arrogant troll behaviour. I merely pointed out that you are exaggerating. Like I said, you want everything to work like it does with your favourite platform, and if it doesn’t then it’s clearly broken or shit or user hostile. Power user? Don’t make me laugh.
        • mrlonglong8 days ago
          They already do. Asahi Linux.
          • goosedragons8 days ago
            Still missing a bunch of features like USB-C displays. Isn't ready for newer CPUs yet either.
            • echelon8 days ago
              Not to mention that the leads are no longer working on it. Asahi Lina and Hector Martin are gone.
              • mrlonglong7 days ago
                People constantly badgering for things was why he quit.

                Patience goes a long way for a harmonious relationship.

                They were whiny shits.

                • goosedragons7 days ago
                  I thought he quit because he got in a row with a Linux kernel maintainer?
        • jbverschoor8 days ago
          cli? Just run orbstack..
      • dghlsakjg8 days ago
        Why?

        This is an included package from a 3rd party that was kept at a previous version for licensing reasons.

        If you want the latest version of rsync, you can just install it.

        Are you upset that MacOs doesn’t include a copy of Libre Office, or every other bit of 3rd party software?

        • yjftsjthsd-h8 days ago
          > Are you upset that MacOs doesn’t include a copy of Libre Office, or every other bit of 3rd party software?

          I'd be kind of unhappy if my OS shipped an old version of LO.

      • rbanffy8 days ago
        It’s adequate. You can use MacPorts to install a more modern Unix environment.

        Much better than Windows.

      • ndegruchy8 days ago
        Eh, I swap between the big three every day and they're all terrible in their own unique manners. macOS certainly has problems, and Apple's adversarial relationship with open source is not helping anything, but I wouldn't call macOS bad, just not suited for everyone's needs.
      • 8 days ago
        undefined
        • 8 days ago
          undefined
  • ak0072 days ago
    It broke the `--log-files` CLI switch that I used to back up my local files to shared storage :-(. Had to reinstall rsync via homebrew to get it back
  • larusso8 days ago
    How is it these days for developers actually banking on certain tools being present and working? What I mean is that Apple isn't shipping rsync etc. to help us developers, but because the system needs it. It was already mentioned that this issue also exists for the other CLI tools, because they ship the BSD, not the GNU, versions. Which brings me to POSIX, which was introduced back then to tackle exactly that: make sure a set of tools has defined options, behavior, etc. It seems to me that we've lapsed here, because more and more systems ship very custom setups which are not compatible. Or look at Linux and its binary-compatibility issues and the state of Flatpak vs Snap vs others. I fear it becomes harder and harder to create cross-platform solutions using system-provided packages. Writing a shell script is already a challenge.
  • yonran8 days ago
    I ran into this issue too since I implemented an --rsh wrapper script (based on https://github.com/kubernetes/kubernetes/issues/13776) and the options passed to ssh are different (samba rsync passes in -l user host command, openrsync passes in user@host command).

    > openrsync accepts only a subset of rsync’s command line arguments.

    I have not upgraded to macOS Sequoia yet, so I cannot verify, but from the source (https://github.com/apple-oss-distributions/rsync/blob/rsync-...) it appears that there is a wrapper: they ship both samba rsync and openrsync and fall back to samba if you use an unsupported option?
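
    Something like this sketch should cope with both calling conventions (untested; the -p 2222 is only a stand-in for whatever extra ssh options the wrapper exists to add):

        #!/bin/sh
        # rsync --rsh wrapper: normalize the two argument styles, then exec ssh
        if [ "$1" = "-l" ]; then
            # samba rsync style: -l USER HOST COMMAND...
            user="$2"; host="$3"; shift 3
            exec ssh -p 2222 "$user@$host" "$@"
        else
            # openrsync style: USER@HOST COMMAND...
            exec ssh -p 2222 "$@"
        fi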

    • stephenr8 days ago
      Interestingly, I see evidence of that wrapper on my Sonoma machine, but `/usr/libexec/rsync/` only contains the `rsync.samba` binary.

      On my Sequoia machines, there is no `/usr/libexec/rsync`, and the `rsync` binary at /usr/bin seems to just be the regular `openrsync` binary.

  • numbsafari8 days ago
    I stopped treating Mac OS as a Unix and I started sleeping at night. It’s a great platform for running a Unix in a VM.
    • latexr8 days ago
      macOS is a certified UNIX.

      https://www.opengroup.org/openbrand/register/

      Perhaps what you mean is that the definition of what it means to be a UNIX has changed for you and you prefer to run UNIX-inspired or UNIX-compatible systems.

      • numbsafari8 days ago
        I think "certified unix" is meaningless to most people who use "a unix".
        • wpm4 days ago
          I think a lot of people who use "a unix" just use "a linux"
    • pjmlp8 days ago
      It helps not to mistake UNIX with GNU/Linux, as first step into tranquility.
  • egorfine8 days ago
    Thanks for bringing attention to it. Did `brew install rsync` immediately, problem solved, fuck Apple.
  • p0w3n3d8 days ago
    Strange, I've always thought of GPLv3 as an upgrade (i.e. a better license) over GPLv2
    • wkat42428 days ago
      It's an upgrade for us, yes. But not for the companies that want to pretend they're doing FOSS. Like Apple these days.

      They had a good run where they really were open, like when they created OpenCL and Darwin was kept up to date. However these days most of their APIs are closed and Darwin is always lagging behind. Not that anyone actually uses that anyway, but still..

    • sbuk8 days ago
      It seems to me that it has actually been harmful to F/LOSS in the longer term. Where before, there were companies that were contributing to and using F/LOSS, GPLv3 put a stop to that virtually overnight. Now we see comments like "$MEGACORP should stop leeching" or "$MEGACORP should sponsor/contribute more" - which of course they did, with time and people, before the GPLv3 was released.
  • ndegruchy8 days ago
    Huh, interesting. I hadn't noticed when I upgraded, but I don't use many of the features of `rsync` to begin with. I ended up installing the real `rsync` shortly thereafter.
    • jethro_tell8 days ago
      Why?
      • ndegruchy8 days ago
        Force of habit. I usually install the set of tools I need from Homebrew because I know that Apple either ships the BSD variants or old versions.
  • NelsonMinar8 days ago
    Does openrsync work?

    The problem with Apple's ancient userspace is that so many of the utilities are outdated and don't support things like files bigger than 4GB. So switching to a tool updated in the last 19 years may be an improvement. But then, rsync is such a standard: is openrsync 100% compatible?

    The need to install and maintain Homebrew was a big part of why I switched from MacOS to Windows. WSL is a very good Unix environment, being just Ubuntu or Debian.

    • commandersaki8 days ago
      > don't support things like files bigger than 4GB

      citation needed

      • NelsonMinar7 days ago
        The specific example I was thinking of was unzip: MacOS couldn't handle zip files over 2GB. At least it couldn't back in 2016, with MacOS 10.11.4. I don't know if it's fixed now. If so, yay! But mainline unzip had fixed this in February 2005, so it took at least 11 years for that to propagate to Apple.

        I asked a couple of AIs just now and they think most MacOS utilities now work with big files.

    • procaryote8 days ago
      I imagine it will kind of work, with some weird traps, much like the bundled ancient bash, the buggy BSD grep, or the weird mktemp, etc.

      Mac OS userland is slowly rotting away because they're terrified of the GPL. It's strange, as installing a modern version of rsync seems like it would be specifically allowed by the GPL without "infecting" any other parts of the OS.

      • pasc18788 days ago
        The important word here is "seems". Where is the case law that backs up your statement? Apple does not want to be part of a legal case regarding the GPL.
    • alphabettsy8 days ago
      How is maintaining two operating systems simpler?
      • tymscar8 days ago
        To me this post could be filed under "dissonance" in the dictionary.

        Installing a package manager and a package from it is apparently harder than installing an OS and then installing a package from its package manager.

        And let's be honest here: it's not like Homebrew is a set-in-stone necessity.

        I use Nix, there's MacPorts, and you can build the package from source. All with less complexity than running what is, in the end, a whole OS in a VM.

        • hughw8 days ago
          I'm mystified at Homebrew's dominance. It seemed to come unglued for me every few months. I switched to MacPorts years ago and my CLI world has been stable and up to date.
          • pasc18788 days ago
            I suspect it's because the users and the original author did not understand multi-user UNIX, so they don't like the idea of having to use sudo, and they also use Apple's tools as much as possible rather than controlling the versions of the libraries they use, which is what commercial Unix users had been doing for 20+ years before. It also uses /usr/local, which is for locally compiled software, so you get in a mess if you have a locally compiled version of a library that is also in Homebrew.

            MacPorts, Nix, and Fink will build under a new user ID and install as root, as with any other Unix. Thus the build can be controlled to use only known versions of other libraries.

            Homebrew installs as the current user. Try using it when you do have multiple users on a Mac (which is uncommon).

            • latexr8 days ago
              > It also uses /usr/local

              That is only true on Intel—on Apple Silicon it uses /opt/homebrew—so it will only become less true as time goes on.

              The only reason it used /usr/local in the first place was that at the time many of the software packages it supported were broken if installed anywhere but /usr/local. In other words, Homebrew didn’t pick that location to annoy you or because they didn’t know better, but for a real practical reason.

              That location was always an optional default, you’re free to change it. If you do, everything will be compiled from source rather than using compiled binaries and won’t be offered support. But it does work.

              • pasc18787 days ago
                MacPorts and Fink managed fine in another directory for 8+ years before that.

                Anything using normal Unix build tools like autotools allows easy change of the install directory.

      • NelsonMinar8 days ago
        I don't think at all about the Windows host. It's purely a Linux system when I interact with it. Homebrew sort of gives you that too in its setup by virtue of putting all its stuff in a particular path.
      • emmelaich8 days ago
        You don't really do maintenance on either. It's just clicky stuff or apt|dnf update. Do your classic GUI app stuff in macOS and everything CLI-oriented or development-related in Linux.
  • system7rocks8 days ago
    In general, Apple has had such a positive influence on both hardware and software that I welcome their particular approach. It may not be ideal from a pure Linux perspective, but it does open the door to a variety of approaches. And truly, that is the key - there should always be multiple licenses and approaches to the work of open source.

    So, thank you, Apple.

    But please open source System 7.

  • ernst_mulder7 days ago
    Beware if you want to copy files with extended attributes, ACLs or resource forks to another Mac.

    The short syntax used to be "rsync -Eva"

    Since openrsync this is now "rsync -va --extended-attributes"

    I will need to go through a bunch of scripts now, I'm afraid. Bye bye Eva.
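
    One stopgap that might save editing every script (assuming openrsync identifies itself in its --version output, which it appears to; host and paths below are placeholders):

        # pick the xattr flag based on which rsync is answering
        # (note: stock rsync from Homebrew wants -X for xattrs; there -E means --executability)
        if rsync --version 2>/dev/null | grep -qi openrsync; then
            xattr_flag="--extended-attributes"
        else
            xattr_flag="-E"   # the old Apple-patched samba rsync spelling
        fi
        rsync -va "$xattr_flag" ~/Pictures/ othermac:Backup/Pictures/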

  • brian_herman8 days ago
    Can someone do an analysis of which macOS Sequoia packages are still GPL-licensed?
  • simongray8 days ago
    I wonder if this is why Time Machine started taking up all my CPU resources after I upgraded? I had to shut off automated Time Machine backups because it literally makes my M1 MBA unusable for a few minutes every hour.
  • DeathArrow8 days ago
    It's good to see BSD software thriving.
  • ikmckenz8 days ago
    And yet I don't see Apple on the list of contributors of the OpenBSD Foundation (https://www.openbsdfoundation.org/contributors.html), shame.
    • OsrsNeedsf2P8 days ago
      The BSD ecosystem benefits from macOS plenty. Apple doesn't need to be a monetary donor; I, for one, would be grateful if Apple used my tools at all.
      • WD-428 days ago
        Have some self respect lol
  • gausswho8 days ago
    Should I be embarrassed for my bash alias?

    alias rsy="rsync -avP"

    I do this with many Unix utils that have (IMO) unreasonable defaults.

    • emmelaich8 days ago
      I use -i (itemize) or even -ii (itemize everything) rather than -v. Also be aware that -a can conflict with other options -- in classic rsync gotcha fashion.
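
      For example, an itemized dry run first (drop -n once the list looks right; src/ and dst/ are placeholders):

          rsync -ain --delete src/ dst/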
  • SuperSandro20008 days ago
    Yeah, more MacOS utilities that accept slightly different arguments.
    • latexr8 days ago
      Technically it’s a BSD utility, so still open-source and you can contribute to it.
  • keepamovin8 days ago
    Why can’t the developers just release a licensable corporate version and Apple just agree to pay the corporate license fee?
    • DeathArrow8 days ago
      Because for most open source packages there are thousands of contributors and all have to agree?
    • saagarjha8 days ago
      Why would they want to?
      • keepamovin8 days ago
        Why would who want to? I consider each of the participants below. But first let me answer generally:

        Why? Because it's right to. If you create good karma, the world will get better. If you do bad things, the world and your world (ie, your karma) will get worse. Paying for software you use extensively is good karma. Not doing so is bad karma that erodes the world (and your world), because it severs the exchange of value and erodes the justice that arises from that, which then reflects back on you inevitably.

        For the participants in this archetypal case:

        Apple - because it's not right to not pay the developers of software you use a lot, even if it was released under permissive licenses. Apple paying rsync producers for their software is just and right. Apple wants to be a good company, so they want to do this, too. Plus they could get a tailored custom license that works for them, and gives them standard good rsync.

        The rsync developers - so they get the just reward for the value they produce, as is right and absolutely correct. They can choose to allocate that however they want, which is them expressing their good interest. What's good for them, is good for what they produce. Everything gets better. Happy cycle.

        Everybody else - to participate in that just and right exchange of value, which nourishes the good of both the software, the developers, Apple, and everybody else, supporting the karma of the world, rather than participating in an exploitative abuse that erodes it.

        More generally, using software extensively that is permissively licensed is not piracy, but it has the effect of piracy in that value consumed is severed from value rewarded to the producers. This is fundamentally exploitative and abusive, and in the limit it leads to poor software quality by eroding productive capacity.

        One caveat is large well-organized ad-hocracies that maintain giant FOSS projects, like the FOSS or FOSS-like Linux distributions. These are sort of hybrid volunteer, corporate volunteer forces that are large enough to make such fossonomics work. But there's plenty of hyperuseful software built by tiny, single-person or single-company teams for whom those economies don't work as they don't have that scale nor fractional-corp-labor.

        To conclude: normalize improving the world and spreading good karma by normalizing paying for the software you use. Even if a given developer team is yet to realize how to bank the value they created for you, as a savvy and responsible software consumer it's your responsibility to seek out and initiate opportunities to pay them, and not to seek out what you can take and exploit. If they make payment available, use it.

        Basically, it's fairly simple. Don't be evil. And respond to and create opportunities to do good!

        • ranger2078 days ago
          The rsync project has set a higher price than Apple seems willing to pay: allow users to run whatever they'd like on the hardware they've bought. Apple is free to pay that price and use rsync, but chooses not to
          • keepamovin7 days ago
            That's a funny way to look at it, but that's not a price, that's a policy. It doesn't have the freedom from obligation that an exchange at a price does.

            Though I totally get what you mean, and license terms are part of the cost. That's why a commercial offering, with terms that can be negotiated and a price set to compensate for them, seems like a great option: enough slack to capture the value for both sides.

  • palata8 days ago
    Am I the only one finding that "openrsync" sounds like "rsync" is not open source? I find it a bit confusing because rsync is GPL.

    Just like I would find it weird for a project to be called openlinux or librelinux...

    Still it's great to have multiple implementations, of course!

    • ronsor8 days ago
      It's called openrsync because it's developed for and by OpenBSD.
      • bentley8 days ago
        Fun fact, the “open” in OpenBSD doesn’t refer to open source licensing, but to an open development process, including the ability to anonymously check out the CVS repository without an account, which was a novelty in the 90s.
      • palata8 days ago
        I truly respect OpenBSD, but I hope they won't end up writing openopenssl :-)

        (I will admit: I had to check and openssh is actually "the OpenBSD Secure Shell" project, so I guess it makes sense :-) ).

        • yjftsjthsd-h8 days ago
          They called their openssl fork libressl - I assume because of exactly that naming conflict - but most of their "exports" follow the convention; OpenBSD, OpenSMTPD, OpenNTPD, OpenSSH. Possibly others that I don't know off the top of my head.
        • ronsor8 days ago
          Sorry, OpenBSD already wrote libressl and libtls
          • palata8 days ago
            Well "libressl" doesn't sound like an open-source rewrite of the proprietary "openssl" :-).

            I don't really get the point about libtls, though.

            But I get it, OpenBSD has been using Open* as a prefix for many projects, I didn't know it :-).

    • tredre38 days ago
      linux-libre is a thing: https://en.wikipedia.org/wiki/Linux-libre

      Their intent with the naming is indeed to say that Linux isn't open-source enough.

    • jitl8 days ago
      "free" != open

      open != "free"

      • thayne8 days ago
        The Venn diagram of licenses that are "free software" and licenses that are "open source software" is practically a circle.

        rsync and openrsync are both free software and open source software.

      • palata8 days ago
        Are you trying to imply that GPL is not open source?
        • jitl8 days ago
          one is software anarchism, the other is software communism
          • palata8 days ago
            Can you give me one open-source license that is not "free software"?
            • kube-system7 days ago
              According to .... the FSF? They disapprove of a lot of licenses that meet OSI's definition of open source.

              There's quite a number in this list that meet OSI's definition (or are even explicitly approved by OSI), but are explicitly labelled non-free by FSF:

              https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...

              Two clear examples are:

              Apple Public Source License 1.x

              NASA Open Source Agreement

  • banqjls8 days ago
    You don't have to embed a GitHub gist to show 5 lines of console output that are not even highlighted. You can use the HTML <code> tag.

    https://developer.mozilla.org/en-US/docs/Web/HTML

    • bentley8 days ago
      <samp> would be more semantically appropriate.
  • toutiao67 days ago
    [dead]
  • tonetheman8 days ago
    It has differences in command-line options and behavior. We discovered this last week.
  • maggiepatel3758 days ago
    [flagged]
  • wkat42428 days ago
    Lol. Apple's war on GPL 3 again. Same reason they replaced bash with zsh.

    I'm glad I'm no longer using their stuff.

    And yes I would need a complete implementation obviously.

  • procaryote8 days ago
    I already replace the bundled rsync on the Mac with the proper one, as the bundled one is ancient and missing some features I like. Same for grep, awk, sed, and find.
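
    Roughly (Homebrew formula names, from memory):

        brew install rsync grep gawk gnu-sed findutils
        # the GNU tools install with a g prefix (ggrep, gsed, gfind) unless you
        # put their libexec/gnubin directories first in PATH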

    macOS is getting a bit worse every release, clearly trending towards an iOS world where we have to ask Apple for permission to run anything, even in a sandbox.

  • brunorsini8 days ago
    After decades using rsync for my local backups, I recently switched to ChronoSync Express. It's simple to use, with a sensible GUI and well-laid-out customization options.

    And btw, it's included on Setapp subscriptions.