36 points by accrual 4 hours ago | 7 comments
  • c0nsumer an hour ago
    Ha. That's a throwback.

    I did the same thing, but with a more detailed writeup, in 2009: https://nuxx.net/blog/2009/12/06/time-machine-for-freebsd/

    It was really handy, but I now use borg as it just works better.

  • nightshift1 3 hours ago
    Seems similar to https://rsnapshot.org/
    • pmontra an hour ago
      I've been using rsnapshot for backups for so many years that I've forgotten when I started. I keep the last 14 snapshots.
    • kunjanshah 2 hours ago
      This is the more robust way to go. It uses rsync under the hood (minimal config sketch below).
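
      A minimal rsnapshot.conf sketch of that kind of retention (paths and values are illustrative, not from this thread; note that rsnapshot requires tabs, not spaces, between fields):

          config_version  1.2
          snapshot_root   /backups/
          retain          daily   14
          backup          /home/  localhost/

      A daily cron entry then runs "rsnapshot daily", and the oldest of the 14 snapshots is rotated out automatically.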
  • orev 3 hours ago
    The original post that introduced this idea to the general public: http://www.mikerubel.org/computers/rsync_snapshots/

    I’m sure others will chime in that they used hard links like this before then; however, as noted on that page, it’s the one that made the idea popular enough that rsync was updated to support it natively (a sketch of the rotation follows).
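
    For reference, the core of the rotation on that page is roughly this (directory names hypothetical): hard-link the newest snapshot, then rsync over it, so unchanged files keep sharing an inode while changed files get fresh copies:

        rm -rf backup.3
        mv backup.2 backup.3
        mv backup.1 backup.2
        # hard-link copy: costs directory entries, not data
        cp -al backup.0 backup.1
        # rsync writes changed files to a temp name and renames them into
        # place, which breaks the old hard link; unchanged files stay shared
        rsync -a --delete /home/ backup.0/

    The native rsync support mentioned above is --link-dest, which hard-links unchanged files from a previous backup directly and makes the cp -al step unnecessary.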

  • nine_k 3 hours ago
    If we were talking about a Linux box, one prudently running ZFS, or XFS on top of LVM, it would be possible to take a snapshot before the diffing and sending, so that the copy would be truly point-in-time (rough sketch below). I don't know whether whatever filesystem macOS uses supports snapshots.

    Otherwise, I think restic or kopia are better for proper backups, and Syncthing for keeping a mirror copy. But the simplicity of this script is charming.
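
    (For what it's worth, APFS on macOS does support snapshots.) On ZFS, the snapshot-then-sync idea might look roughly like this; the pool, dataset, and destination names are hypothetical:

        SNAP="backup-$(date +%F)"
        zfs snapshot tank/home@"$SNAP"
        # snapshots are exposed read-only under .zfs/snapshot/
        rsync -a "/tank/home/.zfs/snapshot/$SNAP/" /mnt/backup/
        zfs destroy tank/home@"$SNAP"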

  • hughc 3 hours ago
    I've been using a bash encapsulation of this for a long time; it works great.

    https://github.com/laurent22/rsync-time-backup

  • mrtesthah 3 hours ago
    Isn’t restic better for backups overall?

    Anyone have a good script for macOS triggered by launchd, ideally something that uses FSEvents to check for directory changes?

    • crossroadsguy 2 hours ago
      Yes, it is, along with a few other great cross-platform FOSS tools built just for backups; they all do it really well. But most of them do periodic scans rather than file-change-triggered backup runs, which I assume is what you're looking for, given your second sentence.
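
      On the launchd part of the question: launchd's WatchPaths key starts a job when a watched path changes (it doesn't recurse into subdirectories, so it's only a rough stand-in for a real FSEvents watcher). A hypothetical agent plist, with the label, script path, and watched directory made up for illustration:

          <?xml version="1.0" encoding="UTF-8"?>
          <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
          <plist version="1.0">
          <dict>
            <key>Label</key>
            <string>net.example.backup</string>
            <key>ProgramArguments</key>
            <array>
              <string>/usr/local/bin/backup.sh</string>
            </array>
            <key>WatchPaths</key>
            <array>
              <string>/Users/me/Documents</string>
            </array>
            <key>ThrottleInterval</key>
            <integer>300</integer>
          </dict>
          </plist>

      Load it with launchctl load ~/Library/LaunchAgents/net.example.backup.plist; ThrottleInterval keeps a burst of file changes from triggering back-to-back runs.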
  • EGreg 2 hours ago
    Why make hard links when you can use rsync or Syncthing to just make an actual copy on an external hard drive, e.g. via Wi-Fi or remotely?
    • c0nsumer an hour ago
      Hard links are file-level dedupe.

      And once all references to the inode are removed (by rotating out old backups), the space is freed. So no maintenance of the deduping is needed; it's all just part of how the filesystem and --link-dest work together.

    • kej 2 hours ago
      In this script, the hard links point to the most recent backup before the one happening now, so you aren't storing full copies of files that haven't changed between backups.
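
      A minimal sketch of that pattern, with hypothetical dates and paths:

          # Unchanged files are hard-linked against yesterday's snapshot, so the
          # new directory looks complete but only changed files use new space.
          rsync -a --delete \
            --link-dest=/backups/2024-01-01/ \
            /home/ /backups/2024-01-02/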