Things that are not cozy:
1) There's no way to monitor your monthly spend per host, credit left on the account, etc., apart from logging into your account in a browser and manually keeping a spreadsheet. There's no web API for any of it. You get an email warning when you have about 7 days of credit left. That's it.
2) Nothing is "a precious few megabytes" anymore. What seems like a negligible monthly spend at first can quickly creep up on you, and soon you're spending highly non-trivial amounts. Which you might not notice, due to 1), unless you are diligent in your accounting.
3) tarsnap restores are slow. Really, really slow. A full restore can take days if you have non-trivial amounts of data (and make sure you have enough credit in your account to pay for that server-to-client bandwidth!). My understanding is that throughput is gated by your latency to the AWS datacenter where tarsnap is hosted. Outside of North America you can be looking at nearly dial-up speeds even on a gigabit link.
Again, a problem that can surprise you at the most inconvenient time. Incremental backups in a daily cronjob tend to transfer very small amounts of data, so you won't notice the slowness until you try to do a full restore. And you generally don't test that very often because you pay for server-to-client transfers.
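To put rough numbers on why latency rather than link speed ends up being the bottleneck - a back-of-envelope sketch, assuming the client keeps some fixed window of data in flight per round trip; the window size here is made up for illustration, not a documented tarsnap figure:

  # If at most `window` bytes are in flight per round trip, throughput
  # can never exceed window / RTT, no matter how fat the pipe is.
  window_kib = 64  # assumed in-flight window; illustrative only
  for rtt_ms in (10, 100, 250):
      cap_kib_s = window_kib / (rtt_ms / 1000)
      print(f"RTT {rtt_ms:>3} ms -> at most {cap_kib_s:,.0f} KiB/s")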
There are some workarounds for 3) and there's a FAQ about it, but look at the mailing list and you'll see that it's something that surprises people again and again.
Amazon has Pre-Pay in a semi-open beta.
CloudFront has 1 TB/month of free egress, knocking a large chunk off a restore's cost. (Note: you should either have encrypted your stuff yourself, and/or have confirmed that S3 authorization/access control still works over CloudFront.)
At what seems to be <$2/month per TB ($1/TB for Glacier Deep Archive + 9 cents/GB for metadata on S3 frequent access), no other solution comes close. The big issue is the lump cost of a restore, which is quickly worn down by being >$5/TiB/month cheaper than anybody else.
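Running those numbers as quoted (the poster's prices, not re-checked against current AWS pricing; the metadata-to-data ratio is my assumption):

  # Cost of 1 TB: bulk data in Glacier Deep Archive, metadata in S3.
  deep_archive_per_tb_month = 1.00  # $/TB-month, as quoted
  s3_per_gb_month = 0.09            # $/GB-month for the metadata tier, as quoted
  metadata_gb = 10                  # assumed: metadata ~1% of the data size
  monthly = deep_archive_per_tb_month + metadata_gb * s3_per_gb_month
  print(f"~${monthly:.2f}/TB-month")  # ~$1.90, i.e. < $2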
Tarsnap, in contrast, has an explicit first-class ability to prevent a compromised client from damaging old backups.
It’s pretty simple to enable versioning and object lock on your S3 bucket, but it is another step if you’re using restic. Sure, if you just want all of that taken care of for you, you can use tarsnap, but you’re paying a 5x+ premium for it.
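For concreteness, that "another step" looks roughly like this - a boto3 sketch, with the bucket name a placeholder; note that S3 Object Lock can only be switched on when the bucket is created, and enabling it implies versioning:

  import boto3

  s3 = boto3.client("s3")
  bucket = "my-restic-repo"  # placeholder

  # Object Lock must be requested at creation time; it also enables versioning.
  # (Outside us-east-1 you'd also pass a CreateBucketConfiguration.)
  s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

  # Default retention: nothing in the bucket can be deleted or overwritten
  # for 30 days, even by a credential that can otherwise write to it.
  s3.put_object_lock_configuration(
      Bucket=bucket,
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
      },
  )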
The other nice thing about restic is that since it's just the client-side interface, it allows others to provide managed storage. Borgbase.com is a restic-supported storage backend that offers append-only backups, and it's cheaper than tarsnap.
https://restic.readthedocs.io/en/stable/030_preparing_a_new_...
I would like to see an explicit discussion of which permissions are needed for which operation. I would also like to see a clearly specified model in which backups can be created in a bucket with less-than-full permissions, such that, even after an active attack by an agent holding those same permissions, one can enumerate all valid backups in the bucket and is guaranteed to be able to correctly restore any of them, as long as one can figure out which backup one wants.
Instead there are random guides on medium.com describing a configuration that may or may not have the desired effect.
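For what it's worth, the shape those guides converge on is an IAM policy that grants list/read/write but withholds delete - a sketch of the idea, untested, with the bucket name a placeholder; whether this action list actually matches restic's access patterns is exactly what I'd want the docs to spell out:

  import json

  # Sketch of an "append-only" policy for a restic S3 repository.
  policy = {
      "Version": "2012-10-17",
      "Statement": [
          {   # restic needs to list the repo to find snapshots and packs
              "Sid": "ListRepo",
              "Effect": "Allow",
              "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
              "Resource": "arn:aws:s3:::backup-bucket",
          },
          {   # read and write objects, but no s3:DeleteObject anywhere
              "Sid": "ReadWriteNoDelete",
              "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject"],
              "Resource": "arn:aws:s3:::backup-bucket/*",
          },
      ],
  }
  print(json.dumps(policy, indent=2))

(As discussed further down the thread, withholding delete entirely collides with restic's lock cleanup, which is part of why a documented model matters.)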
If you don’t understand S3 or don’t want to learn, then that’s fine, and you can pay the premium to tarsnap for simplifying it for you. But that’s your choice, not an issue with restic.
If you think differently, have you submitted a PR to restic’s docs to add the information you think should be there?
I think people are frequently trapped in some way of thinking (not sure exactly what) that doesn't allow them to see storage as anything other than block-based. They repeatedly try to reduce S3 to LBAs, or POSIX permissions (not even modern ACL-type permissions), or some other comparison that falls apart quickly.
The best I've come up with is "an object is a burned CD-R." Even that falls apart, though.
For that matter, suppose an attacker modifies an object and replaces it with corrupt or malicious contents, and I detect it, and the previous version still exists. Can the restic client, as written, actually manage the process of restoring it? I do not want to need to patch the client as part of my recovery plan.
(Compare to Tarsnap. By all accounts, if you back up, your data is there. But there are more than enough reports of people who were unable to usefully recover that data because the client is unbelievably slow. The restore tool needs to do what the user needs it to do in order for the backup to be genuinely useful.)
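For what it's worth, recovering the raw object doesn't have to go through the restic client at all; with versioning enabled, the plain S3 API can promote a previous version back to current (a boto3 sketch; bucket and key are placeholders, and picking the "known good" version is hand-waved here):

  import boto3

  s3 = boto3.client("s3")
  bucket, key = "backup-bucket", "data/ab/abcd1234"  # placeholders

  # List all versions of the tampered object, oldest to newest.
  versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
  versions.sort(key=lambda v: v["LastModified"])
  good = versions[-2]  # assume the second-newest version predates the attack

  # Copying an old version onto the same key makes it current again.
  s3.copy_object(
      Bucket=bucket,
      Key=key,
      CopySource={"Bucket": bucket, "Key": key, "VersionId": good["VersionId"]},
  )

That only addresses the object layer, though; whether restic then sees a consistent repository is the real question.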
Tarsnap's deduplication works on the archive level, not on the particular files etc within the archive. Someone can set up a write-only Tarsnap key and trust the deduplication to work. A compromised machine with a write-only Tarsnap key can't delete Tarsnap archive blobs, it can only keep writing new archive blobs to try to bleed your account dry (which, ironically, the low sync rate helps protect against - not a defense for it, just a funny coincidence).
restic by contrast does do its dedupe at the file level, and what's more it seems to handle its own locks within its own files. Upon starting a backup, I observe restic first creates a lock and uploads it to my S3 compatible backend - my general purpose backups actually use Backblaze B2, not AWS S3 proper, caveat emptor. Then restic later attempts to delete that lock and syncs that change too to my S3 backend. That would require a restic key to have both write access and some kind of delete access to the S3 backend, at a minimum, which is not ideal for ransomware protection.
Many S3 backends including B2 have some kind of bucket-level object lock which prevent the modification/deletion of objects within that bucket for, say, their first 30 days. But this doesn't save us from ransomware either, because restic's own synced lock gets that 30 day protection too.
I can see why one would think you can't get around this without restic itself having something to say about it. Gemini tells me that S3 proper does let you set delete permissions at a granular enough level that you can tell it to only allow delete on locks/, with something like
# possible hallucination.
# someone good at s3 please verify
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDeleteLocksOnly",
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::backup-bucket/locks/*"
    }
  ]
}
But, I have not tested this myself, and this isn't necessarily true across S3-compatible providers. I don't know how to get this level of granularity in Backblaze, for example, and that's unfortunate because B2 is about a quarter the cost of S3 for hot storage.

The cleanest solution would probably be to have some way for restic to handle locks locally, so that locks never need to hit the S3 backend in the first place. I imagine restic's developers are already aware of that, so this seems likely to be a much harder problem to solve than it first appears.

Another option may be to use a dedicated, restic-aware provider like BorgBase. It sounds like they handle their own disks, so they probably already have some kind of workaround in place for this. Of course, as others have mentioned, you may not get as many nines out of BB as you would out of one of the more established general-purpose providers.
P.S.: Thank you both immensely for this debate, it's helped me advance the state of my own understanding a little further.
restic, and my own computers and storage, and the occasional rented device (VPS or similar, typically)
I find that the hassle of setting up my own stuff is still preferable to having to worry about managing bills, subscriptions, and third parties just changing their policies.
I'm keeping a close eye on plakar in this space. Does anyone have experience with it and could share?
It looks like much of this, for both Colin and us, could be solved by moving away from AWS.
Using something like restic or borgbackup+rclone is pretty much the same experience as tarsnap, but at a fraction of the price.
$3000 per TB-year is accurate to my knowledge, and yes, it is at least one, and probably two, orders of magnitude more than what you can get from more general-purpose systems. Backblaze B2 is $72 per TB-year; AWS Glacier is $12 per TB-year, I believe; purchasing two 20 TB Seagate drives for $300 apiece, mirroring them, and replacing them every 3 years gives you about $10 per TB-year (potentially - most of us don't have 20 TB to back up in our personal lives). Those are the best prices I've been able to find with some looking [2].
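The arithmetic behind those figures, in case anyone wants to tweak the assumptions (prices as quoted above, not re-verified):

  # $/TB-year from the quoted rates.
  tarsnap = 0.25 * 1000 * 12       # $0.25/GB-month  -> $3,000/TB-year
  b2      = 0.006 * 1000 * 12      # $0.006/GB-month -> $72/TB-year
  glacier = 0.001 * 1000 * 12      # ~$0.001/GB-month -> ~$12/TB-year
  drives  = (2 * 300) / (20 * 3)   # mirrored 20 TB pair, 3-year life -> $10/TB-year
  print(tarsnap, b2, glacier, drives)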
To me, when I was building out the digital resiliency audit, the pricing and model just seemed to say that tarsnap was for very specific kinds of critical data backups, and was not a great fit for general-purpose stuff. Like a lot of other people here, I also have a general-purpose restic-based 3-2-1 backup going for the ~150 GB in /home I back up. [3] My use of tarsnap is partly a cheap hedge, against issues with restic, Backblaze B2, systemd, etc., for the handful of bytes of data I genuinely cannot afford to lose.
[1]: https://hiandrewquinn.github.io/tarsnap-calculator/
[2]: https://andrew-quinn.me/digital-resiliency-2025/#postscript-...
[3]: https://andrew-quinn.me/digital-resiliency-2025/#general-bac...
All the granular calculations (picodollars) on storage used plus time are fine. But tarsnap was always very expensive for larger amounts of data, especially data that cannot be well deduplicated.
> Tarsnap uses a prepaid model based on actual usage:
> Storage: 250 picodollars / byte-month of encoded data ($0.25 / GB-month)
> Bandwidth: 250 picodollars / byte of encoded data ($0.25 / GB)
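Spelled out, since picodollars per byte take a moment to parse:

  # 250 picodollars/byte-month, converted to dollars/GB-month.
  print(250e-12 * 1e9)  # 0.25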
I use Backblaze B2 myself for most of my general-purpose backup needs. It's actually $6/TB-month, I believe.
Tarsnap fills but one niche in my overall system. It's a very important niche for which I haven't found any other providers who do anything similar (keyfiles, prepaid, borderline anonymous etc), but it's not where I store the vast majority of my stuff.
One use case: I don't like the idea of having any accounts at all which I log into without the aid of a password manager. That creates a bootstrapping problem - how am I supposed to log into Google Drive to get my Google Drive password? A prepaid keyfile-based model is one particularly robust way of solving this. You stick your e.g. 100 kB password database in there, print out and shred the keyfile, stick the printout in a fireproof safe, and be virtually certain that whatever you put in Tarsnap has been untouched however many years you come back to it later. Print it on archival paper with some silica gel packets and it might survive for millennia in your weird subterranean vampire family castle.
"The business won't survive that long." I'm not so sure. Its ongoing costs appear minimal, and it generates eye watering amounts of float. $5 paid today is >$200 fifty years from now when compounded at 8% real interest. That very fact makes it much more likely that Tarsnap actually will survive for those 50 years, which should make us more likely to trust it, which... You see where this is going. This is one of those things where aggressively pricing too close to the bare metal costs might actually be a bad thing to a very important subset of users. One might even make the argument that, if the margins are as good as I'm supposing they are, then depending on the goals of the founder, Tarsnap is more likely to outlive S3 than S3 Tarsnap.
But again: Primarily a hobby.
https://support.google.com/accounts/answer/1187538?sjid=3244...
Print those and your password, and stick the printout in a fireproof safe.
Caution may be justified when it comes to doing this for something with as wide a surface area as a Google account. For me, if I'm going to have to compromise on 2FA somewhere anyway, I might as well go whole hog and get an honest-to-goodness keyfile.
Maybe it's good for storing stuff that's illegal to possess?
If there were a simple but "solid" GUI backup tool with (true) PAYG, I'd migrate away from Tarsnap, but there isn't one.
And Restic is good quality software.
You might be tempted to think: it's a popular service, it can't be that bad.
But it really can be, and if you haven't tried it yourself, you'll only find out when you need it. Which could be way too late.
I'm backing up about 8TiB of data nightly using BorgBackup[0] + InterServer[1] and pay $240/yr.
This gives me differential encrypted rotating backups that are 100% mine and do not lock me into any specific storage vendor.
Running tarsnap with --dry-run and --print-stats will tell you the compressed size of your deduplicated data, which gives you the upload cost and the first-month cost. 4 GB of files usually works out to 3 GB of deduped/compressed archive data for most people, less for people with many similar files.
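The dry-run idiom, for reference (paths and archive name are placeholders; the flags are from the tarsnap man page, but verify against your version):

  import subprocess

  # Nothing is uploaded with --dry-run; --print-stats reports the
  # post-deduplication, post-compression size you would actually pay for.
  subprocess.run(
      ["tarsnap", "-c", "--dry-run", "--print-stats", "-f", "test", "/home/me"],
      check=True,
  )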
The cheapest I can find for a consumer buying e.g. 20TB Seagate hard drives and rotating them every 3 years or so is about $5 per TB-year, without mirroring. So if raw storage cost optimization is what you're after that's what I'd go for to start. Even AWS Glacier doesn't come close to that, although you do get other things with it.
Though you want at least one of your backups to be off-site, and you want your backups to be robust, so comparing raw hard drive cost seems strange: if you run the backup server yourself you need a decent RAID, and for the off-site backup you need to compare against, I don't know, S3 storage costs or similar.
It's still more expensive, but if you only need to back up some folders of documents or similar, it might be the simpler and cheaper solution anyway.
If you want to back up huge photo/video/VM image collections, it probably isn't the best choice for you.
I think tarsnap was a good service about 20 years ago when it had little competition, but using it now makes very little sense IMHO. You can donate to its awesome FreeBSD maintainer, or to FreeBSD, directly.
Also you can back up to the hard drive under your friend's bed, and they can back up to the hard drive under your bed.
If you're even slightly technical, or have a friend who is, I'd recommend both of you buying the cheapest Kirkwood NASes you can find on ebay, throwing Debian on them, and becoming each other's backup buddies.
Borgbase had a week-long (IIRC) outage due to a failed attempt to add new drives to an array. As far as I know they never published a post-mortem on it and have never discussed how they're going to improve their disaster recovery so it can't happen again. It's difficult to recommend them when they could leave you without working backups for an entire week.
I can't read the founder's mind, but if I were them I would probably have some Kongō Gumi style designs on making it a 1000-year company just because that's a fun intellectual exercise. [1]
[1]: https://www.tofugu.com/japan/oldest-businesses-in-japan/
The only real security feature missing is write-only access to the repository (Borg in theory supports it - see the sketch at the end of this comment - but in practice it's impossible to use in a way that prevents a compromised host from deleting its own backups, the way tarsnap does).
In theory it is less reliable than tarsnap (a single copy on a Hetzner drive, compared to AWS S3).
Storage Box is significantly cheaper for any real-life backup size, in my experience.
Borg requires more work to set up and configure compared to tarsnap. There's typically some scripting involved that's unique to your setup, and I found that I had more documentation to study before I understood how to use Borg correctly.
I know a few people who have a very low opinion of Borg's code quality and stay away from it because of that (I haven't studied it first-hand).
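The "in theory" part looks like this - a forced-command sketch along the lines of Borg's own docs (the key and paths are placeholders); the practical catch is that a client with full access will still apply the logged deletions the next time it prunes or compacts, which is the hole alluded to above:

  # Line for the server's ~/.ssh/authorized_keys: this client can only
  # append to the repository, never rewrite or delete existing segments.
  line = (
      'command="borg serve --append-only --restrict-to-path /backups/host1",'
      'restrict ssh-ed25519 AAAA... backup@host1'
  )
  print(line)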
Storing one terabyte of data in tarsnap costs $250 per month.
Also cozy if your data fits. No monthly fee, just the cost of new/recycled thumbies
It also doesn't require a UL Class 125 fireproofed safe to survive a house fire, but that's splitting hairs and getting into hobbyist territory.
Tarsnap is very resilient; it doesn't do a lot, but what it does is solid. The mailing list is helpful, and you can reach out to its creator directly for prompt, useful responses if there's something you don't want on the mailing list (where names and email addresses are in the clear; use marc.info to search it).
But if you are trying to start with Tarsnap, you should note a few things from the beginning:
- If you are looking for a completely (or even almost) frictionless backup experience - this is not it. Also, it doesn't have tons of features - which might be a good thing, but you should know and accept it.
- If you're used to tools like Backblaze, CrashPlan, Restic, or Borg, the limited feature set might frustrate you.
- Knowing this in advance will help you set expectations within its feature set. The doc/man pages are great resources once you actually read them.
- It has some quirks (which may or may not be bugs) that require tinkering with your settings, env, etc. Getting your hands dirty with sample data first is a great way to get to know Tarsnap.
- Set up your logs and scripts such that you can know/debug things later.
- Naming of your archives is important.
- You'll need at least two keys: a master key with read, write, and delete access to your archives/Tarsnap storage, and an un-passphrased regular key with only "write" permission for backups (see the key-generation sketch after this list). Keep both safe, especially the master key. There's "nuke" as well…
- I used its GUI for the longest time but would absolutely not recommend it. It hides a lot, which might come back to bite you, and is not the most polished tool of all. Its last release was 7 years ago.
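The key split mentioned above, as a sketch (filenames are placeholders; tarsnap-keymgmt and its -r/-w/-d permission flags are from the tarsnap man pages - verify against your version):

  import subprocess

  # Derive a write-only key from the master key that tarsnap-keygen created.
  # A machine holding only this key can create archives but not read,
  # list, or delete them.
  subprocess.run(
      ["tarsnap-keymgmt", "--outkeyfile", "write-only.key", "-w", "tarsnap.key"],
      check=True,
  )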
OP says:
> … If you use it solely to back up the few megabytes of “crown jewels” data we all have lying around
and I actually use Tarsnap exclusively for my "crown jewels," which are in the low three-digit megabytes.
- So, unlike what many say, I do believe it is costly at today's storage/bandwidth prices, especially if your data isn't very compressible. Tarsnap's compression is great, but not magic. However, it doesn't cost an astronomical amount either. Killer de-dupe, though.
- You must have a plan for what and how much you want to back up, and the expected growth of that data.
- It is definitely not a "fire and forget" tool (and you should never forget your backups anyway).
I was frustrated with it until I gave up on the GUI, embraced the CLI/cron, reduced the amount of data being backed up and excluded (using copy and delete) some data being stored, and accepted what it can't do. Which is not really great but that's what it is.
Glaring omissions, IMHO: very few maintenance features (the scripts listed are not easy to work with), (almost?) no way of knowing what file changed in a certain archive, slow restores (may matter for a bigger data set), and the lack of an updated, polished GUI tool which I think is very important for personal data backup.
My request to cperciva would be: please consider this - while it's inspired by tar and stays close to it, it's also a cloud backup tool. Treating it a bit more like a modern cloud backup tool could be useful. Just my two cents.