This solution instead seems to rely on using 1 MB blocks and storing those directly as objects, doing away with the intermediate caching and indirection layer. A larger number of objects, but less local overhead.
Delphix's rationale for 16 kB blocks was that their primary use case was PostgreSQL database storage. I presume this is geared toward other workloads.
And, importantly since we're on HN: Delphix's user-space service was written in Rust, as I recall; this one uses Go.
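A minimal sketch of the "blocks stored directly as objects" idea mentioned above: with fixed-size 1 MB blocks, a pool byte offset maps straight to an object key with no extra indirection layer. The bucket prefix and key format here are made up for illustration, not taken from the actual project.

```go
package main

import "fmt"

// blockSize is the fixed block size assumed in the comment above: 1 MiB.
const blockSize = 1 << 20

// objectKey returns the (hypothetical) S3 key holding the block that
// contains the given pool byte offset. With fixed-size blocks, the
// mapping is pure arithmetic: no cache or translation table needed.
func objectKey(bucketPrefix string, offset uint64) string {
	block := offset / blockSize
	return fmt.Sprintf("%s/block-%08d", bucketPrefix, block)
}

func main() {
	// Offsets 0 and blockSize-1 land in block 0; blockSize starts block 1.
	fmt.Println(objectKey("pool0", 0))          // pool0/block-00000000
	fmt.Println(objectKey("pool0", blockSize-1)) // pool0/block-00000000
	fmt.Println(objectKey("pool0", blockSize))   // pool0/block-00000001
}
```

The trade-off the comment points at falls out of `blockSize`: smaller blocks (Delphix's 16 kB) mean many more objects and requests per byte, larger ones (1 MB) mean fewer objects but coarser reads and writes.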
Why would I use ZFS for this? Isn't the power of ZFS that it's a filesystem with checksums and features like encryption?
Why would I use it for S3?
You have it the wrong way around. Here, ZFS uses many small S3 objects as the storage substrate rather than physical disks. The value proposition is that this should definitely be cheaper, and perhaps more durable, than EBS.
See s3backer, a FUSE implementation of a similar idea: https://github.com/archiecobbs/s3backer
See the prior in-kernel ZFS work by Delphix, which AFAIK was closed down by Delphix management: https://www.youtube.com/watch?v=opW9KhjOQ3Q
BTW this appears to be closed too!
`zfs share` already covers SMB and NFS (via the `sharesmb` and `sharenfs` dataset properties).
Out of ignorance I'm not sure what the use case is, but I guess one could `zfs send` backups to S3 in a very neat manner.
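For what it's worth, you can already stream a snapshot into S3 today with stock tooling, no S3-backed pool required. A rough sketch with made-up dataset and bucket names; `DRY_RUN=1` (the default here) just prints the pipeline instead of running it, since `zfs send` needs a real pool:

```shell
#!/bin/sh
# Hypothetical names for illustration only.
DRY_RUN="${DRY_RUN:-1}"
SNAPSHOT="tank/data@nightly"                   # hypothetical snapshot
DEST="s3://my-backup-bucket/tank-nightly.zfs"  # hypothetical bucket/key

if [ "$DRY_RUN" = "1" ]; then
    echo "zfs send $SNAPSHOT | aws s3 cp - $DEST"
else
    # 'aws s3 cp - <url>' reads the object body from stdin,
    # so the snapshot stream never touches local disk.
    zfs send "$SNAPSHOT" | aws s3 cp - "$DEST"
fi
```

The difference with an S3-backed pool would be that `zfs send | zfs recv` into it gets you a browsable, incrementally updatable replica rather than one opaque stream object.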