9 points by bucket_ 13 days ago | 6 comments
  • bucket_ 12 days ago
    Update: Just squashed a race condition in the OTP worker that was affecting some early logins. If you had trouble an hour ago, try again! The source on the Git repo has also been updated to align with the latest release - thanks to mdmelo for reporting that.
  • cbenson0707 12 days ago
    How does this compare to something like magic-wormhole?

    Is the CLI able to write to stdout (to pipe into other commands, like tar/ssh)?

    • bucket_ 12 days ago
      Great question. Magic-wormhole is perfect for live 'hand-off' sharing, and there are plenty of use cases where that’s exactly what you need. I've used it too many times to count.

      The difference here is mainly architectural. Wormhole follows a more traditional P2P sharing workflow and is strictly synchronous — it requires the receiver to be connected at the same time to complete the handshake. Bucket is completely asynchronous and state-agnostic: recipients don’t have to be online or even install anything to receive a file. They can simply click the link your CLI generates and enter the secret key (within the file expiration period).

      Your question about writing to stdout is also a good one. Currently, the CLI (client.go) forces a write to disk, but I am actively working on stdin/stdout support. It will land in a release soon. The intent is to support commands like:

          $ bucket pull --secret "$SECRET" bkdf1e848ea23e | ssh rocky@10.10.10.2 "cat > redhat92_minimal.iso"
      
          $ ssh rocky@10.10.10.2 "tar zcvf - /var/ /etc/ /usr/" | bucket push --filename="10.10.10.2_backup.tgz"
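
      The stdout path itself is a small change in client.go: treat "-" as the output target and stream to os.Stdout instead of creating a file. A rough sketch of the idea (function and names are illustrative, not the final API):

          package client

          import (
              "io"
              "os"
          )

          // Sketch only: stream a pulled object either to disk, or to stdout when
          // the output path is "-", so it can be piped straight into ssh/tar.
          func writePull(body io.Reader, outPath string) error {
              if outPath == "-" {
                  _, err := io.Copy(os.Stdout, body)
                  return err
              }
              f, err := os.Create(outPath)
              if err != nil {
                  return err
              }
              defer f.Close()
              _, err = io.Copy(f, body)
              return err
          }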
  • dripNripple 13 days ago
    Love it!! Easily helped me move some large files! Super easy to use as well.
  • breadhat91 12 days ago
    how are you handling the encryption keys? is it encrypted client-side before upload or handled on the backend?

    basically, if your db gets breached can someone access my decrypted files?

    • bucket_ 12 days ago
      A database breach wouldn't compromise your files because we never store the secrets needed to access them.

      When you start an upload, the API generates a unique bID and secret, then provides a short-lived presigned URL. Your file transfers directly from your terminal to S3—it never passes through our servers. Once the upload completes, the bID and secret are returned to your CLI.

      The secret is immediately discarded by our backend. Only its cryptographic hash is stored in our database.
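
      In rough Go terms, the lifecycle of a secret looks like the sketch below (illustrative only; the real backend's hash and encoding choices may differ):

          package api

          import (
              "crypto/rand"
              "crypto/sha256"
              "crypto/subtle"
              "encoding/hex"
          )

          // Sketch only: the plaintext secret is handed back to the CLI once and
          // only its hash is persisted, so a database dump can't authorize downloads.
          func newSecret() (plaintext, storedHash string, err error) {
              buf := make([]byte, 32)
              if _, err = rand.Read(buf); err != nil {
                  return "", "", err
              }
              plaintext = hex.EncodeToString(buf)
              sum := sha256.Sum256([]byte(plaintext)) // exact hash is an assumption
              return plaintext, hex.EncodeToString(sum[:]), nil
          }

          // On download, re-hash the submitted secret and compare in constant time.
          func secretMatches(submitted, storedHash string) bool {
              sum := sha256.Sum256([]byte(submitted))
              return subtle.ConstantTimeCompare(
                  []byte(hex.EncodeToString(sum[:])), []byte(storedHash)) == 1
          }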

      • lurkersince2013 12 days ago
        Wait, but where’s the file encryption happening?
        • bucket_ 12 days ago
          In the current release, files rely on S3's server-side encryption. The bID + secret function as access control mechanisms rather than encryption keys. The zero-knowledge aspect refers to us never storing the secrets themselves - only cryptographic hashes - so even a full copy of our database can't be used to authorize downloads.

          I've been actively working on adding client-side encryption to the CLI so files are encrypted by the binary before upload. The CLI implementation is ready, but I'm working through aligning the web dashboard with these changes. Should have it deployed in the next few days.
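
          For a sense of the direction (an illustrative sketch, not the shipped code; the actual cipher and key handling may differ), the client-side step amounts to sealing the bytes locally before the presigned upload, e.g. with AES-256-GCM:

              package client

              import (
                  "crypto/aes"
                  "crypto/cipher"
                  "crypto/rand"
              )

              // Illustrative only: encrypt the payload on the client before the
              // presigned S3 upload so plaintext never leaves the machine.
              func sealLocally(key, plaintext []byte) ([]byte, error) {
                  block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
                  if err != nil {
                      return nil, err
                  }
                  gcm, err := cipher.NewGCM(block)
                  if err != nil {
                      return nil, err
                  }
                  nonce := make([]byte, gcm.NonceSize())
                  if _, err := rand.Read(nonce); err != nil {
                      return nil, err
                  }
                  // Prepend the nonce so the receiver can decrypt with the same key.
                  return gcm.Seal(nonce, nonce, plaintext, nil), nil
              }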

  • Svan16 13 days ago
    Timesaver and easy to use