59 points by charlieirish 2 days ago | 7 comments
  • rockwotj a day ago
    > Direct I/O means no more fsync: no more complexity via background flushes and optimal scheduling of syncs. There's no kernel overhead from copying and coalescing. It essentially provides the performance, control, and simplicity of issuing raw 1:1 I/O requests.

    Not true: you still need fsync with direct I/O to ensure durability in power-loss situations. Some drives have write caches that mean acknowledged writes live in non-volatile memory. So maybe the perf is wildly better because you’re sacrificing durability?
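    To make this concrete, here's a rough Rust sketch (not the article's code; the hardcoded `O_DIRECT` value, the `durable_write` helper, and the buffered-I/O fallback are my own assumptions) showing why `fsync` still matters even when the page cache is bypassed:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;

// Linux x86-64 value of O_DIRECT; real code would use libc::O_DIRECT.
const O_DIRECT: i32 = 0o040000;

/// Sketch: write one 4 KiB block with direct I/O, then fsync.
/// Direct I/O skips the kernel page cache, but the drive's own
/// volatile write cache is only flushed when fsync is issued
/// (`sync_all` in Rust), which is what power-loss durability needs.
fn durable_write(path: &str, block: &[u8]) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .custom_flags(O_DIRECT)
        .open(path)
        // Some filesystems (e.g. tmpfs) reject O_DIRECT; fall back
        // to buffered I/O so the sketch still runs there.
        .or_else(|_| OpenOptions::new().write(true).create(true).open(path))?;
    file.write_all(block)?;
    // Without this, an acknowledged write may still sit in the
    // device's cache and vanish on power loss.
    file.sync_all()
}

fn main() -> std::io::Result<()> {
    // O_DIRECT requires the buffer, length, and file offset to be
    // block-aligned; a 4 KiB-aligned allocation covers common
    // logical block sizes.
    let layout = std::alloc::Layout::from_size_align(4096, 4096).unwrap();
    let buf = unsafe {
        let p = std::alloc::alloc_zeroed(layout);
        assert!(!p.is_null());
        std::slice::from_raw_parts(p, 4096)
    };
    durable_write("blob.bin", buf)
}
```

    Note that Rust's `sync_all` maps to `fsync(2)` on Linux, so this is the same call the blobd code below wraps.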

  • rrauch a day ago
      Looks like the author is well aware:

        /// Even when using direct I/O, `fsync` is still necessary, as it ensures the device itself has flushed any internal caches.
        async fn sync(&self) {
          let (fut, fut_ctl) = SignalFuture::new();
          self.sender.send(Request::Sync { res: fut_ctl }).unwrap();
          fut.await
        }
      
      Full code here:

      https://github.com/wilsonzlin/blobd/blob/master/libblobd-dir...

    • You mean in volatile memory?
  • amluto a day ago
    That’s a lot of work creating a whole system that stores data on a raw block device. It would be nice to see this compared to… a filesystem. XFS, ZFS and btrfs are pretty popular.
    • bionsystem a day ago
      I don't quite understand the point; why would anybody use S3 then?
  • Scaevolus a day ago
    Similar systems include Facebook's Haystack and its open source equivalent, SeaweedFS.
  • tuhgdetzhh 15 hours ago
    When you have a service and really care about shaving off S3 latencies in the millisecond range, then you probably have enough users that all the tiny images are cached at the edge anyway.
  • bob1029 a day ago
    > Despite serving from same-region datacenters 2 ms from the user, S3 would take 30-200 ms to respond to each request.

    200 ms seems fairly reasonable to me once we factor in all of the other aspects of S3. A lot of machines would have to die at Amazon for your data to be at risk.

  • grenran 21 hours ago
    S3's whole selling point is 11 9s of durability across the whole region, which is probably why it's slow to begin with.
  • stackskipton a day ago
    Interesting project, but the lack of S3 protocol compatibility and the fact that it seems to YOLO your data mean it's not acceptable for many.
    • moi2388 a day ago
      And that means it is acceptable for many others. There is a whole world outside of S3, you know.
      • Unroasted6154 a day ago
        It's a bit weird to present it as an alternative to S3 when it looks like a persistent cache or k/v store. A benchmark against Redis would have been nice, for example. The RocksDB benchmark is also questionable, as the performance depends a lot on how you configure it, and the article's claim that it doesn't support range reads doesn't give me confidence in the results.

        Also, for the described issue of small images for a frontend, nobody would serve directly from S3 without a caching layer on top.

        It's an interesting read for fun, but I am not sure what it solves in the end.