65 points by charlieirish 3 months ago | 7 comments
  • rockwotj 3 months ago
    > Direct I/O means no more fsync: no more complexity via background flushes and optimal scheduling of syncs. There's no kernel overhead from copying and coalescing. It essentially provides the performance, control, and simplicity of issuing raw 1:1 I/O requests.

    Not true: you still need fsync with direct I/O to ensure durability in power-loss situations. Some drives have write caches, which means acknowledged writes live in non-volatile memory. So maybe the perf is wildly better because you’re sacrificing durability?
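
    Concretely, here is a minimal sketch of that pattern on Linux: an O_DIRECT write followed by an explicit fsync. (Not the author's code; it assumes the libc crate for the flag, and the path and block size are illustrative.)

      use std::alloc::{alloc_zeroed, dealloc, Layout};
      use std::fs::OpenOptions;
      use std::os::unix::fs::{FileExt, OpenOptionsExt};

      fn main() -> std::io::Result<()> {
          // O_DIRECT bypasses the kernel page cache entirely.
          let file = OpenOptions::new()
              .read(true)
              .write(true)
              .create(true)
              .custom_flags(libc::O_DIRECT)
              .open("/tmp/blob.bin")?;

          // O_DIRECT requires the buffer, offset, and length to be aligned
          // to the logical block size (commonly 512 or 4096 bytes).
          const BLOCK: usize = 4096;
          let layout = Layout::from_size_align(BLOCK, BLOCK).unwrap();
          let ptr = unsafe { alloc_zeroed(layout) };
          assert!(!ptr.is_null());
          let buf = unsafe { std::slice::from_raw_parts_mut(ptr, BLOCK) };
          buf[..5].copy_from_slice(b"hello");

          // The write skips the page cache, but once it returns the data
          // may still sit in the drive's volatile write cache.
          file.write_all_at(buf, 0)?;

          // fsync (sync_all) asks the device to flush that cache; only
          // after this is the write durable across power loss.
          file.sync_all()?;

          unsafe { dealloc(ptr, layout) };
          Ok(())
      }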

    • rrauch 3 months ago
      Looks like the author is well aware:

        /// Even when using direct I/O, `fsync` is still necessary, as it ensures the device itself has flushed any internal caches.
        async fn sync(&self) {
          let (fut, fut_ctl) = SignalFuture::new();
          self.sender.send(Request::Sync { res: fut_ctl }).unwrap();
          fut.await
        }
      
      Full code here:

      https://github.com/wilsonzlin/blobd/blob/master/libblobd-dir...

    • actionfromafar 3 months ago
      You mean in volatile memory?
  • amluto 3 months ago
    That’s a lot of work creating a whole system that stores data on a raw block device. It would be nice to see this compared to… a filesystem. XFS, ZFS and btrfs are pretty popular.
    • bionsystem 3 months ago
      I don't quite understand the point; why would anybody use S3 then?
  • Scaevolus 3 months ago
    Similar systems include Facebook's Haystack and its open source equivalent, SeaweedFS.
  • bob1029 3 months ago
    > Despite serving from same-region datacenters 2 ms from the user, S3 would take 30-200 ms to respond to each request.

    200 ms seems fairly reasonable to me once we factor in all of the other aspects of S3. A lot of machines would have to die at Amazon for your data to be at risk.

  • tuhgdetzhh 3 months ago
    When you have a service and really care about shaving S3 latencies in the millisecond range, you probably have enough users that all the tiny images are cached at the edge anyway.
  • stackskipton 3 months ago
    Interesting project, but the lack of S3 protocol compatibility and the fact that it seems to YOLO your data mean it's not acceptable for many.
    • moi2388 3 months ago
      And that means it is acceptable for many others. There is a whole world outside of S3, you know.
      • Unroasted6154 3 months ago
        It's a bit weird to present it as an alternative to S3 when it looks like a persistent cache or k/v store. A benchmark against Redis would have been nice, for example. The RocksDB benchmark is also questionable, as the performance depends a lot on how you configure it, and the article's claim that RocksDB doesn't support range reads doesn't give me confidence in the results (see the sketch at the end of this comment).

        Also, for the described issue of small images for a frontend, nobody would serve directly from S3 without a caching layer on top.

        It's an interesting read for fun, but I am not sure what it solves in the end.
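
        On range reads: RocksDB does support them via iterators. A minimal sketch, assuming the rust-rocksdb crate (recent versions, where the iterator yields Results; the path and keys are made up):

          use rocksdb::{Direction, IteratorMode, DB};

          fn main() {
              let db = DB::open_default("/tmp/rocksdb-demo").unwrap();
              db.put(b"img:0001", b"first").unwrap();
              db.put(b"img:0002", b"second").unwrap();

              // Range read: seek to the start key and scan forward in key order.
              for item in db.iterator(IteratorMode::From(b"img:", Direction::Forward)) {
                  let (key, value) = item.unwrap();
                  // Stop once we leave the key range we care about.
                  if !key.starts_with(b"img:") {
                      break;
                  }
                  println!("{} -> {} bytes", String::from_utf8_lossy(&key), value.len());
              }
          }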

  • grenran 3 months ago
    S3's whole selling point is 11 9s of durability across the whole region, which is probably why it's slow to begin with.