3 points by zmalik 2 hours ago | 2 comments
  • clarity_hacker 2 hours ago
    The interesting tension here is that lazy-pulling optimizes deployment metrics but degrades runtime behavior. Faster pull times look great in dashboards, but you've moved the latency to the first request, which is often the metric that actually matters to users. The registry becomes a runtime dependency instead of a build-time one, so a registry outage now takes down running services, not just deployments. This pattern shows up everywhere: optimizing for the observable metric while shifting the cost somewhere less visible.
  • zmalik 2 hours ago
    We’ve all seen the benchmarks: "Lazy-pulling reduces container startup from 5 minutes to 500ms!" It looks great on a chart, but it hides a dangerous trade-off.

    I built a benchmark to measure readiness: the time until a container can actually serve an HTTP request, rather than just the pull time. The results were surprising. While lazy-pulling (eStargz/FUSE) made pulls 65x faster, it made the application's first successful response 20x slower than a full pull from a local registry.
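
    Roughly, the readiness side of the measurement boils down to the Python sketch below. It is simplified: the image name, port, and /health path are placeholders, and a lazy-pull run would go through containerd's stargz snapshotter rather than a plain docker pull.

        import subprocess
        import time
        import urllib.request

        def measure(image, url="http://localhost:8080/health", timeout=600):
            """Return (pull_seconds, readiness_seconds) for one cold start.

            'Readiness' is the wall-clock time from starting the container
            until the first successful HTTP response -- the number users feel.
            """
            t0 = time.monotonic()
            subprocess.run(["docker", "pull", image], check=True)
            pull_s = time.monotonic() - t0

            t1 = time.monotonic()
            proc = subprocess.Popen(["docker", "run", "--rm", "-p", "8080:8080", image])
            try:
                while time.monotonic() - t1 < timeout:
                    try:
                        with urllib.request.urlopen(url, timeout=1) as resp:
                            if resp.status == 200:
                                return pull_s, time.monotonic() - t1
                    except OSError:
                        time.sleep(0.05)  # not serving yet; poll again
                raise TimeoutError("container never became ready")
            finally:
                proc.terminate()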

    Why? Because lazy-pulling doesn't remove the cost of downloading bytes; it just shifts it to the runtime. Your registry becomes a runtime dependency, and every uncached import torch becomes a network round-trip. In my latest post, I dive deep into:

    - The OCI image format limits (DEFLATE chains) that make this hard (illustrated briefly after this list).

    - Why containerd’s snapshotter is the bottleneck.

    - The operational risks of FUSE on your GPU nodes.
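
    On the first point, the core problem is easy to demonstrate: a standard OCI layer is a single gzip (DEFLATE) stream over a tar archive, and DEFLATE has no random access, so reading any one file means decompressing every byte in front of it. eStargz works around this by compressing files as independently readable chunks and appending a table of contents. A small illustration (the layer path and file name are placeholders):

        import gzip
        import tarfile

        def read_member(layer_path, wanted):
            """Naive read of one file from a .tar.gz layer: the gzip stream is
            sequential, so we decompress everything up to the member we want,
            no matter how small that member is."""
            with gzip.open(layer_path, "rb") as stream:  # seek() here just re-decompresses from the start
                with tarfile.open(fileobj=stream, mode="r|") as tar:  # streaming tar, no random access
                    for member in tar:
                        if member.isfile() and member.name == wanted:
                            return tar.extractfile(member).read()
            raise FileNotFoundError(wanted)

        # e.g. read_member("layer.tar.gz", "usr/lib/python3/site-packages/torch/__init__.py")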