2 points by m_panknin 5 hours ago | 1 comment
  • m_panknin 5 hours ago
    Why?

    WebGPU volume rendering examples do exist, but they all assume fully GPU-resident datasets — the entire volume is uploaded to VRAM upfront, which caps practical dataset sizes at a few hundred megabytes at best.

    Kiln overcomes this limitation: it treats GPU memory as a fixed-size cache and streams only the bricks the current view actually needs, making gigabyte-scale datasets viable in the browser.
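    The fixed-size-cache idea can be sketched as a plain LRU brick cache: each brick of the volume maps to a slot in a GPU atlas texture, and when the cache is full the least recently used brick gives up its slot. This is a minimal illustration under assumed names (`BrickCache`, string brick keys), not Kiln's actual implementation:

    ```typescript
    // Minimal sketch of a fixed-size brick cache with LRU eviction.
    // All names are hypothetical; Kiln's real internals may differ.

    type BrickKey = string; // e.g. "x,y,z,lod" identifying a brick

    class BrickCache {
      // Map preserves insertion order, so re-inserting on access
      // keeps the first entry as the least recently used one.
      private slots = new Map<BrickKey, number>(); // key -> atlas slot
      private freeSlots: number[];

      constructor(capacity: number) {
        this.freeSlots = Array.from({ length: capacity }, (_, i) => i);
      }

      // Returns the atlas slot for a brick, evicting the LRU brick
      // when the cache is full. The caller uploads voxel data into
      // the returned slot only when needsUpload is true.
      acquire(key: BrickKey): { slot: number; needsUpload: boolean } {
        const existing = this.slots.get(key);
        if (existing !== undefined) {
          this.slots.delete(key); // refresh recency
          this.slots.set(key, existing);
          return { slot: existing, needsUpload: false };
        }
        let slot: number;
        if (this.freeSlots.length > 0) {
          slot = this.freeSlots.pop()!;
        } else {
          // Evict the least recently used entry (first in order).
          const lruKey = this.slots.keys().next().value as BrickKey;
          slot = this.slots.get(lruKey)!;
          this.slots.delete(lruKey);
        }
        this.slots.set(key, slot);
        return { slot, needsUpload: true };
      }
    }
    ```

    Because GPU memory use is bounded by the cache capacity rather than the dataset size, the volume on disk (or over the network) can be arbitrarily large; only the working set of visible bricks ever resides in VRAM.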

    Virtual texturing and out-of-core streaming are well-established in native graphics, but there is almost no prior WebGPU-native implementation of these techniques applied to volumetric data.
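    The core of virtual texturing is an indirection step: a voxel coordinate is mapped to its containing brick, and a small page table translates that brick to a physical atlas slot (or a miss, which triggers a streaming request). A sketch of the CPU-side address math, with an illustrative brick size that is an assumption rather than Kiln's actual value:

    ```typescript
    // Sketch of the page-table addressing behind virtual texturing.
    // BRICK is an illustrative edge length, not Kiln's actual value.
    const BRICK = 32; // voxels per brick edge

    // Map a voxel coordinate to the brick containing it and to that
    // brick's flat index in the page table.
    function brickOf(
      x: number, y: number, z: number,
      dims: [number, number, number]
    ): { brick: [number, number, number]; pageIndex: number } {
      const bx = Math.floor(x / BRICK);
      const by = Math.floor(y / BRICK);
      const bz = Math.floor(z / BRICK);
      const bricksX = Math.ceil(dims[0] / BRICK);
      const bricksY = Math.ceil(dims[1] / BRICK);
      return {
        brick: [bx, by, bz],
        pageIndex: bx + bricksX * (by + bricksY * bz),
      };
    }
    ```

    In a renderer the same lookup would typically live in a small 3D indirection texture sampled per ray step inside the shader; here it is shown in TypeScript only to make the addressing explicit.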

    WDYT?