9 points by ani17 8 hours ago | 4 comments
  • ani17 7 hours ago
    Author here. A bit more context: by day I'm a systems engineer building AI networking infrastructure, so I kept ending up in conversations where I couldn't quite wrap my head around the latest inference magic trick.

    Like when someone mentioned vLLM's paged attention: I knew virtual memory paging from the OS world, but had no idea someone had applied the same idea to KV cache allocation on GPUs.
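    To make the analogy concrete, here's a minimal sketch of the idea: instead of reserving one contiguous max-length KV buffer per sequence, each sequence keeps a block table mapping logical positions to fixed-size physical blocks, allocated on demand. The names (`BlockAllocator`, `Sequence`, `BLOCK_SIZE`) are illustrative, not vLLM's actual API.

```python
# Toy sketch of paged KV-cache allocation, the idea behind vLLM's
# PagedAttention. Assumes fixed-size blocks and a simple free list;
# real systems also handle eviction, sharing, and copy-on-write.

BLOCK_SIZE = 16  # tokens per KV block, analogous to an OS page size

class BlockAllocator:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))  # physical block ids

    def alloc(self) -> int:
        return self.free.pop()

    def release(self, block_id: int) -> None:
        self.free.append(block_id)

class Sequence:
    """Each sequence keeps a block table: logical -> physical blocks."""
    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []
        self.num_tokens = 0

    def append_token(self) -> None:
        # Allocate a new block only when the current one is full, so
        # memory grows on demand instead of being reserved up front.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

alloc = BlockAllocator(num_blocks=64)
seq = Sequence(alloc)
for _ in range(40):  # generate 40 tokens
    seq.append_token()
# 40 tokens need ceil(40/16) = 3 blocks, not a contiguous
# max-context-length slab
print(len(seq.block_table))  # 3
```

    The win is the same as with virtual memory: near-zero internal fragmentation and the ability to pack many sequences of unpredictable length into one GPU memory pool.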

    Github link to the project: https://github.com/Anirudh171202/WhiteLotus

    • ani17 7 hours ago
      The blog walks through why your first token is always the slowest, why output tokens cost 5x more, and how stuff like speculative decoding and chunked prefill actually work, from the perspective of a systems engineer!
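      For speculative decoding specifically, the core accept/reject loop fits in a few lines. This is a hedged sketch assuming greedy verification: a cheap draft model proposes k tokens, the target model checks them, and we keep the longest agreeing prefix plus one corrected token. The models here are stand-in functions, not a real inference stack, and the verification loop is written sequentially for clarity (a real system scores all k positions in one batched forward pass).

```python
# Toy sketch of speculative decoding with greedy verification.
# draft_next / target_next are hypothetical stand-ins that return the
# next token id given a context.

def speculative_step(prefix, draft_next, target_next, k=4):
    # Draft model autoregressively guesses k tokens (cheap).
    guesses = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        guesses.append(t)
        ctx.append(t)

    # Verify with the target model: accept guesses while they match,
    # and on the first mismatch emit the target's token instead.
    accepted = []
    ctx = list(prefix)
    for g in guesses:
        t = target_next(ctx)
        accepted.append(t)
        if t != g:
            break  # rest of the draft is discarded
        ctx.append(t)
    return accepted

# Stand-ins: draft agrees with the target for the first 3 positions.
target = lambda ctx: len(ctx)                        # "true" next token
draft  = lambda ctx: len(ctx) if len(ctx) < 3 else 99
print(speculative_step([], draft, target, k=4))      # [0, 1, 2, 3]
```

      The payoff: when the draft is usually right, you emit several tokens per expensive target-model pass instead of one, which is exactly why it helps the decode phase where each output token is otherwise a full forward pass.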
  • brownianmotion1 7 hours ago
    > float bodyWeight = 67.5f; // who needs 32 bits to store a weight??

    UHHHH...
