3 points by halflings 9 hours ago | 1 comment
  • halflings 9 hours ago
    LLMs generate their output one token at a time. When you first learn this, it sounds like a huge performance bottleneck, since we are used to highly parallelized systems.
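    A minimal toy sketch of why this is sequential (the `next_token` rule here is a made-up stand-in for a model forward pass, not a real LLM): each step consumes the whole sequence so far and emits exactly one token, so step t cannot start until step t-1 has finished.

    ```python
    def next_token(tokens):
        # Stand-in for a model forward pass: a deterministic toy rule
        # (sum of all previous tokens, mod 10). A real LLM would run a
        # full neural network here and sample from a distribution.
        return sum(tokens) % 10

    def generate(prompt, n_steps):
        tokens = list(prompt)
        for _ in range(n_steps):
            # Each new token depends on every token before it,
            # so the steps of this loop cannot run in parallel.
            tokens.append(next_token(tokens))
        return tokens

    print(generate([1, 2], 4))  # → [1, 2, 3, 6, 2, 4]
    ```

    Note that the forward pass *within* one step is heavily parallelized on GPUs; it is only the step-to-step dependency that forces generation to be serial.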

    However, a large part of what makes LLMs feel so magical comes from this bottleneck.