6 points by csressel 7 hours ago | 2 comments
  • storystarling 4 hours ago
    I'm curious if the 16ms budget is actually driven by the token stream here. Even with the fastest providers right now, you are looking at maybe 100-150 tokens per second. That is orders of magnitude below the 30MB/s terminal parsing limit mentioned in the article.

    Unless I am missing something about how they handle the diffs, the bottleneck is surely the inference latency and not the render loop. It seems like a lot of architectural complexity for a data stream that is inherently slow.
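
    Back-of-the-envelope, assuming ~4 bytes per token as a rough average for English text (that figure is my assumption, not from the article):

        // ratio of the LLM stream rate to the article's terminal parsing limit
        const tokensPerSec = 150;                                // optimistic provider throughput
        const bytesPerToken = 4;                                 // ballpark average, assumed
        const streamBytesPerSec = tokensPerSec * bytesPerToken;  // ~600 B/s
        const parseLimitBytesPerSec = 30 * 1024 * 1024;          // 30 MB/s from the article
        console.log(parseLimitBytesPerSec / streamBytesPerSec);  // ~52,429x headroom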

    • csressel 4 hours ago
      The tokens per second you're describing is the token bandwidth to and from the model's API provider, and that throughput isn't what causes the UI flicker; it only affects the interactive portion at the very bottom of the UI. And since most of the actual context tokens aren't shown at all (thinking tokens, structured JSON for tool calls and outputs, etc.), the visible update rate is likely well below 60 FPS.

      The problem here was that before the December update, any time the contents of the transcript history changed, the entire history was included in the render loop: it was completely cleared and then completely reprinted on every frame tick. For one brief rewrap of history, that's just a quick stutter, but when anything offscreen was changing for multiple seconds at a time, it created a constant strobe effect. Not a good look! https://github.com/anthropics/claude-code/issues/1913
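
      To make the failure mode concrete, the old loop was roughly the moral equivalent of this (a hypothetical sketch, not the actual Ink/React internals):

          // naive TUI render loop: clear everything, reprint everything, every tick
          function renderFrame(history: string[], inputLine: string): void {
            process.stdout.write("\x1b[2J\x1b[H");     // clear screen, home cursor
            process.stdout.write(history.join("\n"));  // reprint the full transcript
            process.stdout.write("\n" + inputLine);    // then the live input area
          }
          // when offscreen history mutates for seconds at a time, running this
          // clear/reprint pair on every tick is what reads as a strobe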

      This diagram explains the new vs. old architecture a bit more visually: https://x.com/trq212/status/2001439021398974720
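
      If I'm reading it right, the fix amounts to an append-only scrollback plus a small redraw region, something like this sketch (my paraphrase of the diagram, assuming a single-line live region):

          // incremental loop: finished lines hit scrollback exactly once;
          // only the live line at the bottom gets cleared and redrawn
          let committed = 0;
          function renderFrame(history: string[], liveLine: string): void {
            for (; committed < history.length; committed++) {
              process.stdout.write(history[committed] + "\n");  // append-only
            }
            process.stdout.write("\x1b[2K\r" + liveLine);       // erase + redraw one line
          }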

  • theahura 6 hours ago
    > We apparently live in the clown universe, where a simple TUI is driven by React and takes 11ms to lay out a few boxes and monospaced text

    I'm not on Twitter much these days, but damn, people were not kind to Anthropic.

    • csressel 6 hours ago
      Yeah, Friday on Xitter was a pretty disproportionate response imo!