Unless I am missing something about how they handle the diffs, the bottleneck is surely the inference latency and not the render loop. It seems like a lot of architectural complexity for a data stream that is inherently slow.
The problem here was that before the December update, any time the contents of the transcript history changed, the entire history was included in the render loop and completely cleared and then completely reprinted on every frame tick. For one brief rewrap of history that's just a quick stutter, but when anything offscreen kept changing for multiple seconds at a time, it created a constant strobe effect. Not a good look! https://github.com/anthropics/claude-code/issues/1913
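For anyone curious what that difference looks like in code, here's a rough TypeScript sketch. This is not the actual Claude Code source; the names (`fullRepaint`, `repaintTail`, the line model) are made up for illustration. It just contrasts the clear-and-reprint-everything loop with one that leaves settled history alone and only redraws the live tail:

```ts
// Hypothetical sketch, not Claude Code's real renderer.
const CLEAR_SCREEN = "\x1b[2J\x1b[H"; // clear screen, move cursor home

interface Line { text: string }

// Old-style loop (as described above): clear everything and reprint the
// whole transcript on every tick. If anything keeps changing, the whole
// terminal strobes.
function fullRepaint(transcript: Line[]): void {
  process.stdout.write(CLEAR_SCREEN);
  for (const line of transcript) {
    process.stdout.write(line.text + "\n");
  }
}

// Incremental approach: settled history is printed once and never touched;
// only the small "live" tail below it is erased and redrawn each tick.
// Returns the tail height so the next tick knows how far to move back up.
function repaintTail(liveLines: Line[], prevTailHeight: number): number {
  if (prevTailHeight > 0) {
    // Move the cursor up over the previous tail, then erase to end of screen.
    process.stdout.write(`\x1b[${prevTailHeight}A\x1b[0J`);
  }
  for (const line of liveLines) {
    process.stdout.write(line.text + "\n");
  }
  return liveLines.length;
}
```

With the second approach a change offscreen doesn't force the visible history to be wiped and redrawn, which is (roughly) why the flicker goes away.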
This diagram shows the difference between the new and old architecture a bit more visually https://x.com/trq212/status/2001439021398974720
I'm not on Twitter much these days, but damn, people were not kind to Anthropic