5 points by leonardcser 5 hours ago | 2 comments
  • speakingmoistly 5 hours ago
    Did you identify the kind of performance problems you were solving for? Curious to hear whether the source of the lag is known.

    The local / "runs entirely on my machine" claim should probably come with an asterisk: the TUI part is local, but this still relies on an LLM API existing somewhere outside the machine (unless you're running an Ollama instance on the same host).

    Nonetheless, this is neat!

  • leonardcser 4 hours ago
      Thanks for the feedback. The main performance focus was rendering.

      Claude Code and other TUIs (except Codex) use a layer of abstraction over the raw terminal escape sequences.

      I used `crossterm` directly, which gave me more control and lower latency.

      For example, when nothing is happening, I don't render anything to the terminal at all; I only redraw in response to a keypress.
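
      The event-driven loop described above can be sketched roughly like this. This is a minimal, self-contained illustration, not the project's actual code: an `std::sync::mpsc` channel stands in for crossterm's input event stream, and the `InputEvent` type and `run` function are hypothetical names:

      ```rust
      use std::sync::mpsc;
      use std::time::Duration;

      // Stand-in for a terminal input event (crossterm's Event in the real TUI).
      #[derive(Debug, PartialEq)]
      enum InputEvent {
          Key(char),
          Quit,
      }

      // Event-driven render loop: block waiting for input and produce a frame
      // only when an event arrives. With no timer tick, an idle UI does zero
      // rendering work.
      fn run(rx: mpsc::Receiver<InputEvent>) -> Vec<String> {
          let mut frames = Vec::new();
          loop {
              match rx.recv_timeout(Duration::from_secs(1)) {
                  // A keypress is the only thing that triggers a redraw.
                  Ok(InputEvent::Key(c)) => frames.push(format!("render: {c}")),
                  // Quit event, or nothing happened before the timeout: stop.
                  Ok(InputEvent::Quit) | Err(_) => break,
              }
          }
          frames
      }

      fn main() {
          let (tx, rx) = mpsc::channel();
          for c in ['h', 'i'] {
              tx.send(InputEvent::Key(c)).unwrap();
          }
          tx.send(InputEvent::Quit).unwrap();
          let frames = run(rx);
          // Exactly one frame per keypress, none while idle.
          assert_eq!(frames.len(), 2);
          println!("{frames:?}");
      }
      ```

      In the real TUI the match arm would queue escape sequences through crossterm and flush them, but the structure is the same: block on input, render only in response to it.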

  • leonardcser 5 hours ago
    Hello HN

    Claude Code is great, but I find it a bit laggy sometimes. I built a local alternative that's significantly more responsive with local models. Just wanted to share :)