8 points by luthiraabeykoon a day ago | 3 comments
  • This is a really thoughtful direction. The “overlay instead of another workspace” idea resonates a lot, especially the screen-as-context inversion.

    Curious about one thing: where do local LLMs feel “good enough” today, and where do you still fall back to remote models?

    The perf work + installers make this feel way more real than most agent demos. Nice job shipping something people can actually try.

    • Good question. Local LLMs are already “good enough” for most in-context work: short-to-medium writing, refactors, reasoning over what’s on screen, and multi-step agent plans where latency and privacy matter more than raw IQ.

      I still fall back to remote models for very long-context tasks, heavier code synthesis, or when you want the best possible reasoning over a large codebase. The goal is to default to local, then escalate only when it actually adds value.
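
      To make that concrete, the routing itself is conceptually simple. Here’s a minimal sketch of the “default local, escalate only when needed” idea, assuming a local Ollama server on its default port. The model name, thresholds, and function names are illustrative, not the app’s actual internals:

        # Sketch of a local-first router. Names and thresholds are
        # illustrative, not the app's real internals.
        import requests  # assumes an Ollama server at its default port

        LOCAL_URL = "http://localhost:11434/api/generate"  # Ollama default
        CONTEXT_LIMIT = 8_000  # rough character budget before escalating
        HEAVY_TASKS = {"codebase_synthesis", "long_context_analysis"}

        def run_local(prompt: str, model: str = "llama3.1:8b") -> str:
            """Call a locally hosted model (Ollama /api/generate, non-streaming)."""
            resp = requests.post(
                LOCAL_URL,
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["response"]

        def run_remote(prompt: str) -> str:
            """Placeholder: wire up whichever hosted provider you prefer."""
            raise NotImplementedError("plug in a remote provider's client here")

        def route(prompt: str, task: str = "general") -> str:
            """Default to local; escalate only when the task clearly needs it."""
            if task in HEAVY_TASKS or len(prompt) > CONTEXT_LIMIT:
                return run_remote(prompt)
            return run_local(prompt)

      In practice you could also escalate on a confidence signal from the local model, but a simple length/task heuristic like this covers most cases.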

  • One small note: I’m finally genuinely happy with where this landed, so I’ll probably pause active development for a bit.

    That said, I’m excited to see how people use it, and I’ll still be around to answer questions and fix issues if they come up.

  • moderngamer a day ago
    what LLM do you recommend using locally?