Curious about one thing: where do local LLMs feel “good enough” today, and where do you still fall back to remote models?
The perf work + installers make this feel way more real than most agent demos. Nice job shipping something people can actually try.
I still fall back to remote models for very long-context tasks, heavier code synthesis, or when I want the best possible reasoning over a large codebase. The goal is to default to local, then escalate only when it actually adds value.
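For anyone curious what "default local, escalate when it adds value" looks like in practice, here's a minimal sketch of a local-first router. Everything here is illustrative: the function names (`local_generate`, `remote_generate`), the token estimate, and the context threshold are assumptions, not this project's actual API.

```python
# Local-first routing sketch: try the local model by default, escalate
# to a remote model only when the task exceeds what local handles well.
# All names and thresholds below are illustrative placeholders.

LOCAL_CONTEXT_LIMIT = 8_000  # tokens the local model handles comfortably


def local_generate(prompt: str) -> str:
    # Placeholder for a call into a locally hosted model.
    return f"[local] {prompt[:40]}"


def remote_generate(prompt: str) -> str:
    # Placeholder for a call out to a hosted frontier model.
    return f"[remote] {prompt[:40]}"


def route(prompt: str, needs_heavy_reasoning: bool = False) -> str:
    # Escalate only when it actually adds value: a very rough token
    # estimate (~4 chars per token) exceeding the local context limit,
    # or a task explicitly flagged as needing the strongest reasoning.
    estimated_tokens = len(prompt) // 4
    if needs_heavy_reasoning or estimated_tokens > LOCAL_CONTEXT_LIMIT:
        return remote_generate(prompt)
    return local_generate(prompt)
```

A real version would route on model-reported context limits and task type rather than prompt length, but the shape is the same: local is the default path, remote is the exception.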
That said, I’m excited to see how people use it, and I’ll still be around to answer questions and fix issues if they come up.