I like that agent-shell just uses comint instead of a full vterm, but I find myself missing the deeper Claude integration that claude-code-ide has. With claude-code-ide, for example, you can define custom MCP tools that run Emacs commands.
I find myself spending much more time in OpenCode than in nvim these days. With mcp-neovim-server, it's super easy to keep vim open and ask OpenCode to show me things: open files, jump to lines. This didn't require any nvim tweaking at all; it's just giving the LLM access to my nvim. It is absolutely wild how good glm-4.7 has been at opening friendly splits and at debugging really gnarly nvim configuration problems that have plagued me for years. It knows way, way more nvim than I do, and that somehow surprised me. https://github.com/bigcodegen/mcp-neovim-server
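If anyone wants to wire this up, it's roughly two pieces (I'm going from memory of the mcp-neovim-server README and OpenCode's MCP config schema, so double-check both): start nvim on a socket the server can find, then register the server in opencode.json.

    nvim --listen /tmp/nvim

    # opencode.json
    {
      "mcp": {
        "neovim": {
          "type": "local",
          "command": ["npx", "-y", "mcp-neovim-server"],
          "environment": { "NVIM_SOCKET_PATH": "/tmp/nvim" }
        }
      }
    }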
Definitely interested in the ACP angle. I feel like we're in a weird spot where ACP is this protocol where the thing you do use talks to a headless thing you never see. I'd love to know or see more than that. These connections feel 1:1, but I want to see human interaction in every agentic system, not this me -> IDE -> ACP agent flow with the IDE intermediating everything as the sole UI. It should be able to do that, yes! But I also want the expectation that there can be multiple forces "driving" an ACP service.
I've watched the video now. It's still not crystal clear to me what is going on architecturally, but it does seem like a fairly robust Emacs shell experience that wraps the agent flow. I really enjoy the idea of having this overlaid compose buffer that acts as your editor-style input. I'd love to know how that is wired to the agents; is that input sent over ACP, or is it just sent to the shell? This compose buffer feels like it may be a broader Emacs pattern. One I'd love to see in nvim! Years ago I had a plugin that would take the selection or current line and send it to a buffer. That was my very crude compose buffer.
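For what it's worth, the bare-bones version of that pattern is tiny in elisp. A minimal sketch (the buffer names and keybinding are mine, and I'm assuming the target is a comint buffer like agent-shell's; a real version would use a proper minor mode instead of local-set-key):

    (require 'comint)

    (defun my/compose-send ()
      "Send the compose buffer's contents to the agent's comint buffer."
      (interactive)
      (let ((text (buffer-string)))
        ;; "*agent-shell*" is a guess at the target buffer's name.
        (comint-send-string (get-buffer-process "*agent-shell*")
                            (concat text "\n"))
        (erase-buffer)))

    (defun my/compose ()
      "Pop up a scratch buffer for drafting input to the agent."
      (interactive)
      (pop-to-buffer (get-buffer-create "*compose*"))
      (local-set-key (kbd "C-c C-c") #'my/compose-send))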
I have 10 months of chats, and now I can analyze them. I even had Claude Code write me a program to do that: https://github.com/ryanobjc/dailies-analyzer - using gptel-mode lets me know which parts of the file are LLM output and which I typed in, via a header in the file.
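Concretely, gptel records the response boundaries when the chat file is saved (where exactly, and in what format, depends on the major mode and the gptel version), so a script can recover which spans were the LLM's. A hedged sketch against the buffer-local gptel--bounds variable:

    ;; Assumes `gptel--bounds' holds plain (BEG . END) pairs; newer gptel
    ;; versions nest these under a `response' key, so adjust accordingly.
    (defun my/llm-spans ()
      "Return the LLM-authored substrings of the current chat buffer."
      (mapcar (lambda (b)
                (buffer-substring-no-properties (car b) (cdr b)))
              gptel--bounds))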
Keeping your own data as plain text has huge benefits. Having all my chats persistent is good. It's all private. I could even store these chats in a file.gpg and Emacs will transparently encrypt and decrypt it. gptel and the LLM only get the text straight out of Emacs and know nothing about the encryption.
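That part needs essentially zero setup, since EasyPG ships with Emacs. Something like this (the key address is a placeholder):

    ;; Visiting a *.gpg file decrypts into the buffer and re-encrypts on
    ;; save; gptel only ever sees the decrypted buffer text.
    (require 'epa-file)
    (epa-file-enable)  ; usually already enabled in modern Emacs
    ;; Encrypt to your own key so saving doesn't prompt every time:
    (setq epa-file-encrypt-to '("you@example.com"))
    (find-file "~/chats/2026-01-05.org.gpg")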
I found this better than the 'shell'-type packages, since they don't always keep context and are ultimately less flexible than a file as an interaction buffer. I described how I have this set up here: https://gist.github.com/ryanobjc/39a082563a39ba0ef9ceda40409...
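The core of it fits in a few lines. A simplified sketch of the idea (the function name and paths here are made up; the gist has the real setup):

    ;; One org file per day, with gptel-mode tracking what the LLM wrote.
    (defun my/daily-chat ()
      "Visit today's chat file and enable gptel."
      (interactive)
      (find-file (expand-file-name (format-time-string "%Y-%m-%d.org")
                                   "~/chats/"))
      (gptel-mode 1))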
All of this setup is 100% portable across every LLM backend gptel supports, which is basically all of them, including local models. With local models I could have a fully private and offline AI experience, whose quality depends on how much model I can run.
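Swapping in a local model is a one-form change; this follows the shape of gptel's own Ollama example (the model name is just whatever you have pulled):

    ;; Register a local Ollama backend and make it the default.
    (setq gptel-model 'mistral:latest
          gptel-backend (gptel-make-ollama "Ollama"
                          :host "localhost:11434"
                          :stream t
                          :models '(mistral:latest)))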