A persistent issue I have with Cursor et al. is that they hallucinate arguments when calling a function or method from a library. It seems like automatically pulling the library's documentation into the context would help, but I haven't found a tool that does this. Is there any chance that Runner does?
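To make the idea concrete, here's a minimal sketch of what I mean, assuming a Python target; `doc_snippet` is a hypothetical helper I made up, not anything Runner actually does:

```python
import inspect
import requests  # example target library

def doc_snippet(func) -> str:
    """Build a prompt snippet from a callable's real signature and docstring,
    so the model sees the actual API instead of guessing at arguments."""
    sig = inspect.signature(func)
    doc = inspect.getdoc(func) or "(no docstring)"
    return f"{func.__module__}.{func.__qualname__}{sig}\n\n{doc}"

# Prepend this to the model's context before asking it to call requests.get:
print(doc_snippet(requests.get))
```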
It doesn't seem like this was the problem you set out to solve, but reliable use of libraries and APIs is critical if you want LLM-generated code to work.
I think there's probably a lot of value to be gained in tooling for coding agents that codifies and enhances the describe -> explore -> plan -> refine -> implement -> verify cycle. With most popular tools (Cursor, Claude Code, Roo, Augment, Windsurf, etc.) you have to run this workflow "manually", usually by having the model write out .md files, and it isn't super smooth.
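For what it's worth, here's a rough sketch of what "codifying" that cycle could look like; everything in it (the llm_call stub, the phase list, the .md layout) is hypothetical and just illustrates the shape of it:

```python
from pathlib import Path

PHASES = ["describe", "explore", "plan", "refine", "implement", "verify"]

def llm_call(prompt: str) -> str:
    # Stand-in for a real model API call (Claude, Gemini, etc.).
    return f"(model output for: {prompt.splitlines()[0]})"

def run_cycle(task: str, workdir: Path = Path("agent_notes")) -> None:
    """Run each phase in order, persisting output as the .md scratch
    files that today you have to ask the model to write by hand."""
    workdir.mkdir(exist_ok=True)
    context = task
    for phase in PHASES:
        output = llm_call(f"Phase: {phase}\n\nContext so far:\n{context}")
        (workdir / f"{phase}.md").write_text(output)
        context += f"\n\n## {phase}\n{output}"

run_cycle("Add retry logic to the HTTP client")
```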
Completely agree. I basically built Runner to codify the process I was already using manually with Claude Code and Gemini. A lot of developers seem to be settling on a similar workflow, so I'm betting that something like Runner or Traycer will be useful for a lot of devs.
I'll be curious to see how far the existing players like Cursor push in this direction.
Could you explain why both, and why not also Claude? (Why not all three?)