Code is very well structured. Starting from a point of interest (the current cursor, the current file, or the results of an embedding search), you would probably fare better traversing the code up and down, building a tree or using Abstract Syntax Trees (ASTs) as described in this blog post [4]. It's essentially a tree search to gather the code pieces relevant to a given task, and it imitates what human coders do. It would integrate well into an agent loop that searches for relevant code.
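To make the idea concrete, here's a minimal sketch of the first step in Python using the stdlib `ast` module: parse a file and collect the function and class definitions an agent could then traverse. The sample source is made up; a real traversal would also follow imports and call sites.

```python
import ast

# Hypothetical file contents an agent might start from.
SOURCE = """
def fetch_user(db, user_id):
    return db.get(user_id)

class UserService:
    def rename(self, user, name):
        user.name = name
"""

def list_definitions(source: str):
    """Walk the AST and collect every function/class definition
    with its line number, as candidate nodes for a tree search."""
    tree = ast.parse(source)
    defs = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            defs.append((type(node).__name__, node.name, node.lineno))
    return defs

defs = list_definitions(SOURCE)
# e.g. [('FunctionDef', 'fetch_user', 2), ('ClassDef', 'UserService', 5), ...]
```

From such a list, the loop could expand "up" (who calls `fetch_user`?) or "down" (what does `rename` touch?) instead of retrieving by embedding similarity alone.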
Aren't there any open-source code assistants and plugins that do this? All I see are embedding searches in the big projects such as Cursor, Cline, or Continue.
All I ever found were a few research efforts such as RepoGraph [1] and CodeGraph [2], and one codebase open-sourced by Deutsche Telekom called advanced-coding-assistant [3].
1 https://github.com/ozyyshr/RepoGraph
2 https://arxiv.org/abs/2408.13863
3 https://github.com/telekom/advanced-coding-assistant-backend
4 https://cyrilsadovsky.substack.com/p/advanced-coding-chatbot...
Aider exploits all this structure using a "repository map" [0]. It uses tree-sitter to build a call graph of the code base, then runs a graph optimization on it with respect to the current state of the AI coding chat. This finds the most relevant parts of the code base, which aider shares with the LLM as context.
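The ranking step can be sketched as a personalized PageRank over the call graph. This is a toy illustration of the idea, not aider's actual implementation: edges go from a file to the files whose symbols it references, and the files currently in the chat get the personalization weight.

```python
def pagerank(edges, personalization, damping=0.85, iters=50):
    """Personalized PageRank by power iteration (pure-Python toy)."""
    nodes = {n for e in edges for n in e} | set(personalization)
    rank = {n: 1 / len(nodes) for n in nodes}
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    total = sum(personalization.values())
    base = {n: personalization.get(n, 0) / total for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) * base[n] for n in nodes}
        for n in nodes:
            targets = out[n]
            if targets:  # spread this node's rank over its references
                share = damping * rank[n] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: give its rank back per personalization
                for t in nodes:
                    new[t] += damping * rank[n] * base[t]
        rank = new
    return rank

# Hypothetical repo: app.py references utils.py and db.py, etc.
edges = [("app.py", "utils.py"), ("app.py", "db.py"),
         ("db.py", "utils.py"), ("tests.py", "app.py")]
# "app.py" is the file currently in the chat.
weights = pagerank(edges, personalization={"app.py": 1.0})
best = max(weights, key=weights.get)
```

The top-ranked files (and their key symbols) are what would go into the context window, budgeted by token count.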
Your first link, RepoGraph, is an adaptation of the actual aider repo map implementation. In their source code [1], they acknowledge aider and grep-ast (which is part of aider).
[0] https://aider.chat/docs/repomap.html
[1] https://github.com/ozyyshr/RepoGraph/blob/79861642515f0d6b17...
Similarly, you can't use the LSP to determine all valid in-scope objects for an assignment. You can get a hierarchy of symbol information from some servers, allowing selection of particular lexical scopes within the file, but you'll need to perform type analysis yourself to determine which of the available variables could make for a reasonable completion. That type analysis is also a bit tricky because you'll likely need a lot of information about the type hierarchy at that lexical scope-- something you can't get from the LSP.
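For reference, the "hierarchy of symbol information" comes from a `textDocument/documentSymbol` request. Here's a small sketch of walking such a response to find which lexical scopes enclose a cursor line; the response fragment is hypothetical, and a real client would compare full positions (line and character), not just lines.

```python
def symbols_enclosing(symbols, line):
    """Walk a DocumentSymbol tree and return the chain of symbols
    whose range contains the given line (outermost first)."""
    chain = []
    for sym in symbols:
        start = sym["range"]["start"]["line"]
        end = sym["range"]["end"]["line"]
        if start <= line <= end:
            chain.append(sym["name"])
            chain += symbols_enclosing(sym.get("children", []), line)
    return chain

# Made-up fragment of a textDocument/documentSymbol response:
symbols = [
    {"name": "UserService",
     "range": {"start": {"line": 10}, "end": {"line": 40}},
     "children": [
         {"name": "rename",
          "range": {"start": {"line": 20}, "end": {"line": 30}},
          "children": []},
     ]},
]
scopes = symbols_enclosing(symbols, 25)  # ["UserService", "rename"]
```

This gets you the scope structure, but, as noted above, not the type information you'd need to rank candidate completions; that analysis is on you.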
It might be feasible to edit an open-source LSP implementation for your target language to expose the extra information you'd want, but they're relatively heavy pieces of software and, of course, they don't exist for all languages. Compared to the development cost of "just" using embeddings, it's pretty clear why teams choose embeddings.
Also, if you assume that the performance improvements we've seen in embeddings for retrieval will continue, it makes less sense to invest weeks of time on something that would otherwise improve passively with time.
Clangd does, which means we could try this out for C++.
There's also tree-sitter, but I assume that's table stakes nowadays. For example, Aider uses it to generate project context ("repo maps")[0].
> If you want to know whether a given import is valid, to verify LLM output, that's not possible.
That's arguably not the biggest problem to be solved. A wrong import in otherwise correct-ish code is mechanically correctable, even if by the user pressing a shortcut in their IDE/LSP-powered editor. We're deep into early R&D here; perfect is the enemy of the good at this stage.
> Similarly, you can't use the LSP to determine all valid in-scope objects for an assignment. You can get a hierarchy of symbol information from some servers, allowing selection of particular lexical scopes within the file, but you'll need to perform type analysis yourself to determine which of the available variables could make for a reasonable completion.
What about asking an LLM? It's not 100% reliable, of course (again: perfect vs. good), but LLMs can guess things that aren't locally obvious even in AST. Like, e.g. "two functions in the current file assign to this_thread::ctx().foo; perhaps this_thread is in global scope, or otherwise accessible to the function I'm working on right now".
I do imagine Cursor et al. are experimenting with ad-hoc approaches like that. I know I would; LLMs are cheap and fast enough that asking them to build their own context makes sense, if it reduces how often they get the task wrong and require back-and-forth, reverts, and prompt tweaking.
--
[0] - https://aider.chat/docs/languages.html#how-to-add-support-fo...
This makes so much intuitive sense. Voyage, please release an insurance-focused model.
1. They are recommended by Anthropic on their docs
2. They're focused on embeddings as a service; I somehow prefer this over spread-thin large orgs like OpenAI, GoogleAI, etc.
3. They have good standing on the Hugging Face MTEB leaderboard: https://huggingface.co/spaces/mteb/leaderboard
For our general-purpose embedding model (voyage-3-large), embedding vectors with 2048 dimensions outperform 1024 across the board: https://blog.voyageai.com/2025/01/07/voyage-3-large/
Another neat thing about Voyage was the speed of the service.
I think I had 250 million tokens and Voyage was the fastest. It took a couple of days on and off. I believe my napkin calculation showed that OpenAI would have taken months.
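A rough version of that napkin math, in Python. The throughput figures below are illustrative assumptions, not measurements of any particular provider; the point is how strongly sustained tokens/sec dominates wall-clock time at this corpus size.

```python
TOKENS = 250_000_000       # corpus size from the comment above
SECONDS_PER_DAY = 86_400

def days_needed(tokens, tokens_per_second):
    """Wall-clock days to embed `tokens` at a sustained throughput."""
    return tokens / tokens_per_second / SECONDS_PER_DAY

# Assumed sustained rates (after rate limits, batching, retries):
fast_days = days_needed(TOKENS, 2_000)  # ~1.4 days at 2k tok/s
slow_days = days_needed(TOKENS, 100)    # ~29 days at 100 tok/s
```

At a sustained ~2k tokens/sec you finish in a couple of days; at ~100 tokens/sec the same job stretches toward a month, which matches the "days vs. months" gap described above.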
i can be constructive but i don't have space in the margins of this paper
voyage-3-large will work better for almost all real-world production use cases.