When an LLM suggests a change based on a local file without understanding the underlying theory of the program, it's essentially 'programming by accident'. We're seeing a shift where 'Industrial-Grade' agents try to solve this by rebuilding that theory mechanically, through semantic RAG and AST parsing (e.g., using tree-sitter to map the function signatures and struct definitions that form the codebase's 'mental model').
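Concretely, the AST-mapping step might look something like this minimal sketch, assuming the `tree-sitter` and `tree-sitter-rust` crates (the exact API varies by version; `language()` is from the ~0.20 bindings, newer releases expose a `LANGUAGE` constant instead):

```rust
// Cargo.toml (assumed): tree-sitter = "0.20", tree-sitter-rust = "0.20"
use tree_sitter::Parser;

fn main() {
    let mut parser = Parser::new();
    parser.set_language(tree_sitter_rust::language()).unwrap();

    let source = r#"
        struct Point { x: f64, y: f64 }
        fn dist(a: &Point, b: &Point) -> f64 {
            ((a.x - b.x).powi(2) + (a.y - b.y).powi(2)).sqrt()
        }
    "#;
    let tree = parser.parse(source, None).unwrap();

    // Walk top-level items and keep only the theory-bearing parts:
    // struct definitions whole, functions up to (but not including)
    // their bodies. This skeleton is cheap to re-feed into the model.
    let mut cursor = tree.root_node().walk();
    for node in tree.root_node().children(&mut cursor) {
        match node.kind() {
            "function_item" => {
                let sig_end = node
                    .child_by_field_name("body")
                    .map(|b| b.start_byte())
                    .unwrap_or_else(|| node.end_byte());
                println!("{}", source[node.start_byte()..sig_end].trim());
            }
            "struct_item" => println!("{}", source[node.byte_range()].trim()),
            _ => {}
        }
    }
}
```

The point of stripping bodies is token economy: signatures and type definitions carry most of the 'theory' at a fraction of the context cost.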
The goal isn't just to generate code, but to verify that the code aligns with the system's existing theory -- which is why loops that run 'cargo check' and the test suite are so critical. It turns the agent from a stochastic parrot into something closer to a junior engineer who at least tries to build a theory before committing.
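The verification side can be as simple as shelling out to cargo and surfacing diagnostics for the next iteration. A minimal sketch using only the standard library (the actual feedback wiring back into the model is elided):

```rust
use std::process::Command;

/// One verification pass: `cargo check` first (cheap type/borrow
/// feedback), then `cargo test` only if check succeeds. Returns the
/// diagnostics the agent loop would fold into its next prompt.
fn verify() -> Result<(), String> {
    for args in [["check", "--message-format=short"], ["test", "--quiet"]] {
        let out = Command::new("cargo")
            .args(args)
            .output()
            .map_err(|e| format!("failed to spawn cargo: {e}"))?;
        if !out.status.success() {
            // Compiler and test diagnostics arrive on stderr.
            return Err(String::from_utf8_lossy(&out.stderr).into_owned());
        }
    }
    Ok(())
}

fn main() {
    match verify() {
        Ok(()) => println!("theory holds: code compiles and tests pass"),
        Err(diag) => println!("feed back to the model:\n{diag}"),
    }
}
```

Running check before test is deliberate: type and borrow errors are the fastest, cheapest signal that the proposed change contradicts the existing theory, so the loop should fail on them before paying for a test run.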