What worked for me and my users was leaning on instructions + AGENTS.md + metadata as a lighter abstraction layer — structured enough for trust, but flexible enough to keep the model useful.
Concretely:
- Role-scoped calls: data modeling and code gen run as separate calls, each with its own tailored context (first sketch below)
- Hard context budgets: context is divided into sections (tables, dbt, instructions, code) and each section gets a hard budget; this took some experimentation, and Cursor’s priompt project was a helpful reference (second sketch below)
- Agentic retrieval: agents can call tools to fetch or search data/metadata when needed (third sketch below)
- Summaries for different objects: messages, widgets, reports, data samples/profiles (fourth sketch below)
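To make the role-scoped calls concrete, here's a minimal sketch of the shape of it; the role names, prompt text, and the `build_messages` helper are illustrative assumptions, not the actual implementation.

```python
# Sketch of role-scoped calls: each role (data modeling, code gen, ...) is its own
# model call with its own system prompt and its own subset of context sections.
# Role names, prompts, and section names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    system_prompt: str
    context_sections: list[str] = field(default_factory=list)  # sections this role receives

ROLES = {
    "data_modeling": Role(
        name="data_modeling",
        system_prompt="You design and refine the data model.",
        context_sections=["tables", "dbt", "instructions"],
    ),
    "code_gen": Role(
        name="code_gen",
        system_prompt="You write the code for the analysis the user asked for.",
        context_sections=["instructions", "code", "data_samples"],
    ),
}

def build_messages(role: Role, context: dict[str, str], user_message: str) -> list[dict]:
    """Assemble a per-role message list containing only the sections that role needs."""
    sections = "\n\n".join(
        f"## {name}\n{context[name]}" for name in role.context_sections if name in context
    )
    return [
        {"role": "system", "content": role.system_prompt},
        {"role": "user", "content": f"{sections}\n\n{user_message}"},
    ]
```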
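The per-section budgeting is roughly in the spirit of priompt's priority/limit idea; the budget numbers and the word-based truncation below are stand-ins (in practice you'd count with the model's tokenizer).

```python
# Sketch of hard per-section context budgets: each section is truncated to its cap
# before the prompt is assembled. Budgets and the word-based "token" estimate are
# stand-in assumptions, not the real numbers or tokenizer.
SECTION_BUDGETS = {  # illustrative per-section caps
    "instructions": 2_000,
    "tables": 6_000,
    "dbt": 4_000,
    "code": 3_000,
}

def truncate_to_budget(text: str, budget: int) -> str:
    """Hard-truncate a section to its budget (crude word count standing in for tokens)."""
    words = text.split()
    if len(words) <= budget:
        return text
    return " ".join(words[:budget]) + "\n...[truncated]"

def assemble_context(sections: dict[str, str]) -> str:
    """Build the context block, giving every present section at most its budget."""
    parts = []
    for name, budget in SECTION_BUDGETS.items():
        if name in sections:
            parts.append(f"## {name}\n{truncate_to_budget(sections[name], budget)}")
    return "\n\n".join(parts)
```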
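For agentic retrieval, the gist is exposing search/fetch tools instead of preloading everything into the prompt; the tool names, parameters, and stubbed results here are assumptions about the shape, not the real tools.

```python
# Illustrative retrieval tools: the agent calls these mid-conversation to search or
# fetch data/metadata it needs. Names, parameters, and stubbed results are assumptions.
import json

def search_metadata(query: str, limit: int = 5) -> str:
    """Search table/column/dbt metadata by keyword; returns a JSON string of matches."""
    results = [{"table": "orders", "column": "order_total", "score": 0.91}]  # stubbed
    return json.dumps(results[:limit])

def fetch_sample(table: str, n_rows: int = 20) -> str:
    """Fetch a small row sample from a table as JSON (stubbed here)."""
    return json.dumps({"table": table, "rows": [], "n_rows": n_rows})

# Tool specs in the JSON-schema style most chat tool-calling APIs expect.
TOOLS = [
    {
        "name": "search_metadata",
        "description": "Search tables, columns, and dbt models by keyword.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
            "required": ["query"],
        },
    },
    {
        "name": "fetch_sample",
        "description": "Fetch a small row sample from a table.",
        "parameters": {
            "type": "object",
            "properties": {"table": {"type": "string"}, "n_rows": {"type": "integer"}},
            "required": ["table"],
        },
    },
]
```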
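And the summaries are about keeping long-lived objects cheap to re-reference; for data samples/profiles it's something in this direction (the field choices here are just an example, not the actual summary format).

```python
# Example of a compact summary kept for a data sample/profile, so the full object
# never has to sit in context. Field choices are illustrative.
import pandas as pd

def profile_summary(df: pd.DataFrame, name: str) -> str:
    cols = ", ".join(f"{c} ({df[c].dtype})" for c in df.columns[:10])
    return (
        f"{name}: {len(df)} rows x {df.shape[1]} cols; "
        f"columns: {cols}; "
        f"nulls: {int(df.isna().sum().sum())}"
    )
```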
I wrote some more about how the agent and context work in the docs. If you’ve been exploring similar ideas or trying to productionize AI analysts, I’d love to hear how you’re approaching it.