Two observations from similar tooling attempts I've seen:
1. The hardest part isn't generating the map - it's keeping it accurate. Every tool that promises "live view of what's running" eventually drifts from reality because infrastructure changes faster than discovery runs. The teams that made this work treated the map as the source of truth and pushed changes through it, not around it.
2. Re: your feedback about write access - the "prototype to production-ready AWS" use case is interesting. That's where the value of context is highest (greenfield) and the risk is lowest (nothing to break yet). Much easier trust equation than "let it modify my production K8s cluster."
How are you handling the drift problem? Auto-discovery polling, change events from cloud providers, or something else?
>How are you handling the drift problem? Auto-discovery polling, change events from cloud providers, or something else?
We built an approach to the drift problem that we're pretty happy with. It's a combination of indexing, change-event capture, and user behavior: if a user is looking for a piece of information, we pull the live value first.
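A minimal sketch of how those three signals could fit together, assuming a hypothetical `DriftAwareIndex` and a placeholder `fetch_live` provider call (names and structure are illustrative, not the actual implementation):

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical record kept in the local index.
@dataclass
class IndexedResource:
    resource_id: str
    data: dict
    indexed_at: float = field(default_factory=time.time)


class DriftAwareIndex:
    """Combines three freshness signals:
    1. a periodically rebuilt index (background discovery),
    2. change events that update entries as they arrive,
    3. a live read-through when a user actually queries a resource.
    """

    def __init__(self, fetch_live: Callable[[str], dict]):
        # fetch_live stands in for a provider API call,
        # e.g. a describe/get request against the cloud account.
        self._fetch_live = fetch_live
        self._index: dict[str, IndexedResource] = {}

    def bulk_index(self, resources: dict[str, dict]) -> None:
        """Periodic discovery run: replace the index wholesale."""
        self._index = {
            rid: IndexedResource(rid, data) for rid, data in resources.items()
        }

    def apply_change_event(self, resource_id: str, data: Optional[dict]) -> None:
        """Change-event capture: update or drop a single entry."""
        if data is None:
            self._index.pop(resource_id, None)
        else:
            self._index[resource_id] = IndexedResource(resource_id, data)

    def get(self, resource_id: str) -> dict:
        """User lookup: pull the live value first, fall back to the index."""
        try:
            data = self._fetch_live(resource_id)
            self.apply_change_event(resource_id, data)  # keep the index warm
            return data
        except Exception:
            cached = self._index.get(resource_id)
            if cached is None:
                raise
            return cached.data
```

The point of the read-through in `get` is that the resources users actually look at are exactly the ones where drift is most costly, so those get live values even if discovery or events lag.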
People want it to be significantly more proactive over time: things like root cause analysis, security-style probing, or guided investigations rather than just visibility.
There’s interest in going deeper on telemetry and using it to surface higher-level insights, not just raw data or links out to other tools.
A lot of people ask whether it can eventually write to environments. The direction that’s resonated most is doing this first for new or greenfield environments. For example, going from a prototype to a production-ready AWS setup in a more agentic way. For existing environments, trust and safety are still the gating factors.
My takeaway is that read-only context earns trust first, and write access has to be very deliberate and staged.