Agents like Claude Code/OpenClaw save secrets in plaintext within config files, which creates a large attack vector through which a local compromise becomes a cloud compromise.
We empirically verified that AI coding agents can be stopped from leaking secrets by intercepting tool calls and handling secrets entirely outside the model’s visibility, using Claude Code’s hook system.
Paired with an open source repo for cleanup, it shows that most leakage can be eliminated by treating secrets as a runtime dataflow problem rather than a static scanning issue.
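
The interception step described above, as an added example that is not the authors' code: a Claude Code `PreToolUse` hook that reads the tool call as JSON on stdin (`tool_name`, `tool_input`) and exits with code 2 to block it. The file deny list and the secret-shape patterns are assumptions chosen for illustration, and the sketch covers only the blocking side, not runtime substitution of secrets.

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: keep secret material out of the model's context."""
import json
import re
import sys
from pathlib import PurePosixPath

# Assumed policy (illustrative, not taken from the original post).
SECRET_FILES = {".env", "credentials", "id_rsa", ".npmrc", ".netrc"}
SECRET_SHAPES = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS access key id
    re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),      # GitHub token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)\b(api[_-]?key|token|secret|password)\s*[=:]\s*['\"]?[A-Za-z0-9/+_\-]{16,}"),
]
PATH_TOKEN = re.compile(r"[^\s'\"|;&<>]+")


def names_secret_file(text):
    """True if any path-like token in text has a basename on the deny list."""
    return any(PurePosixPath(tok).name in SECRET_FILES for tok in PATH_TOKEN.findall(text))


def decide(event):
    """Return (exit_code, reason). Exit code 2 blocks the call; stderr goes to the model."""
    tool = event.get("tool_name", "")
    args = event.get("tool_input", {}) or {}
    if tool in {"Read", "Edit", "Write"} and names_secret_file(args.get("file_path", "")):
        return 2, "Blocked: file holds credentials; reference the variable name instead."
    if tool == "Bash" and names_secret_file(args.get("command", "")):
        return 2, "Blocked: command touches a credential file."
    # A literal secret in any string argument means one already reached the model.
    for value in args.values():
        if isinstance(value, str) and any(p.search(value) for p in SECRET_SHAPES):
            return 2, "Blocked: secret-shaped literal in tool input; pass it via the environment."
    return 0, ""


if __name__ == "__main__":
    code, reason = decide(json.load(sys.stdin))
    if reason:
        print(reason, file=sys.stderr)  # never echo the matched value itself
    sys.exit(code)
```

The basename match is deliberately coarse and will also block a command that merely mentions a deny-listed name, so a deployment would tune both lists to its own repositories.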