- Requiring human approval before sensitive actions go through (as @guyb3 mentioned in the post)
- Managing short-lived JWT tokens (refresh/access) with tight TTLs.
- Scoping permissions per-session rather than per-service
Auth-proxying solves the "don't give the box your API key" part. But the approval layer and token lifecycle management are what make this agent-specific, not just "SSO proxy repackaged."
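For the token-lifecycle point above, here's a minimal stdlib sketch of minting and verifying short-lived HS256 JWTs with a tight TTL (in production you'd use a maintained library like PyJWT; the claim names beyond the standard `iat`/`exp` are illustrative):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims: dict, key: bytes, ttl_seconds: int = 300) -> str:
    """Mint an HS256 JWT whose exp is a tight TTL from now."""
    now = int(time.time())
    payload = {**claims, "iat": now, "exp": now + ttl_seconds}
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, key: bytes) -> dict:
    """Check the signature, then reject expired tokens."""
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload = json.loads(
        base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    )
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

The proxy holds `key`; the agent only ever holds tokens that expire in minutes, so a leaked token is worth far less than a leaked API key.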
vault_get.sh: https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...
vault_set.sh: https://gist.github.com/sathish316/1f4e6549a8f85ac5c5ac8a088...
Blog about the full setup for OpenClaw: https://x.com/sathish316/status/2019496552419717390
The agent sees the output of the service; it does not directly see the keys. In OpenClaw, it’s possible to create the skill in a way that the agent does not directly know about or invoke the vault_get command.
We're going to see this reinvented thousands of times in the next few months by people whose understanding of security is far poorer than HashiCorp's, via implementations that are nowhere near as well-tested, if tested at all.
1) Not all systems respect HTTP_PROXY. Node in particular is very uncooperative in this regard.
2) AWS access keys can’t be handled by a simple credential swap; the requests need to be re-signed with the real keys. Replicating SigV4 and SigV4A signing exactly was a bit of a pain.
3) To be secure, this system needs to run outside of the execution sandbox so that the agent can’t just read the keys from the proxy process.
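On point 2, the re-signing boils down to re-deriving the SigV4 signing key from the *real* secret before computing the final signature. A stdlib sketch of the key-derivation chain (in practice you'd lean on botocore's signing machinery rather than hand-rolling this, and you'd also need to rebuild the canonical request, which is where most of the pain lives):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: an HMAC chain over date (YYYYMMDD),
    region, service, and the literal 'aws4_request'. The proxy runs this
    with the real secret, which the agent never sees."""
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

def sigv4_signature(signing_key: bytes, string_to_sign: str) -> str:
    """The final signature is the hex HMAC-SHA256 of the string-to-sign."""
    return hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```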
For Airut I settled on a transparent (mitm)proxy, running in a separate container, and injecting the proxy’s cert into the cert store of the container where the agent runs. This solved 1 and 3.
I essentially run a sidecar container that sets up iptables rules redirecting all requests through my mitm proxy. This was specifically required because of Node not respecting HTTP_PROXY.
Also had to inject a self-signed CA cert so TLS could be terminated by the mitm proxy, which then injects the secrets and forwards the request on.
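The sidecar setup described above might look roughly like this (ports, paths, and filenames are illustrative; exact rules depend on your container networking):

```shell
# In the sidecar (needs NET_ADMIN): redirect all outbound HTTP/HTTPS
# from the agent's network namespace to the mitm proxy on port 8080.
iptables -t nat -A OUTPUT -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-port 8080

# In the agent container: trust the proxy's self-signed CA so TLS
# termination by the mitm proxy doesn't break certificate validation.
cp /run/secrets/mitmproxy-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates

# Node uses its own bundled CA store rather than the system one,
# so it additionally needs:
export NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/mitmproxy-ca.crt
```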
Have you run into any issues with this setup? I'm trying to figure out if there's anything I'm missing that might come back to bite me.
The model is solid. It feels like the right way to use YOLO mode.
I've been working on making the auth setup more granular with macaroons and third party caveats.
My dream is to have plugins for upstreams using OpenAPI specs and then make it really easy to stitch together grants across subsets of APIs.
I think there's a product in here somewhere...
Another thing I did was to allow configuring which hosts each credential is scoped to. Replacement/re-signing doesn’t happen unless the host matches, so it’s not possible to leak keys by making requests to malicious hosts.
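The host-scoping idea amounts to a check like the following sketch (the placeholder names and scoping table are made up for illustration; in a real proxy this runs per intercepted request):

```python
from urllib.parse import urlsplit

# Hypothetical scoping table: placeholder token -> (real secret, allowed hosts).
CRED_SCOPES = {
    "PLACEHOLDER_GITHUB": ("ghp_real_token", {"api.github.com"}),
    "PLACEHOLDER_STRIPE": ("sk_live_real", {"api.stripe.com"}),
}

def substitute(url: str, headers: dict) -> dict:
    """Swap placeholder credentials for real ones, but only when the
    request host is in that credential's allowlist. Requests to any
    other host keep the (useless) placeholder."""
    host = urlsplit(url).hostname
    out = {}
    for name, value in headers.items():
        for placeholder, (real, hosts) in CRED_SCOPES.items():
            if placeholder in value and host in hosts:
                value = value.replace(placeholder, real)
        out[name] = value
    return out
```

A prompt-injected agent can still ask the proxy to call `evil.example.com` with the placeholder in a header, but since the placeholder is never replaced there, nothing of value leaks.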
I have a few questions:
- How can a proxy inject stuff if it's TLS encrypted? (same for IronClaw and others)
- Any adapters for existing secret stores? Like, maybe my fake credential could be a 1Password entry path (like 1Password:vault-name/entry/field) and it would pull from 1Password, instead of me having yet another place to store secrets?
Re TLS: OneCLI itself runs in a separate container, acting as an HTTPS proxy. The SDK auto-configures agent containers with proxy env vars + a local CA cert. When the agent hits an intercepted domain, OneCLI terminates TLS, swaps placeholder tokens for real creds, and forwards upstream. Containers never touch actual keys.
More here: https://www.onecli.sh/docs/sdks/node#how-it-works
Re 1Password adapters: not yet, but on the roadmap.
This should be solved by the vaults (HashiCorp Vault / AWS Secrets Manager).
The one thing that I did build was based on a service that AWS provides (AWS STS), which handles temporary, time-bound creds out of the box.
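As a sketch of that pattern, here's a small helper that builds AssumeRole parameters with a tight TTL clamped to STS's allowed range (900–43200 seconds); you'd pass these to boto3's `sts.assume_role()`. The role ARN and session name are illustrative:

```python
STS_MIN_TTL = 900      # STS minimum DurationSeconds
STS_MAX_TTL = 43200    # STS maximum (the role's own cap may be lower)

def assume_role_params(role_arn: str, session_name: str, ttl: int = 900) -> dict:
    """Clamp the requested TTL into STS's allowed range so agents only
    ever receive short-lived, auto-expiring credentials."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": max(STS_MIN_TTL, min(ttl, STS_MAX_TTL)),
    }
```

Even if the temporary credentials leak, they expire on their own; nothing long-lived ever reaches the agent.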
It seems the only sound solution is to have a sidecar attached to the agent and have the sidecar authenticate with the gateway using mTLS. The sidecar manages its own TLS key - the agent never has access to it.
So how does that help exactly? The agent can still do exactly what it could have done if it had the real key.
Otherwise this is cool, we need more competition here.
https://github.com/onecli/onecli/blob/942cfc6c6fd6e184504e01...
Sorry but am I missing something here?
But that’s not the biggest risk of giving credentials to agents. If they can still make arbitrary API calls, they can still cost money or cause security problems or delete production.
If you’re worried about creds leakage only because your credentials are static and permanent, well, time to upgrade your secrets architecture.
---
If this is of interest, I also recommend looking into: https://github.com/loderunner/scrt.
To me, it's a complement to 1Password.
I use it to save every new secret/api key I get via the CLI.
It's intentionally very feature limited.
Haven't tried it with agents, but wouldn't be surprised if the CLI (as is) would be enough.
What are you suggesting? The program makes a call to retrieve the secret from AWS, then has full access to do whatever it wants with it? That's exactly the risk this solution, and the related ones mentioned in this thread, is trying to solve.
Vault protects keys at rest, but the agent still gets them at runtime. The proxy keeps the key away from the agent entirely, which closes key leakage. But a prompt-injected agent can still exfiltrate data it reads through the proxy. The trust boundary shifts, it doesn't disappear.
Looks like OneCLI combines both into one tool, which is the right call.