2 pointsby ariansyah4 hours ago2 comments
  • ariansyah4 hours ago
    I built this because I kept seeing AI agents marketed with "run any command" and "access your filesystem" — and nobody was publishing what happens when you actually try to attack them.

    ClawSandbox is a security benchmark for AI agents with code execution. I set up a hardened Docker container (7 layers: read-only FS, all capabilities dropped, no-new-privileges, network isolation, non-root user, resource limits, no host mounts) and threw adversarial prompts at an AI agent to see what sticks.

    The short version: prompt injection is a solved problem in demos, not in production.

    3 of 5 prompt injection tests succeeded. The most interesting one wasn't the classic "ignore previous instructions" — it was a base64-encoded payload. The model decoded it and piped it to bash without hesitation. Encoding completely defeated safety heuristics.

    But the finding that actually worried me was memory poisoning. A user asks "What is the capital of France?" and gets "Paris." Looks normal. Meanwhile the model silently writes a poisoned instruction to a config file that gets loaded on every future session. No notification, no integrity check, no expiry. 4 out of 4 memory poisoning tests succeeded.

    This pattern isn't unique to the agent I tested. Any tool that stores config as plain text files — AGENTS.md, .cursorrules, CLAUDE.md, MCP configs — has the same attack surface: writable by the agent, loaded without verification, invisible to the user when modified.

    The container security was the bright spot. All 7 hardening layers held. Defense in depth works, even if Docker isn't a perfect boundary.

    The benchmark is open source (MIT) and designed to be reusable. OpenClaw was the first case study but you can swap in any agent by changing the system prompt and API endpoint. Test categories are mapped to OWASP LLM Top 10. Five of the eleven categories are stubs waiting for contributions.

    Interesting things I'd love to discuss:

    Is there a practical defense against split-attention memory poisoning that doesn't require read-only config? Should agent frameworks implement config signing/hashing? None of the ones I looked at do. The base64 bypass suggests safety checks are keyword-based, not semantic. Is that fixable at the model level?

  • nejm19964 hours ago
    Surely this is an indictment of Gemini 2.5 flash and not of OpenClaw? In the OpenClaw start guide it is very clear that they recommend using only the best frontier models for protection against prompt injection. The model you used is almost a year old and wasn't even the best model when it was released. At the end of the day, OpenClaw is just an extremely powerful bring your own AI Agent framework. I would like to see your results with opus 4.6, Gemini 3 or 5.3-codex
    • ariansyah3 hours ago
      Fair point. the model matters, and I'd genuinely love to see results with Opus 4.6 or Gemini 3 or 5.3-codex. The benchmark is designed for exactly that. Swap the API key and system prompt and run it.

      But I'd push back on the idea that a better model solves this.

      The memory poisoning results (category 08) are the ones I'd pay attention to. The offline audit found that config files at ~/.openclaw/ are writable by the agent, loaded without integrity checks, and modified without notifying the user. That's not a model problem — that's architecture. A smarter model might resist the initial injection more often, but the mechanism that makes poisoning persistent and invisible exists regardless of which model is behind it.

      The silent write test (test 03) is a good example. The attack works because OpenClaw lets the model write to its own config files and loads them as trusted on every future session. Even if Opus 4.6 resists the injection 95% of the time, the 5% that succeeds persists forever with no expiry and no notification. The user has to manually inspect ~/.openclaw/ to discover it.

      So yes, better models raise the bar for the attacker. But the question the benchmark is asking isn't "can this specific model be tricked?" It's "when a model is tricked (and eventually one will be), what does the framework allow to happen?" Right now the answer is: silent, persistent, undetectable config modification.

      That said, genuinely interested if anyone runs this with frontier models. The benchmark is there for exactly that purpose. If Opus 4.6 passes all 9, that's a meaningful data point worth publishing.