The Interceptor: We hook into OpenClaw's before_tool_call execution loop, so the LLM has no idea the security layer exists.

The Sidecar Gate: The tool request is routed to the local Rust daemon, which evaluates the intent against a deterministic YAML policy (e.g., blocking rm -rf, allowing fs.read only in ./src). It fails closed by default.

The TUI: The daemon ships with a terminal UI to monitor all agent requests, allows, and denies in real time.

I built this in Rust to get strict memory safety with <1ms of latency overhead. It compiles to a static binary and drops into existing projects with zero friction.
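As a rough sketch of what the gate's fail-closed evaluation means in practice (the ToolRequest/Verdict types and the rule logic here are hypothetical illustrations, not predicate-claw's actual schema):

```rust
use std::path::Path;

/// Illustrative verdict type; the real daemon's policy schema may differ.
#[derive(Debug)]
enum Verdict {
    Allow,
    Deny(&'static str),
}

/// A tool call intercepted before execution (hypothetical shape).
struct ToolRequest<'a> {
    tool: &'a str,     // e.g. "shell.exec" or "fs.read"
    argument: &'a str, // command line or file path
}

fn evaluate(req: &ToolRequest) -> Verdict {
    match req.tool {
        // Block destructive shell commands outright.
        "shell.exec" if req.argument.contains("rm -rf") => {
            Verdict::Deny("destructive command blocked by policy")
        }
        "shell.exec" => Verdict::Allow,
        // Only allow reads inside ./src.
        // (No canonicalization here; a real gate would resolve the
        // path first to defeat ../ traversal.)
        "fs.read" => {
            if Path::new(req.argument).starts_with("./src") {
                Verdict::Allow
            } else {
                Verdict::Deny("read outside ./src")
            }
        }
        // Fail closed: any tool the policy doesn't know is denied.
        _ => Verdict::Deny("no matching rule"),
    }
}

fn main() {
    let ok = ToolRequest { tool: "fs.read", argument: "./src/main.rs" };
    let bad = ToolRequest { tool: "shell.exec", argument: "rm -rf /" };
    println!("{:?}", evaluate(&ok));
    println!("{:?}", evaluate(&bad));
}
```

The key property is the final match arm: unknown tools are denied, so a policy gap degrades to a block rather than a bypass.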
Link to GitHub Repo: https://github.com/PredicateSystems/predicate-claw
Demo (GIF): https://github.com/PredicateSystems/predicate-claw/blob/main...
We already use deterministic post-execution verification for our web agents (DOM snapshot diffing, strictly avoiding the 'LLM-as-judge' trap). Next on the roadmap is bringing that same verifiable state-hashing to the OS level. I'd love to hear your thoughts on the architecture and how you're currently handling local agent sandboxing.

Note: If you aren't using OpenClaw, our core engine also supports Python frameworks like LangChain and browser-use in 3 lines of code.
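To make the state-hashing idea concrete, here's a minimal sketch of deterministic post-execution verification: hash a canonicalized state snapshot and compare hashes, with no LLM judgment in the loop. The snapshot_hash function is illustrative, and a production system would use a stable cryptographic hash rather than Rust's DefaultHasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash a snapshot of (path, content) pairs. Sorting first gives a
/// canonical ordering, so identical states always hash identically.
fn snapshot_hash(entries: &[(String, String)]) -> u64 {
    let mut sorted: Vec<_> = entries.to_vec();
    sorted.sort();
    let mut h = DefaultHasher::new();
    sorted.hash(&mut h);
    h.finish()
}

fn main() {
    let before = vec![("src/main.rs".to_string(), "fn main() {}".to_string())];
    let mut after = before.clone();
    after.push(("Cargo.lock".to_string(), "...".to_string()));

    // Verification is a plain comparison: an unchanged state matches,
    // and any mutation is detected deterministically.
    assert_eq!(snapshot_hash(&before), snapshot_hash(&before.clone()));
    assert_ne!(snapshot_hash(&before), snapshot_hash(&after));
    println!("verification checks passed");
}
```

The same pattern generalizes from DOM snapshots to OS state: capture, canonicalize, hash, compare.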
You can read the full architecture and see our enterprise fleet management here: https://predicatesystems.ai/docs/vault