1 point by echo_os 3 hours ago | 1 comment
  • echo_os 3 hours ago
    I kept running into the same issue building agents with tool access: nothing enforces what an agent can or can't execute. The LLM picks a tool, and it runs; prompt-level restrictions are suggestions at best. So I built a small guard that sits between LLM output and tool execution. You define a YAML policy per agent identity listing allowed actions only; anything not listed gets denied before it touches the runtime. Install with: pip install agent-execution-guard. Key decisions:

    - Default DENY for unknown agents and actions
    - No model inference in the enforcement path; purely deterministic
    - Every evaluation returns an auditable decision record
    - Offline, no external calls
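
    To make those decisions concrete, here's a minimal sketch of what a default-deny guard with auditable decision records could look like. This is my own illustration, not the actual agent-execution-guard API; the class and field names (Guard, Decision, evaluate) are hypothetical, and in practice the policy dict would be loaded from the per-agent YAML file.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical sketch; names are illustrative, not the library's real API.

    @dataclass
    class Decision:
        """Auditable record of a single policy evaluation."""
        agent: str
        action: str
        allowed: bool
        reason: str
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    class Guard:
        def __init__(self, policies: dict[str, set[str]]):
            # Maps agent identity -> set of explicitly allowed action names.
            # In the real tool this would come from the YAML policy files.
            self.policies = policies

        def evaluate(self, agent: str, action: str) -> Decision:
            # Purely deterministic: dict and set lookups only,
            # no model inference, no network calls.
            if agent not in self.policies:
                return Decision(agent, action, False, "unknown agent: default deny")
            if action not in self.policies[agent]:
                return Decision(agent, action, False, "action not in allow-list")
            return Decision(agent, action, True, "explicitly allowed")

    guard = Guard({"billing-agent": {"read_invoice", "send_email"}})
    print(guard.evaluate("billing-agent", "delete_db").allowed)   # False
    print(guard.evaluate("rogue-agent", "read_invoice").allowed)  # False
    ```

    The point of returning a structured Decision instead of a bare bool is that every deny (and allow) carries its own reason and timestamp, so the audit trail falls out of the enforcement path for free.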

    No dependencies beyond cryptography. Interested in feedback on whether the policy schema makes sense and what's missing for real use cases.
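
    For anyone wanting to comment on the schema question: here is a rough guess at what a per-agent policy file in this style might look like. The field names below are my own invention for discussion, not the tool's actual schema.

    ```yaml
    # Hypothetical policy for one agent identity; field names are illustrative.
    agent: billing-agent
    default: deny            # anything not listed below is rejected
    allowed_actions:
      - read_invoice
      - send_email
    ```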