Others do checks at the tool level; systems like OpenFGA can make that easier by centralizing the authorization policies.
The key insight: policy evaluation has to happen outside the agent's context. If the agent can reason about or around the policy, it's not really enforcement. So we treat it like a firewall - deterministic rules, no LLM in the decision path.
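A minimal sketch of what "firewall, not LLM" looks like in practice: a deterministic gate that matches tool names against an ordered rule list, default-deny. All the names here (`PolicyGate`, `Rule`) are illustrative, not from any particular framework.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    tool_glob: str   # glob over the tool name, e.g. "github.*"
    effect: str      # "allow" or "deny"

class PolicyGate:
    """Deterministic check, evaluated outside the agent's context."""
    def __init__(self, rules):
        self.rules = rules

    def check(self, tool: str) -> bool:
        # First matching rule wins; nothing matches -> default deny.
        for rule in self.rules:
            if fnmatch(tool, rule.tool_glob):
                return rule.effect == "allow"
        return False

gate = PolicyGate([
    Rule("github.read_*", "allow"),
    Rule("github.*", "deny"),
])
```

The point is that the agent never sees this code path; it only sees the allow/deny result, so there is nothing for it to reason around.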
What we've found works:

- Argument-level rules, not just tool-level ("github.delete_branch is fine, but only for feature/* branches")
- Rate limits that reset on different windows (per-minute for burst, per-day for cost)
- Explicit rule priority for when constraints conflict
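The three ideas above compose naturally: rules carry a priority, match on arguments via a predicate, and sit next to windowed rate limiters. A hedged sketch, with the feature/* branch example from above (all names are assumptions):

```python
import time
from fnmatch import fnmatch

class RateLimit:
    """Sliding-window limiter; one instance per window (per-minute, per-day)."""
    def __init__(self, max_calls: int, window_s: float):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls = []

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

def check_call(tool: str, args: dict, rules) -> bool:
    # rules: (priority, tool_glob, predicate_over_args, effect)
    # Highest priority wins; default deny if nothing matches.
    for _, tool_glob, pred, effect in sorted(rules, key=lambda r: -r[0]):
        if fnmatch(tool, tool_glob) and pred(args):
            return effect == "allow"
    return False

rules = [
    # Argument-level: deletes allowed only on feature/* branches.
    (10, "github.delete_branch",
     lambda a: a.get("branch", "").startswith("feature/"), "allow"),
    # Lower-priority catch-all denies every other delete_branch call.
    (1, "github.delete_branch", lambda a: True, "deny"),
]
```

Keeping predicates as plain functions over the argument dict keeps evaluation deterministic and auditable; in a real system you would likely express them in a declarative policy language instead.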
The audit trail piece is critical too. Being able to answer "why was this blocked?" after the fact builds trust with teams rolling this out.
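For the audit side, recording which rule fired alongside each decision is what makes "why was this blocked?" answerable later. A sketch of one possible audit record (field names are my own, not a standard):

```python
import json, time

def audit(tool: str, args: dict, allowed: bool, rule_id: str, sink: list) -> dict:
    """Append a structured decision record to an audit sink."""
    record = {
        "ts": time.time(),
        "tool": tool,
        "args": args,
        "decision": "allow" if allowed else "deny",
        "rule": rule_id,  # the rule that fired -- the "why" for later review
    }
    sink.append(json.dumps(record))
    return record

log = []
rec = audit("github.delete_branch", {"branch": "main"},
            False, "deny-delete-default", log)
```

In production the sink would be an append-only log rather than a list, but the key design choice is the same: the rule identifier travels with every decision.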
Curious what failure modes people have actually hit - is it more "agent tried something it shouldn't" or "policy was too restrictive and blocked legitimate work"?