Core components: EnforceAuth uses a distributed control plane that stores policies in Git and compiles them to WebAssembly. Sidecars or SDKs fetch the compiled policies via gRPC and cache them locally, so each decision is evaluated in milliseconds against the full request context (identity, resource, action, environment). If the control plane is unreachable, sidecars keep enforcing the last known policy.
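To make the caching and fail-static behaviour concrete, here is a minimal Go sketch of that decision path. Every type and name here is a hypothetical stand-in, not the actual EnforceAuth SDK; the point is that a failed refresh leaves the last known policy in force.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// DecisionInput carries the context a policy is evaluated against.
type DecisionInput struct {
	Identity string
	Resource string
	Action   string
}

// CompiledPolicy stands in for a policy module fetched from the control plane.
type CompiledPolicy struct {
	Version string
	Allow   func(DecisionInput) bool
}

// PolicyCache holds the last known policy and keeps serving it when a refresh fails.
type PolicyCache struct {
	mu     sync.RWMutex
	policy *CompiledPolicy
}

func (c *PolicyCache) Refresh(fetch func() (*CompiledPolicy, error)) {
	p, err := fetch()
	if err != nil {
		return // control plane unreachable: keep enforcing the cached policy
	}
	c.mu.Lock()
	c.policy = p
	c.mu.Unlock()
}

func (c *PolicyCache) Evaluate(in DecisionInput) (bool, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if c.policy == nil {
		return false, errors.New("no policy cached yet: deny by default")
	}
	return c.policy.Allow(in), nil
}

func main() {
	cache := &PolicyCache{}
	cache.Refresh(func() (*CompiledPolicy, error) {
		return &CompiledPolicy{
			Version: "v1",
			Allow:   func(in DecisionInput) bool { return in.Identity == "admin" },
		}, nil
	})
	// A later refresh fails; the v1 policy remains in effect.
	cache.Refresh(func() (*CompiledPolicy, error) {
		return nil, errors.New("control plane unreachable")
	})
	allowed, err := cache.Evaluate(DecisionInput{Identity: "admin", Resource: "db", Action: "read"})
	fmt.Println(allowed, err)
}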
Migration: Existing OPA or Styra DAS policies can be imported directly (see enforceauth.com). Our migration layer mirrors requests to EnforceAuth while your current system stays in place and remains authoritative; when you're comfortable, flip traffic over and remove the old system. No rewrites required. A sketch of what that mirroring phase could look like follows.
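This is a minimal Go sketch of the shadow-evaluation idea, assuming the application can reach both systems during the mirroring phase. legacyCheck and enforceAuthCheck are hypothetical stand-ins for your existing authorizer and our SDK.

package main

import (
	"fmt"
	"log"
)

// legacyCheck stands in for your current authorization system.
func legacyCheck(user, action, resource string) bool {
	return user == "admin"
}

// enforceAuthCheck stands in for a call through the EnforceAuth SDK.
func enforceAuthCheck(user, action, resource string) bool {
	return user == "admin"
}

// authorize enforces the legacy decision and shadow-evaluates EnforceAuth,
// logging any divergence so policy gaps can be fixed before cutover.
func authorize(user, action, resource string) bool {
	legacy := legacyCheck(user, action, resource)
	// In production the shadow call would be async and sampled; it is
	// inline here for clarity. Its result is logged, never enforced.
	if shadow := enforceAuthCheck(user, action, resource); shadow != legacy {
		log.Printf("decision mismatch: legacy=%v enforceauth=%v user=%s action=%s resource=%s",
			legacy, shadow, user, action, resource)
	}
	return legacy
}

func main() {
	fmt.Println(authorize("admin", "read", "db"))
}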
AI guardrails example
We model AI agents as identities with roles and attributes. Here's a simple Rego example with three rules: admins are always allowed, other users get role-based permissions, and AI agents are additionally gated behind a trust level above 2:
package authz

default allow = false

# Admins always allowed
allow {
    input.user.role == "admin"
}

# Role-based permissions
allow {
    perm := data.permissions[input.user.role][_]
    perm.action == input.action
    perm.resource == input.resource
}

# AI agent guardrail: the agent's role must grant the permission
# and its trust level must exceed 2
allow {
    input.agent != null
    input.agent.trust_level > 2
    perm := data.permissions[input.agent.role][_]
    perm.action == input.action
    perm.resource == input.resource
}
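Because the policy is plain Rego, you can sanity-check it locally with OPA's Go SDK before importing it. The sketch below embeds just the guardrail rule plus an illustrative permissions table, and feeds in an AI agent with trust_level 3; it assumes the pre-1.0 OPA module path and classic Rego syntax.

package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
	"github.com/open-policy-agent/opa/storage/inmem"
)

const module = `
package authz

default allow = false

allow {
    input.agent != null
    input.agent.trust_level > 2
    perm := data.permissions[input.agent.role][_]
    perm.action == input.action
    perm.resource == input.resource
}
`

func main() {
	ctx := context.Background()

	// Illustrative permissions table: the "reporter" role may read reports.
	store := inmem.NewFromObject(map[string]interface{}{
		"permissions": map[string]interface{}{
			"reporter": []interface{}{
				map[string]interface{}{"action": "read", "resource": "reports"},
			},
		},
	})

	pq, err := rego.New(
		rego.Query("data.authz.allow"),
		rego.Module("authz.rego", module),
		rego.Store(store),
	).PrepareForEval(ctx)
	if err != nil {
		panic(err)
	}

	// An AI agent with sufficient trust asking to read reports.
	input := map[string]interface{}{
		"action":   "read",
		"resource": "reports",
		"agent":    map[string]interface{}{"role": "reporter", "trust_level": 3},
	}

	rs, err := pq.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", rs.Allowed()) // true
}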
Observability & integrations
Every decision is exported as metrics via Prometheus/OpenTelemetry, and decision logs can be shipped to your SIEM or data lake for analytics. SDKs are available for Go, Python, and Java; Rust and Node are on the roadmap.
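As an illustration of the Prometheus side, here is a small Go sketch using the standard client_golang library. The metric name is made up for the example, not EnforceAuth's actual exported series.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// decisions counts authorization outcomes, labelled allow/deny.
var decisions = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "authz_decisions_total",
		Help: "Authorization decisions by outcome.",
	},
	[]string{"outcome"},
)

func main() {
	prometheus.MustRegister(decisions)

	// Record an outcome wherever a decision is evaluated.
	decisions.WithLabelValues("allow").Inc()
	decisions.WithLabelValues("deny").Inc()

	// Expose /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}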
Questions for the community
How are you approaching authorization for AI agents? Are you using OPA or home‑grown logic?
Would a gradual migration path help you adopt unified authorization?
What languages/frameworks should we prioritise for SDK support?
Thanks for reading; I’m keen to hear your experiences.