Hi HN, I built AgentBouncr because I couldn't find a governance layer for AI agents that actually enforces rules at runtime rather than just observing what happened after the fact.

The problem: AI agents call tools autonomously. Most frameworks (LangChain, CrewAI, Vercel AI SDK) focus on making agents more capable, but none of them answers "what is this agent allowed to do?" with deterministic enforcement.

AgentBouncr sits between the agent and its tools. Every tool call goes through a policy check before execution. Every decision is logged in a tamper-evident audit trail (a SHA-256 hash chain). If something goes wrong, a kill switch stops all agent actions in <100ms.

Core components:

- Permission Layer: agents can only use tools their policy allows
- Policy Engine: JSON-based rules with conditions, rate limits, and approval workflows
- Audit Trail: append-only, hash-chained, W3C Trace Context (see the sketch just below)
- Injection Detection: pattern-based, bilingual (EN/DE), configurable
- Kill Switch: deterministic, works even when the LLM API is down
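To make the hash-chained audit trail concrete, here is a minimal sketch of the underlying idea, not AgentBouncr's actual implementation; the entry shape and field names are my own assumptions. Each entry commits to the SHA-256 hash of the previous one, so any retroactive edit breaks every subsequent link:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of an audit entry; the real schema may differ.
interface AuditEntry {
  timestamp: string;
  agentId: string;
  tool: string;
  decision: "allow" | "deny";
  prevHash: string; // SHA-256 hash of the previous entry
  hash: string;     // SHA-256 hash of this entry's contents + prevHash
}

const GENESIS_HASH = "0".repeat(64);

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Append-only log: each new entry is chained to the one before it.
function appendEntry(
  log: AuditEntry[],
  event: Omit<AuditEntry, "prevHash" | "hash">
): AuditEntry {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS_HASH;
  const hash = sha256(JSON.stringify(event) + prevHash);
  const entry = { ...event, prevHash, hash };
  log.push(entry);
  return entry;
}

// Verification: recompute every hash; a tampered entry invalidates the chain.
function verifyChain(log: AuditEntry[]): boolean {
  let prevHash = GENESIS_HASH;
  for (const { hash, prevHash: claimed, ...event } of log) {
    if (claimed !== prevHash) return false;
    if (sha256(JSON.stringify(event) + prevHash) !== hash) return false;
    prevHash = hash;
  }
  return true;
}
```

Because each hash covers the previous hash, verification only needs a single forward pass, and changing any field in any entry is detectable from that point on.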

Tech: TypeScript, 1,264 tests (671 framework + 593 enterprise), SQLite by default (zero config), PostgreSQL for production. Vendor-agnostic: it works with any agent framework. The core framework is source-available (Elastic License v2): you can use it, modify it, and embed it. The enterprise layer (dashboard, API server, compliance reports) is commercial. I built this as a solo founder. The EU AI Act (enforcement starts August 2026) was the initial motivation, but the governance gap exists globally.
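To give a feel for the enforcement model described above, here is a rough sketch of a policy gate sitting between an agent and its tools. This is illustrative code in the spirit of the project, not the published @agentbouncr/core API; every name below (Policy, checkPolicy, guardedCall) is hypothetical, and the rate limiting is deliberately simplified:

```typescript
// Hypothetical policy shape inspired by the description above
// (allowed tools, rate limits, approval workflows); not the real schema.
interface Policy {
  allowedTools: string[];
  rateLimitPerMinute: number;
  requireApproval: string[]; // tools that need a human in the loop
}

type Decision = { allow: true } | { allow: false; reason: string };

// Simplified counter; a real system would reset this window every minute.
const callCounts = new Map<string, number>();

function checkPolicy(policy: Policy, agentId: string, tool: string): Decision {
  if (!policy.allowedTools.includes(tool)) {
    return { allow: false, reason: `tool "${tool}" not in policy` };
  }
  const count = (callCounts.get(agentId) ?? 0) + 1;
  callCounts.set(agentId, count);
  if (count > policy.rateLimitPerMinute) {
    return { allow: false, reason: "rate limit exceeded" };
  }
  if (policy.requireApproval.includes(tool)) {
    return { allow: false, reason: "pending human approval" };
  }
  return { allow: true };
}

// The gate: every tool call is checked before execution, and the decision
// would be written to the hash-chained audit log sketched earlier.
async function guardedCall<T>(
  policy: Policy,
  agentId: string,
  tool: string,
  exec: () => Promise<T>
): Promise<T> {
  const decision = checkPolicy(policy, agentId, tool);
  // appendEntry(auditLog, { ...decision details... })
  if (!decision.allow) throw new Error(`Blocked by policy: ${decision.reason}`);
  return exec();
}
```

The key property is that the check runs before the tool executes and is pure pattern matching over the policy, so the outcome is deterministic regardless of what the LLM produces.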

Site: https://agentbouncr.com
Code: https://github.com/agentbouncr/agentbouncr
npm: npm install @agentbouncr/core

Happy to answer any questions about the architecture, licensing decisions, or the EU AI Act angle.