1 point by damienhauser 6 hours ago | 1 comment
    Hey,

    I'm an IT infra consultant (cloud, k8s, enterprise automation). I started using Claude Code last year and I love it, but I got fed up with the permission approvals, and I did not want to use --dangerously-skip-permissions.

    At the same time, a lot of my customers shared their concerns about coding agents like Claude Code and the potential security risks for the enterprise.

    So I built Veto.

    A hook for Claude Code. It plugs in directly and evaluates tool calls against your rules before they execute. Safe stuff gets auto-approved: no more clicking Allow a hundred times. Whitelisting/blacklisting rules, plus opt-in automatic AI scoring and auto-approval.
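    To make the hook idea concrete, here's a minimal sketch of the rule-evaluation step. The rule patterns, the tool-call JSON shape, and the decision values are illustrative assumptions, not Veto's actual format:

```python
import json
import re

# Hypothetical rule sets for illustration; Veto's real rule syntax and
# the exact hook JSON schema are assumptions, not from the post.
DENY = [r"rm\s+-rf", r"curl .*\|\s*(ba)?sh"]      # always block
ALLOW = [r"^git (status|diff|log)\b", r"^ls\b"]   # auto-approve

def evaluate(command: str) -> str:
    """Return 'deny', 'allow', or 'ask' for one shell command."""
    if any(re.search(p, command) for p in DENY):
        return "deny"
    if any(re.search(p, command) for p in ALLOW):
        return "allow"
    return "ask"  # fall back to the normal permission prompt

# A pre-tool-use hook would read the tool call as JSON on stdin and
# print a decision; here we just evaluate a sample call inline.
sample = {"tool_name": "Bash", "tool_input": {"command": "git status"}}
print(json.dumps({"decision": evaluate(sample["tool_input"]["command"])}))
```

    Anything that matches neither list falls through to the usual interactive prompt, so the hook only removes clicks for calls your rules already cover.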

    An LLM firewall. A proxy that sits in front of any LLM API and works with any AI coding agent that uses OpenAI or Anthropic endpoints. Same rules engine, same audit trail. Like a WAF, but for AI agents. This one is probably more for the enterprise.
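    The proxy's core check can be sketched as a policy pass over each request body before it is forwarded upstream. The tool names and rule contents below are made up for illustration; this is a sketch of the WAF-style idea, not Veto's implementation:

```python
# Hypothetical deny-list of tool names the proxy refuses to forward.
BLOCKED_TOOLS = {"execute_shell", "delete_file"}

def inspect_request(body: dict) -> tuple[bool, str]:
    """Return (forward, reason) for an OpenAI-style chat request body."""
    for tool in body.get("tools", []):
        name = tool.get("function", {}).get("name", "")
        if name in BLOCKED_TOOLS:
            return False, f"tool '{name}' is not allowed by policy"
    return True, "ok"

# Example: a request exposing a blocked tool gets rejected at the proxy.
req = {"model": "gpt-4o", "tools": [
    {"type": "function", "function": {"name": "execute_shell"}}]}
print(inspect_request(req))
```

    Because the check runs in the proxy rather than in any one agent, the same rules apply to every client pointed at the endpoint, which is where the audit trail comes from.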

    Everything gets logged with full context, with an exportable audit trail for compliance. Optional AI risk scoring for the edge cases. Team features: RBAC, shared rules, analytics.

    Been using it daily on my own projects for the last month.

    Now I want beta testers. If you use AI coding agents professionally and share the same frustration with permission approvals, or you've also thought about the security side of things, try it out and tell me what you think.

    Disclaimer: this was built with the help of Agentic Coding.