3 points by Mitchem 9 hours ago | 3 comments
  • Mitchem 9 hours ago
    Hey HN, author here.

    The thing that makes this work is where in the loop the review happens. CodeRabbit, Greptile, etc. review at the PR level, after the agent is done. The findings go to a human who has to interpret them; the agent that wrote the code never sees the critique. We find that most people just spin up a new agent and ask "Are these review findings correct?" anyway.

    Saguaro reviews during the agent's session and sends findings back to the same agent. Because the agent still has its full context window and knows why it made each decision, it can evaluate the findings intelligently: "I made this choice for X reason, but this review shows a gap in my thinking, let me fix that." Or "This finding isn't relevant because of Y." The agent has the context to make that judgment call, which is why false positives are lower.

    The daemon is completely invisible to the user. It self-spawns from the Claude Code stop hook, runs a SQLite-backed job queue on localhost, and auto-shuts down after 30 minutes idle. The review happens in the background while the user keeps working. We feed context from the original programming session into the review process. The findings surface on the next stop hook, and your agent just starts fixing things.
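    For anyone unfamiliar with Claude Code stop hooks: they're configured in `.claude/settings.json`. The `sag` subcommand below is my guess at the wiring, not Saguaro's documented setup; the surrounding structure is the standard hooks schema:

    ```json
    {
      "hooks": {
        "Stop": [
          {
            "hooks": [
              { "type": "command", "command": "sag hook stop" }
            ]
          }
        ]
      }
    }
    ```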

    For teams that want more precision, there's a rules engine: markdown files with YAML frontmatter that enforce specific patterns (architectural boundaries, security invariants, etc.). But the daemon works with zero rules out of the box; the rules engine is for teams that already have well-defined conventions.
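    The rule-file schema isn't spelled out in this thread, so the frontmatter keys below are invented for illustration, but the shape (YAML frontmatter plus a markdown body) matches the description:

    ```markdown
    ---
    # Hypothetical frontmatter keys; Saguaro's actual schema may differ.
    name: no-http-in-domain-layer
    severity: error
    paths:
      - "src/domain/**"
    ---

    Domain-layer modules must not import HTTP clients directly.
    Network access belongs in the adapter layer.
    ```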

    Some technical decisions:

    - SQLite (via better-sqlite3) as the job queue: the right amount of infrastructure for a local dev tool.
    - The daemon reviewer gets the original agent's summary ("the developer described their work as...") for context.
    - The review agent gets read-only tools (Read, Glob, Grep) with up to 15 tool calls per review; it can inspect the full codebase for context but can't edit.
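    Saguaro's queue lives in Node via better-sqlite3; purely to illustrate the claim-and-run pattern, here's a sketch using Python's stdlib sqlite3. Table and column names are made up, and the real daemon would use an on-disk file rather than `:memory:`:

    ```python
    import sqlite3

    # Hypothetical schema for illustration only; Saguaro's real layout isn't shown.
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE IF NOT EXISTS jobs (
            id INTEGER PRIMARY KEY,
            payload TEXT NOT NULL,
            status TEXT NOT NULL DEFAULT 'pending'
        )
    """)

    def enqueue(payload: str) -> int:
        # Insert a pending review job; `with db` commits the transaction.
        with db:
            cur = db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
        return cur.lastrowid

    def claim():
        # Atomically take the oldest pending job and mark it running.
        with db:
            row = db.execute(
                "SELECT id, payload FROM jobs WHERE status = 'pending' "
                "ORDER BY id LIMIT 1"
            ).fetchone()
            if row is None:
                return None
            db.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        return row

    job_id = enqueue("review session abc123")
    print(claim())  # -> (1, 'review session abc123')
    ```

    A single SQLite file gives you durable ordering and atomic claims with no broker process, which is the appeal for a localhost daemon.
    
    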

    Limitations:

    - The daemon review is async. Findings arrive on the next stop hook, not the current one, so fast iterations may miss a cycle.
    - Review quality depends on the model. We default to your configured model, but you can override it for the daemon specifically.
    - Cost is your normal AI provider usage; `sag stats` tracks it.

    Happy to answer technical questions about the architecture.

  • A7OM 9 hours ago
    Nice work! Claude Code users will love knowing they can also add our MCP server to get live inference pricing directly in their workflow. Useful for cost-aware agent development. a7om.com/mcp
  • prallo 9 hours ago
    co-creator of Saguaro here. just wanted to add that you can configure Saguaro to run only a rules review, only a daemon background review, or both after every pass your coding agent makes. We've seen the rules review complete in anywhere from 1 to 5 seconds on average.