2 points by colinfly, 12 hours ago | 1 comment
  • colinfly, 12 hours ago
    Built an eval toolkit for AI agents that goes beyond pass/fail scoring. Define test suites in YAML, use Claude as an LLM judge, then automatically analyze why your agent fails and turn those failures into training data.

    The main loop:

    - Define test cases with expected answers and weighted criteria
    - Run against any agent (HTTP endpoint, CLI command, or Python callable)
    - Claude judges each response on your criteria (0-100 per criterion)
    - Root cause analysis finds patterns across failures (knowledge gaps, prompt issues, missing sources)
    - Failure mining classifies each failure and uses an LLM to rewrite bad answers
    - Export as DPO/SFT/OpenAI fine-tuning JSONL

    The RCA piece is what I think is most useful. Instead of just seeing "5 tests failed," you get things like "Agent consistently fabricates refund policies because no refund documentation exists in the knowledge base," with specific fix recommendations.
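To make the judging step concrete, here's a rough sketch of weighted criterion scoring — all names are illustrative, not cane-eval's actual API, and in the real tool Claude supplies the per-criterion scores:

```python
# Sketch of the scoring step: a judge returns a 0-100 score per criterion,
# and the weighted average decides pass/fail against a threshold.
# Function and field names here are hypothetical, not cane-eval's API.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100) using normalized weights."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

def judge(scores: dict[str, float], weights: dict[str, float],
          threshold: float = 60.0) -> tuple[float, bool]:
    """Return the overall score and whether it clears the pass threshold."""
    overall = weighted_score(scores, weights)
    return overall, overall >= threshold

# Example: scores as an LLM judge might return them
scores = {"accuracy": 90, "cites_sources": 30, "tone": 80}
weights = {"accuracy": 0.5, "cites_sources": 0.3, "tone": 0.2}
overall, passed = judge(scores, weights, threshold=60)
print(f"{overall:.1f} {'PASS' if passed else 'FAIL'}")  # → 70.0 PASS
```

The point of the weighting is that a response can score well on tone but still fail overall if a heavily weighted criterion like accuracy tanks.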

    CLI:

    pip install cane-eval
    cane-eval run tests.yaml
    cane-eval rca tests.yaml --threshold 60
    cane-eval run tests.yaml --mine --export dpo
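For a sense of what the `tests.yaml` those commands consume might look like — this schema is my guess at the shape from the description above, not the toolkit's documented format, so check the repo for the real thing:

```yaml
# Hypothetical test suite shape (see the cane-eval repo for the real schema)
agent:
  type: http            # or: cli, python
  url: http://localhost:8000/chat
tests:
  - name: refund-policy
    prompt: "What is your refund policy?"
    expected: "Refunds are available within 30 days for unused items."
    criteria:           # weight per criterion; the judge scores each 0-100
      accuracy: 0.5
      grounding: 0.3
      tone: 0.2
```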

    GitHub: https://github.com/colingfly/cane-eval

    MIT licensed, pure Python, uses the Anthropic API. Happy to answer questions about the approach.
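One note on the export side: DPO-style JSONL conventionally pairs each prompt with a chosen and a rejected response, so mining a failure means treating the original bad answer as "rejected" and the LLM rewrite as "chosen." A minimal sketch of that transformation, with made-up record fields (not cane-eval's actual output):

```python
import json

# Each mined failure: the original failing answer plus an LLM-rewritten one.
# Field names are illustrative, not cane-eval's real schema.
failures = [
    {
        "prompt": "What is your refund policy?",
        "bad_answer": "We offer lifetime refunds on everything.",  # fabricated by the agent
        "rewritten": "Refunds are available within 30 days for unused items.",
    },
]

def to_dpo_jsonl(records: list[dict]) -> str:
    """Emit one JSON line per record in the common DPO prompt/chosen/rejected shape."""
    lines = []
    for r in records:
        lines.append(json.dumps({
            "prompt": r["prompt"],
            "chosen": r["rewritten"],
            "rejected": r["bad_answer"],
        }))
    return "\n".join(lines)

print(to_dpo_jsonl(failures))
```

This prompt/chosen/rejected layout is what most DPO training libraries expect, which is presumably why it's one of the export targets.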