The main loop:
1. Define test cases with expected answers and weighted criteria
2. Run against any agent (HTTP endpoint, CLI command, or Python callable)
3. Claude judges each response on your criteria (0-100 per criterion)
4. Root cause analysis finds patterns across failures (knowledge gaps, prompt issues, missing sources)
5. Failure mining classifies each failure and uses an LLM to rewrite bad answers
6. Export as DPO/SFT/OpenAI fine-tuning JSONL

The RCA piece is what I think is most useful. Instead of just seeing "5 tests failed," you get findings like "Agent consistently fabricates refund policies because no refund documentation exists in the knowledge base," with specific fix recommendations.
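To make the judging step concrete, here's a minimal sketch of weighted-criteria scoring in generic Python. This is an illustration of the idea, not cane-eval's actual internals; the function and variable names are hypothetical:

```python
# Hypothetical sketch of weighted-criteria scoring: the judge model assigns
# each criterion a 0-100 score; the overall score is the weighted mean.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion judge scores (0-100) using criterion weights."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in weights) / total_weight

# Example: accuracy weighted twice as heavily as tone.
scores = {"accuracy": 40.0, "tone": 90.0}
weights = {"accuracy": 2.0, "tone": 1.0}
overall = weighted_score(scores, weights)  # (40*2 + 90*1) / 3 ≈ 56.7
passed = overall >= 60  # a pass threshold like the --threshold flag
```

A test that nails one criterion but fails a heavily weighted one still fails overall, which is what lets RCA surface criterion-level patterns rather than just pass/fail counts.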
CLI:
    pip install cane-eval
    cane-eval run tests.yaml
    cane-eval rca tests.yaml --threshold 60
    cane-eval run tests.yaml --mine --export dpo
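For reference, the DPO export target is the standard prompt/chosen/rejected JSONL used by preference-tuning trainers such as TRL. A minimal sketch of producing it from mined failures (the input field names here are hypothetical, not the tool's actual output schema):

```python
import json

# Hypothetical sketch: turn mined failures into DPO-style JSONL.
# Each record pairs the agent's bad answer (rejected) with the
# LLM-rewritten answer (chosen) for the same prompt.
failures = [
    {
        "prompt": "What is your refund policy?",
        "bad_answer": "All purchases are refundable within 90 days.",  # fabricated
        "rewritten": "I don't have refund policy information available.",
    },
]

with open("dpo.jsonl", "w") as f:
    for case in failures:
        record = {
            "prompt": case["prompt"],
            "chosen": case["rewritten"],
            "rejected": case["bad_answer"],
        }
        f.write(json.dumps(record) + "\n")
```

The same pairing also works for SFT export by keeping only the prompt and the chosen answer.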
GitHub: https://github.com/colingfly/cane-eval
MIT licensed, pure Python, uses the Anthropic API. Happy to answer questions about the approach.