2 points by Convia 5 hours ago | 4 comments
  • Convia 5 hours ago
    OP here.

    A concrete eval method: run it on a file that caused trouble in the past, then on the same file after the patch/refactor. A before-vs-after comparison tends to be clearer than looking at scores in isolation.
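    To make that concrete, a small diff over per-file scores is enough. The sketch below is hypothetical: it assumes the scores for a file can be exported as a flat JSON dict of metric name to number, which is an assumption about the output format, not the actual schema.

      # Hypothetical before/after comparison of per-file scores.
      # Assumes two JSON files of {"metric_name": number} exported for the
      # same file pre- and post-refactor (assumed format, not the real schema).
      import json
      import sys

      before = json.load(open(sys.argv[1]))  # e.g. scores_before.json
      after = json.load(open(sys.argv[2]))   # e.g. scores_after.json

      for metric in sorted(set(before) | set(after)):
          b, a = before.get(metric), after.get(metric)
          delta = (a or 0) - (b or 0)
          print(f"{metric:>20}: {b} -> {a} ({delta:+.1f})")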

    Tech notes: multi-language via tree-sitter (Python, JS/TS, Java, Go, Rust, C/C++, etc.), 100% local (no telemetry / no external APIs), deterministic (no LLMs for the scores).
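    For a sense of how a deterministic, parser-based metric can work, here is a rough Python sketch of one such measure (max nesting depth) built on tree-sitter. It assumes the py-tree-sitter 0.22+ bindings plus the tree_sitter_python grammar wheel, and it is an illustration of the approach rather than the tool's actual code.

      # Illustration only: a deterministic nesting-depth metric via tree-sitter.
      # Assumes `pip install tree-sitter tree-sitter-python` (0.22+ bindings).
      import tree_sitter_python as tspython
      from tree_sitter import Language, Parser

      parser = Parser(Language(tspython.language()))

      # A tiny Python snippet to analyze.
      source = b"def f(items):\n    def g(x):\n        if x:\n            return x * 2\n    return [g(i) for i in items]\n"
      tree = parser.parse(source)

      BLOCKS = {"function_definition", "if_statement",
                "for_statement", "while_statement", "with_statement"}

      def max_depth(node, depth=0):
          # Walk the syntax tree, counting block-introducing nodes on the path.
          d = depth + (node.type in BLOCKS)
          return max((max_depth(c, d) for c in node.children), default=d)

      print("max nesting depth:", max_depth(tree.root_node))

    The same traversal runs unchanged over any grammar tree-sitter supports, which is what makes the multi-language part cheap; only the set of block node types varies per language.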

    Happy to answer questions about the metrics.

  • tongro 4 hours ago
    Glad to finally show this to the HN community! We've been dogfooding this internally for a while to find structural risks that tests often miss. It’s still an evolving map, so we’re really curious about the "edge cases" you might find in different languages.
  • Convia 5 hours ago
    Installation is done by downloading a Docker image (on-prem deployment).