1 point by jorgegalindo 8 hours ago | 1 comment
  • jorgegalindo 8 hours ago
    We’ve been working on a software verification platform designed for AI-generated code. As code assistants accelerate development, verification becomes the bottleneck.

    In this post, we outline the core principles we’re using to design a trustworthy verification system:

    • Explicitly model semantics when possible
    • Make approximations and assumptions visible
    • Avoid confident false positives and false negatives
    • Surface uncertainty instead of hiding it

    Our main idea is that trustworthiness doesn’t come from absolute certainty. It comes from being explicit about assumptions and limitations, so engineers can make informed decisions about the results they’re given.
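    To make the principle concrete, here’s a minimal sketch (in Python, with hypothetical names — this is an illustration of the idea, not the platform’s actual API) of a result type that carries its verdict, its assumptions, and its uncertainty together, rather than collapsing everything into a pass/fail bit:

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class Verdict(Enum):
        VERIFIED = "verified"   # property holds under the stated assumptions
        VIOLATED = "violated"   # a concrete counterexample was found
        UNKNOWN = "unknown"     # analysis was inconclusive; say so instead of guessing

    @dataclass
    class VerificationResult:
        verdict: Verdict
        # Approximations the checker made, surfaced explicitly.
        assumptions: list[str] = field(default_factory=list)
        # Proof sketch, counterexample, or reason the analysis gave up.
        evidence: str = ""

        def is_conclusive(self) -> bool:
            # Only actionable when the checker did not give up.
            return self.verdict is not Verdict.UNKNOWN

    # Hypothetical example: an overflow check that had to approximate aliasing.
    result = VerificationResult(
        verdict=Verdict.VERIFIED,
        assumptions=["pointers p and q do not alias", "inputs fit in 32 bits"],
        evidence="bounded check up to loop depth 10",
    )
    ```

    The point of the shape is that a “verified” answer without its assumptions list is exactly the kind of confident false negative the principles above are trying to avoid.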

    Would love feedback from people working on formal methods, static analysis, or AI-assisted development.