In this post, we outline the core principles we’re using to design a trustworthy verification system:
• Explicitly model semantics when possible
• Make approximations and assumptions visible
• Avoid confident false positives and false negatives
• Surface uncertainty instead of hiding it
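As a rough illustration of what the last two principles could look like in practice (all names here are hypothetical, not from an actual system), a verification result type might carry an explicit "unknown" verdict alongside the assumptions and approximations it depends on, rather than collapsing everything into pass/fail:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    VERIFIED = "verified"    # property holds under the stated assumptions
    VIOLATED = "violated"    # a concrete counterexample was found
    UNKNOWN = "unknown"      # the analysis could not decide; say so explicitly

@dataclass
class VerificationResult:
    verdict: Verdict
    # Assumptions the verdict depends on (e.g. "no integer overflow"),
    # made visible rather than left implicit.
    assumptions: list = field(default_factory=list)
    # Places where the analysis approximated (e.g. "heap shape abstracted").
    approximations: list = field(default_factory=list)

def report(result: VerificationResult) -> str:
    """Render a result so uncertainty is surfaced, not hidden."""
    lines = [f"verdict: {result.verdict.value}"]
    for a in result.assumptions:
        lines.append(f"  assumes: {a}")
    for a in result.approximations:
        lines.append(f"  approximated: {a}")
    return "\n".join(lines)
```

A caller that only ever sees `VERIFIED` or `VIOLATED` is being lied to whenever the tool times out or over-approximates; a three-valued verdict plus an explicit assumption list lets engineers decide how much weight the result deserves.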
Our central claim is that trustworthiness isn’t about absolute certainty. It’s about being explicit enough about assumptions and limitations that engineers can make informed decisions.
Would love feedback from people working on formal methods, static analysis, or AI-assisted development.