I feel, with more and more tools crowding that space, a common specification language would make sense. Sure, every tool has its own unique selling points but there is considerable overlap. For example, if all I want is to express that I expect a function not to panic, there should be one syntax that works with all tools.
(Just throwing out ideas here, but there could be `#[never_panic]` for simple cases where the compiler can clearly see that a panic is not possible (erroring out otherwise), and `#[unsafe(never_panic)]` for more involved cases that could be proven with 3rd-party tools or by the developer's own reasoning, like normal unsafe blocks.)
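For illustration, a sketch of how those two attributes might look; neither exists in Rust today, this is purely the hypothetical syntax from the idea above:

    // Hypothetical, compiler-checked: compilation would fail unless rustc
    // can prove there is no panic path (saturating arithmetic never panics).
    #[never_panic]
    fn double_saturating(x: u32) -> u32 {
        x.saturating_mul(2)
    }

    // Hypothetical, developer-asserted, analogous to an unsafe block: the
    // claim is justified by a 3rd-party tool or by manual reasoning, and is
    // not checked by the compiler.
    #[unsafe(never_panic)]
    fn first_byte(buf: &[u8]) -> u8 {
        buf[0] // invariant upheld elsewhere: buf is never empty
    }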
For more complicated guarantees, it's harder to see whether there's enough common ground between these tools to standardize on.
Any recent link about that? Especially one that calls it an effect system rather than the old name, keyword generics.
Things like `#[no_panic]` make sense, but they also don't require a spec language at all: the compiler already has support for these kinds of annotations, and any tool could check for them. Though I cannot think of a single verification use case where all I want to check is the absence of panic.
Basically any decoder/deserializer. It might be sufficient to handle correctness in tests, but panics are the most severe thing you want to avoid.
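As a sketch of what that looks like in practice, here is a hypothetical length-prefixed decoder; both indexing operations are hidden panic paths on malformed input, which is exactly what a no-panic check would surface:

    // Hypothetical decoder for a [len, payload...] frame format.
    fn decode_frame(input: &[u8]) -> &[u8] {
        let len = input[0] as usize; // panics on empty input
        &input[1..1 + len]           // panics on truncated input
    }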
How well does `#[no_panic]` actually work in practice?
There might be cases where e.g. an out-of-bounds index access can never happen, but the compiler still thinks it can. It could be impossible to restructure the code without adding some performance overhead.
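One common restructuring, sketched below, is to hoist a single length assert so the optimizer can usually drop the per-element bounds checks; whether that actually fires depends on the compiler seeing the bound, so sometimes a check (or the assert itself) does remain as overhead:

    fn sum_first_four(xs: &[u32]) -> u32 {
        assert!(xs.len() >= 4); // one up-front check
        // With the assert visible, these constant-index bounds checks are
        // typically optimized away.
        xs[0] + xs[1] + xs[2] + xs[3]
    }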
An index access violation that “never happens” is the root of every buffer overflow, so I’m absolutely OK with the minimal overhead of the bounds check in exchange for actual safety.
Not for verification, but in terms of ease of use: having no panics in a program would make it fine and safe to have pointers to uninitialized memory. (It currently isn't, because panics mean your destructors can run anywhere in the code, so everything must be initialized at all times in safe Rust.)
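A sketch of why, with a made-up Guard type: if may_panic() unwinds, guard's destructor runs right there, so guard must already be fully initialized at every point where a panic can occur:

    struct Guard(String);

    impl Drop for Guard {
        fn drop(&mut self) {
            println!("cleaning up {}", self.0); // reads the field during unwind
        }
    }

    fn demo() {
        let guard = Guard("state".into());
        may_panic(); // if this unwinds, `guard` is dropped right here
        drop(guard);
    }

    fn may_panic() { /* any code that can panic */ }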
>Though I cannot think of a single verification use case where all I want to check is the absence of panic.
You can reduce any static verification task to a check for a condition that produces a panic. In short, sprinkle your pre, post and intermediate conditions all over your code in a way that produces a panic (known as asserts) and the tool will do the heavy lifting.
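A minimal sketch of that idea (hypothetical function): the postcondition is written as a plain assert, so a tool that proves this function never panics has thereby proven the postcondition for every possible input.

    fn clamp_percent(x: i64) -> i64 {
        let result = x.clamp(0, 100);
        assert!(0 <= result && result <= 100); // postcondition as a panic site
        result
    }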
- The various "modes" are going to be needed either way, because side-effectful functions at the type level are a research problem that probably isn't worth the effort.
- In the pure functional "promotable" fragment, it probably also makes sense to relax aliasing rules / have unbounded number types / etc., because all of it is going to compile away anyways.
I hope projects like this catch on, and incentivize Rust getting a stronger type system, because the benefits will flow in both directions.
I think having guardrails like this is going to be incredibly important as AI code gen starts taking a bigger role. Hopefully, as a separate comment mentioned, a standard can be created so that AI tools can learn it more easily.
It is The Dream in some ways, but it is much, much easier said than done.
Or, to put it another way, there is no dependently-typed language I can even consider saying to my manager "Hey, you asked me to do X and I think I'll use language Y which is dependently typed", and as far as I can see, the problem isn't just that "nobody has built the standard library for it yet" or anything else, the problem boils down to, they just aren't easy enough to use to call them practical.
I'd also say that "hey, you can use this dependently-typed language, just don't try to actually use the dependently-typed features" is also not what people are pining for.
Well, to be fair, I would not cry if this happened.
> Or, to put it another way, there is no dependently-typed language I can even consider saying to my manager "Hey, you asked me to do X and I think I'll use language Y which is dependently typed", and as far as I can see, the problem isn't just that "nobody has built the standard library for it yet" or anything else, the problem boils down to, they just aren't easy enough to use to call them practical.
I wrote a tic-tac-toe in Lean. https://github.com/Ericson2314/lean-tic-tac-toe/. Yes, it took me far longer than I would have in Haskell, but that is just because:
1. Unfamiliarity with the standard library, tooling, etc. I didn't expect Applicative to be in Mathlib rather than Batteries, for example.
2. I tried to prove things I wouldn't have in the Haskell version
If I knew the standard library, and did plain old arbitrary-length lists and partial (bounds-checked) indexing with arbitrary nats, I would have had less fun, but also done things way faster.
I am excited for the latest Machine-Learning tactics or whatever too, but the idea that "everything is going to do something they didn't before (proofs) and it will be no harder" is not a goal I am aiming for. Proofs will always be a trade-off between up-front costs and fearless refactoring on an ongoing basis. That's OK! I don't expect magic, I just want my programming language to give me the full spectrum of options, and then I can make my own economic decisions.
I have to disagree with this, since fully general dependent types seem to inherently involve a kind of compile-time evaluation. You can recover a sort of phase distinction (i.e. a post-compile "run time" phase) but only AIUI through an "extraction" step that dispenses with the actual dependently typed parts of the program.
Yes, the literature says how to do this. It's not hard. Any program that could be written in something weaker like System F will have the same erasure.
> inherently involve a kind of compile-time evaluation.
Compile-time evaluation doesn't pose a phase-separation problem. Indeed, nothing to the right of a `:` will ever need to be evaluated at runtime.
> Developers write specifications of what their code should do ... Verus statically checks ... the specifications for all possible executions of the code
This is what tests are for.
Rust is indeed a safe language in terms of memory safety. Vulnerabilities are still very possible within a Rust program; they just need to not rely on memory exploits, and the borrow checker won't catch them. That is why formal verification exists: if you have a really critical, high-security application, then you should ensure the maximum amount of safety and reliability.
Formal verification enables the developer to write a mathematical proof that the program behaves correctly in all situations, something that the borrow checker cannot do.
Both have their place.
Buffer overflows etc. are absurd things that should not be possible, but preventing them is only the first step towards security.
I guess you're asking why that wasn't built into Rust from the start; after all, there are programming languages with the formal verification and theorem-proving built-in, e.g. for imperative languages, the SPARK extension to Ada, as well as ATS, or for a functional one, Idris. My guess is that Rust never would have become popular if you needed to write actual formal proofs to guarantee some degree of safety, since satisfying the borrow checker is easier in comparison, and it also probably would have been a lot harder to develop Rust after that. The borrow checker simply eliminating use-after-free errors and data races in safe code was good enough.
A verification of a property is stronger than a mere test of a property.
Not only is static verification more powerful, there is also a massive usability difference. You define your pre- and post-conditions for each function (also known as a specification or design contract) and the tool will automatically check that you do not violate these conditions. It solves a very different problem from classic unit or integration tests.
This is why tools like formal verification and symbolic analyses can help you establish that for all possible integers X, your function does the right thing (for some definition of “right”). You get this assurance without having to actually enumerate all X.
Right now the Rust stdlib is being verified using Kani, a model checker, https://model-checking.github.io/verify-rust-std/
In Kani, a proof looks like this (source: https://github.com/model-checking/verify-rust-std/blob/00169...):
    #[kani::proof_for_contract(NonNull::new_unchecked)]
    pub fn non_null_check_new_unchecked() {
        let raw_ptr = kani::any::<usize>() as *mut i32;
        unsafe {
            let _ = NonNull::new_unchecked(raw_ptr);
        }
    }
It looks like a test, but actually it is testing that every possible usize, when converted to a pointer to i32 and built with NonNull::new_unchecked, will follow the contract of NonNull::new_unchecked, which is defined here: https://github.com/model-checking/verify-rust-std/blob/00169...
    #[requires(!ptr.is_null())]
    #[ensures(|result| result.as_ptr() == ptr)]
Which means: if the caller guarantees that the parameter ptr is not null, then result.as_ptr() is the same as the passed ptr. That's a kind of trivial contract, but Kani tests it for all possible pointers (rather than some cherry-picked pointers like the null pointer and something else), without actually brute-forcing them, instead recognizing when many inputs test the same thing (while still catching a bug if the code changes to handle some input differently). And this approach scales to non-trivial properties too; a lot of things in the stdlib have non-trivial invariants.
You can check out other proofs here https://github.com/search?q=repo%3Amodel-checking%2Fverify-r...
It's not that different from writing a regular test, it's just more powerful. And you can even use this #[requires] and #[ensures] syntax to test properties in regular tests if you use the https://crates.io/crates/contracts crate.
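A minimal sketch of that, with a made-up function (note that, if I remember the crate's API right, its #[ensures] names the return value ret directly rather than binding it with a closure as Kani does):

    use contracts::*;

    #[requires(divisor != 0)]
    #[ensures(ret <= dividend)]
    fn div_floor(dividend: u32, divisor: u32) -> u32 {
        dividend / divisor
    }

At runtime the attributes expand into assertions at function entry and exit, so a contract violation fails your regular tests.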
Really, if you have ever used proptest (https://proptest-rs.github.io/proptest/intro.html) or the quickcheck crate (https://crates.io/crates/quickcheck), software verification is like writing a property test, but rather than testing N examples generated at random, it tests all possible examples at once. And it works even when the space of possible examples is infinite or prohibitively large.
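For contrast, here is what a property test looks like in proptest, with a made-up round-trip property:

    use proptest::prelude::*;

    proptest! {
        // Samples random u32 values; a verifier covers all of them at once.
        #[test]
        fn roundtrip_le_bytes(x in any::<u32>()) {
            prop_assert_eq!(u32::from_le_bytes(x.to_le_bytes()), x);
        }
    }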
A verification would be the equivalent of that. In practice that matters since the input space is often much larger than just one byte.
Well, Rust doesn't yet have taint checking or effects, so there are two things lacking.