Did I misread or miss something?
"We prove that for any non-zero stage success probability, the system reaches the verified state almost surely"
What's the point if it's still stochastic?
So, like, imagine you had some finite list of integers and you were picking a random number from 0 to infinity: because the domain is infinite, any finite set has probability 0 of being picked, but that doesn't mean it can't happen. "Almost surely" is the mirror image of that: probability 1, but not a logical guarantee.
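To put a rough number on the paper's claim (my own back-of-the-envelope, assuming independent rounds with a fixed per-stage success probability p, which is simpler than how the paper states it): the chance of still being unverified after n rounds is (1 - p)^n, which tends to 0 but never reaches it for any finite n.

    # Toy calculation, not from the paper: probability of remaining
    # unverified after n independent rounds with per-stage success p.
    for p in (0.1, 0.3, 0.5):
        for n in (10, 50, 100):
            print(f"p={p}, n={n}: P(still unverified) = {(1 - p) ** n:.2e}")

So "almost surely" buys you a failure probability you can drive as low as you like by retrying, not a hard bound on how many retries it takes.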
The ability to deterministically identify that the code eventually reaches a halting state implies that we can use these stochastic tools to generate deterministic outcomes reliably in the future, doesn't it?
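Sort of, with a caveat: the generation stays stochastic, but the verifier's accept/reject decision is deterministic, so "reaches the verified state almost surely" just says a retry loop like the sketch below terminates with probability 1. This is a toy sketch of my own; generate_candidate and verifier_accepts are made-up stand-ins, not the paper's pipeline.

    import random

    def generate_candidate() -> bool:
        # Stand-in for a stochastic LLM generate/repair step;
        # pretend 30% of candidates happen to satisfy the spec.
        return random.random() < 0.3

    def verifier_accepts(candidate: bool) -> bool:
        # Stand-in for the deterministic checker (ESBMC in the paper).
        return candidate

    attempts = 1
    while not verifier_accepts(generate_candidate()):
        attempts += 1
    print(f"verified after {attempts} attempts")  # finite with probability 1, but unbounded

Once the verifier accepts, the result is as solid as the spec it was checked against; the randomness only affects how long you wait for it.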
From what I understood, this validates output correctness, not that the output aligns with the user's goals, so there's still room for the LLM to get the user's goals wrong. It only validates the mathematical consistency between the output code and the formal specification (in this paper, within the ESBMC framework for C++ code).
So it's fairly tightly scoped in this case, but I think it points in the right direction for coding assistants, which usually get some language primitives wrong. (Toy illustration of the spec-vs-intent gap below.)
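Here is a toy example of that gap, using plain Python asserts to stand in for the formal spec (the paper itself checks C++ against ESBMC, so this is only an analogue): the verifier can only hold the code to the property someone wrote down.

    def absolute_value(x: int) -> int:
        # User goal: |x|.  The only formal property checked: result >= 0.
        # Returning 0 for negatives satisfies the spec but not the goal.
        return x if x > 0 else 0

    for x in (-5, 0, 7):
        assert absolute_value(x) >= 0  # verified property holds, so this "passes"

    print(absolute_value(-5))  # 0, not 5: spec-consistent, goal-inconsistent

If the spec is too weak, verified output can still miss what the user actually wanted; the guarantee is only as strong as the specification.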
/? TLA LLM https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... : 1 submission, ~20 comments
"AI will make formal verification go mainstream" (2025-12) https://news.ycombinator.com/item?id=46294574