The language is different (not necessarily better) in comparison to Lean, but I do agree with some of the points on dependent types. It seems both languages mostly just made different tradeoffs which, imo, were fair and have shaped them into quite efficient tools for their domains. The domain of "proofs" is large, and different paradigms just have different strengths/weaknesses; Lean simply specialized for a different part of this space.
Sledgehammer is nice, but it's probably just a question of time until an equivalent is ported to or created for Lean. It can be handy during exploratory phases, but it is a resource hog. It also makes proofs concise, but I would usually rather see the full chain of steps directly in the published code than a semi-magic "by sledgehammer".
Working on Isabelle itself, however, is painful (especially communicating with developers) in comparison to Lean. Things like "we don't have bugs, just unexpected behaviour" on the mailing list just seem childish/unprofessional. The call-out about the RAM consumption of Lean and related systems is also a bit of a joke when you look at Isabelle's gluttony for RAM.
One issue with this is that coming up with a quickly-checkable certificate for UNSAT is not exactly a trivial problem. It's effectively the same as writing a formal proof.
I am learning Lean myself so forgive me as I have an overly rosy picture of it as a beginner. My current goal is to write and prove the kind of code normal programmers would write, such as real-world compression/decompression algorithms as in the recent lean-zip example: https://github.com/kiranandcode/lean-zip/blob/master/Zip/Nat...
Then, I foresee 2 other obstacles, 1 of which may disappear:
1. Actually knowing what the software is supposed to do is hard. Understanding what the users actually want to do and what the customers are paying for -- which aren't necessarily the same thing -- is complex.
2. The quality of the work of many software developers is abysmal and I don't know why they would be better at a truth language than they are at Java or some other language.
Objection 2 may disappear, replaced by AI systems with the attention to do what needs to be done. So perhaps things will change enough to make it worthwhile...
[1] for those unfamiliar with math lingo, classical logic has a lot of powerful features. One of those is the law of the excluded middle, which says something can't be true and false at the same time. That means not not true is true, which you can't say in intuitionistic logic. Another feature is proof by contradiction, where you can prove something by showing that the alternative is unsound. There's quite a few results that depend on these techniques, so trying to do everything in intuitionistic logic has run into a lot of roadblocks.
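Both features described above can be sketched in Lean itself (a minimal illustration in Lean 4 using the core `Classical` namespace; not from the original comment):

```lean
open Classical

-- Double-negation elimination: ¬¬p → p holds classically.
-- This proof goes through the law of the excluded middle.
example (p : Prop) : ¬¬p → p := fun h =>
  (em p).elim id (fun hnp => absurd hnp h)

-- Proof by contradiction, packaged as a classical primitive:
example (p : Prop) : ¬¬p → p := fun h => byContradiction h
```

Neither `example` goes through without the classical axioms, which is exactly the gap between the two logics.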
The computer science folks are now working on their own CSLib. https://www.cslib.io https://www.github.com/leanprover/cslib Given that intuitionistic logic is really only relevant to computational content (the whole point of it is to be able to turn a mathematical argument into a construction that could in some sense be computed with), it will be interesting to see how they deal with the issue. Note that if you write algorithms in Lean, you are already limited to some kind of construction, and perhaps that's all the logic you need for that purpose.
Classical logic was rejected in computer science because the non-constructive nature made it inappropriate for an ostensibly constructive domain. Theoretical mathematics has plenty of uses to prove existences and then do nothing with the relevant object. A computer, generally, is more interested in performing operations over objects, which requires more than proving the object exists. Additionally, while you can implement evaluation of classical logic with a machine, it's extremely unwieldy and inefficient, and allows for a level of non-rigor that proves to be a massive footgun.
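Lean makes this existence-versus-construction distinction concrete: extracting a witness from a bare existence proof requires `Classical.choose` and is marked `noncomputable`, while a constructive proof that exhibits its witness actually runs. A small sketch:

```lean
-- A mere existence proof carries no algorithm: extracting a witness
-- needs `Classical.choose`, and the result is `noncomputable`.
noncomputable def witness (h : ∃ n : Nat, n > 5) : Nat :=
  Classical.choose h

-- A constructive proof exhibits the witness, and it computes:
def concrete : { n : Nat // n > 5 } := ⟨6, by decide⟩

#eval concrete.val  -- 6
```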
It's not that classical logic is useless, it's just that it's not particularly appropriate to choose as the basis for a system built on algorithms. This goes both ways. Set theory was taken as the foundation of arithmetic, et al. because type theory was simply too unwieldy for human beings scrawling algebras on blackboards.
The thing is, it can be quite useful to always know what a value is, and there are some cool things you can do when you know how to get a value (such as create an algorithm to get said value). I'm still learning this stuff myself, but intuitionistic logic gets you a lot of interesting properties.
It's not as simple as that. Classical mathematics can talk about whether some property is computationally decidable (possibly with further tweaks, e.g. modulo some oracle, or with complexity constraints) or whether some object is computable (see above), express decision/construction procedures etc.; it's just incredibly clunky to do so, and it may be worthwhile to introduce foundations that make it natural to talk about these things.
In classical logic, a proof of "A or not A" requires nothing, a proof out of thin air.
Obviously, we want to stick with useful data structures, so we use constructive logic for programming.
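In Lean terms (a small sketch, not part of the comments above): `Classical.em` yields `p ∨ ¬p` for any proposition, but the resulting proof carries no information about which side holds. For a decidable proposition, a dependent `if` gives the same disjunction constructively:

```lean
-- "A or not A" out of thin air -- but you can't compute which side holds:
example (p : Prop) : p ∨ ¬p := Classical.em p

-- For a decidable proposition, the disjunction is constructive:
-- the `if` actually decides the question.
example (n : Nat) : n % 2 = 0 ∨ ¬(n % 2 = 0) :=
  if h : n % 2 = 0 then Or.inl h else Or.inr h
```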
It can also find commonalities between different abstract areas of maths, since there are a lot of exotic interpretations of intuitionistic logic, and doing mathematics within intuitionistic logic allows one to prove results which are true in all these interpretations simultaneously.
At the moment, I'm not sure if intuitionism has a "killer app" yet, but you could say the same about every piece of theory ever, at least over its initial period of development. I think the broad lesson is that the rules of logic are a "coordinate system" for doing mathematics, and changing the rules of logic is like changing to a different coordinate system, which might make certain things easier.
Intuitionist/constructivist `true` means "provable".
The question you are asking determines what answers are acceptable.
Why build an airplane, if you already know it must be possible?
That would be the law of non-contradiction (LNC). The law of the excluded middle (LEM) says that for every proposition it is true or its negation is true.
LEM: For all p, p or not p.
LNC: For all p, not (p and not p).
Classical logic satisfies both; intuitionistic logic only satisfies LNC.
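The asymmetry is visible in Lean (a minimal sketch): LNC has a direct constructive proof, while LEM has to be imported from the classical axioms.

```lean
-- LNC is constructive: from `p` and `¬p` together, derive `False`.
example (p : Prop) : ¬(p ∧ ¬p) := fun ⟨hp, hnp⟩ => hnp hp

-- LEM is not: in Lean it comes from the classical axioms.
example (p : Prop) : p ∨ ¬p := Classical.em p
```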
As far as Lean is concerned, this isn't an example of classical logic. It's just the definition of "not": to say that some proposition implies a contradiction, and to say that that proposition is untrue, are the same statement.
Most "something"s that you'd want to prove this way will require a step from classical logic, but not all of them. (¬p ⟶ F) ⟶ p is classical; (p ⟶ F) ⟶ ¬p is constructive.
For every "well of course, just...X, that's what everybody does" group-think argument there's a cogent case to be made for at least considering the alternatives. Even if you ultimately reject the alternatives and go with the crowd, you will be better off knowing the landscape.
Every time you go off the beaten path, you're locking yourself into less documentation, more bugs (since there's less exploration of the dark corners), and fewer people you can go to for help. If you've got 20+ choices to make, picking the standard option is the right choice on average, so you can just do it and move on. You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
The exceptions to this are when a) it becomes apparent that the standard tool doesn't actually fit your use case, or b) the standard tool significantly overlaps the core problem you're trying to solve.
I of course fully support reinstating logicism, but the same dogmatics love putting up a fight over that as well.
I'm going to take a formal logic class in the fall, and my professor said something akin to "definitely take it if you're interested, just be aware that it probably won't come in use in most of the mathematics done today." The thing is the foundations are mostly laid, and people are interested in using said foundations for interesting things, not for constantly revisiting the foundations.
I think this is one reason most mathematicians don't see a need for formal proof assistants, since from their perspective it's one very small part of math, and not the interesting one.
This is not to say that proof assistants are a dead end—I find them fascinating and hope they continue to grow—but there's a reason that they haven't had a ton of traction.
Putting that aside, to make things more clear: computer science is mathematics. Computer scientists are mathematicians. That was a categorization decided long before you and I ever lived. In the sense that you mean, you're only referring to a very small fraction of what "mathematics" refers to in the true sense of the word. It is just as irreconcilably disjointed as logic is: not unified, and fundamentally non-unifiable.
I too think it would be better if "mathematics" was reserved for the gated suburb of ZFC. But that's not the world we live in, courtesy of the same people who pushed ZFC as a foundation to begin with.
No. There are truths about the subject not captured in any single formal system, which is why objects are studied from many perspectives.
> Computer scientists are mathematicians.
Some are and some aren’t.
What you've done here is an overgeneralization. "People who like math" and "people who like computers" are massive demographics with considerable overlap.
Maybe. But more clearly one of the most popular online.
Also, https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon... i.e., there's no reason it should be as you say.
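For concreteness, the correspondence is easy to exhibit in Lean (a hypothetical two-line sketch): the same term is both a program and a proof.

```lean
-- Under Curry-Howard, a function *is* a proof: composing functions
-- is the same act as chaining implications.
def compose {p q r : Prop} (f : p → q) (g : q → r) : p → r :=
  fun hp => g (f hp)

-- The same term, read as a theorem:
theorem imp_trans {p q r : Prop} : (p → q) → (q → r) → (p → r) :=
  fun f g hp => g (f hp)
```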
For mathematicians a proof is a means to an end, or a medium of expression - they care about what they say and why.
The correspondence isn’t about C programs corresponding to proofs in math papers. It’s a very specific claim about kinds of formal systems which don’t resemble how math or programming is done.
Not always.
If it is NOT true, they sometimes simply play "what if" and construct a new system where it could be true.