If F ⊢ Prov_F(“s”) → s, then F ⊢ s
Which is strange because you might think that proving “if s is provable, then s” would be possible regardless of whether s is actually provable. But Löb’s theorem shows that such self-referential statements can only be proven when s itself is already provable.
This is called "Löb’s hypothesis" and it's an incredible piece of logical machinery[1]. If you truly understand it, it's pretty mind-blowing that it's actually a logically sound statement. It's one of my favorite ways to prove Gödel's Second Theorem of Incompleteness.
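For what it's worth, here's the quick route from Löb's theorem to Gödel's second incompleteness theorem. Take s = ⊥ (falsity). Con(F) is the statement ¬Prov_F(“⊥”), which is literally Prov_F(“⊥”) → ⊥. So if F ⊢ Con(F), Löb's theorem gives F ⊢ ⊥, i.e. F is inconsistent. Contrapositive: a consistent F can never prove its own consistency.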
[1] https://categorified.net/FreshmanSeminar2014/Lobs-Theorem.pd...
> The teacher says one day he'll give a quiz and it will be a surprise. So the kids think "well, it can't be on the last day then—we'd know it was coming." And then they think "well, so it can't be on the day before the last day, either!—we'd know it was coming." And so on... and they convince themselves it can't happen at all.
> But then the teacher gives it the very next day, and they're completely surprised.
The students convince themselves that it can't happen at all... and that's well and good, but once they admit that as an option, they have to include it in their argument—and if they do so, their entire argument falls apart immediately.
Consider the first time through: "It can't be on the last day, because we'd know it was coming, and so couldn't be a surprise." Fine.
Now compare the second time through: "If we get to the last day, then either it will be on that day, or it won't happen at all. We don't know which, so if it did happen on that day, it would count as a surprise." Now you can't exclude any day; the whole structure of the argument falls apart.
Basically, they start with a bunch of premises, arrive at a contradiction, and conclude some new possibility. But if you stop there, you just end up with a contradiction and can't conclude anything.
So you need to restart your argument, with your new possibility as one of the premises. And now you don't get to a contradiction at all.
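Here's a minimal sketch of that restart in Python (my own toy formalization of "surprise", nothing canonical): backward induction wipes out every day, but the moment "no exam" is admitted as an outcome, no day can be excluded.

    # An outcome is the exam day (1..5) or None for "no exam at all".
    # Day d is "no surprise" if, on the morning of day d, the exam is
    # forced to be that very day (only one live outcome remains).
    def eliminate(possible):
        possible = set(possible)
        changed = True
        while changed:
            changed = False
            for d in sorted((x for x in possible if x is not None), reverse=True):
                live = {x for x in possible if x is None or x >= d}
                if live == {d}:            # exam today would be no surprise
                    possible.discard(d)
                    changed = True
        return possible

    print(eliminate({1, 2, 3, 4, 5}))          # set(): every day eliminated
    print(eliminate({1, 2, 3, 4, 5, None}))    # nothing can be eliminated

With None on the table, the morning-of-day-d knowledge state never collapses to a single forced day, so the induction never gets started.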
Sit down and make the argument really rigorous as to the definition of "surprise" and the fuzz disappears. You can get several different results from doing so, which is really another way of saying the original problem is underspecified and not really a logical conundrum. As "logical conundrums" go, equivocation seems endlessly fascinating to humans, but any conundrum that can be solved merely by being more careful, up to a normal level of mathematical rigor, isn't logically interesting.
The word "surprise" here means that the prisoner won't know his date of execution until he is told.
Those are the instructions he was given: he won’t know the date of his execution until he is told. He performs some reasoning, and concludes that he can’t get executed any day that week: therefore he will go free.
But if “he will go free” is a possibility, then his chain of reasoning falls apart. Previously he had argued “if I survive to the last day, I will be executed today. That won’t be a surprise. Therefore I can’t be executed on the last day.”
But once he has “…or I won’t get executed at all” as an option, then his reasoning would begin “if I survive to the last day, then either I’ll get executed today, or I won’t get executed at all” … and that’s as far as he can go. He can’t use that to conclude he won’t get executed on the last day, and he can’t then use that to conclude he won’t be executed on the second last day, and so on. The entire argument breaks apart immediately.
I mentioned that there are many ways to "resolve the paradox" (which isn't really "resolving" anything), depending on how carefully you define the terms. It is certainly valid to define the terms in such a way that the prisoner is logically correct. In that case there is no paradox, just perhaps lies. You can define them such that the prisoner is simply in error. You can also define them such that the answer is "indeterminate"... but that's not a paradox either. "Indeterminate" comes up in logic all the time, and if you run around yelling "paradox! paradox!" every time that happens you're going to get hoarse pretty quickly.
The only "paradox" is that people insist on not being careful with their definitions, and any time anyone tries, someone else flips to a different definition (without being clear about it) and then starts arguing from that new point of view. That's not a paradox either. That's just lifting unclear thinking to the level of moral imperative. I have no patience or sympathy for that.
It's the same reason that 0.333... = 1/3. It's an infinite series that converges on 1/3.
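Spelled out, the series in question is just a geometric one:

    0.333… = 3/10 + 3/100 + 3/1000 + …
           = (3/10) · 1/(1 − 1/10)
           = 1/3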
Students learn repeating decimals before they understand infinite series.
The correct notations to express exactly what one means feature the infinity symbol [1] - but if you use them, well, of course the students will see the trap.
I think teasing apart numerals and numbers is a good first step on one's journey in mathematical logic.
In particular, he discusses what he calls the meta-paradox:
> The meta-paradox consists of two seemingly incompatible facts. The first is that the surprise exam paradox seems easy to resolve. Those seeing it for the first time typically have the instinctive reaction that the flaw in the students’ reasoning is obvious. Furthermore, most readers who have tried to think it through have had little difficulty resolving it to their own satisfaction.
> The second (astonishing) fact is that to date nearly a hundred papers on the paradox have been published, and still no consensus on its correct resolution has been reached. The paradox has even been called a “significant problem” for philosophy [30, chapter 7, section VII]. How can this be? Can such a ridiculous argument really be a major unsolved mystery? If not, why does paper after paper begin by brusquely dismissing all previous work and claiming that it alone presents the long-awaited simple solution that lays the paradox to rest once and for all?
> Some other paradoxes suffer from a similar meta-paradox, but the problem is especially acute in the case of the surprise examination paradox. For most other trivial-sounding paradoxes there is broad consensus on the proper resolution, whereas for the surprise exam paradox there is not even agreement on its proper formulation. Since one’s view of the meta-paradox influences the way one views the paradox itself, I must try to clear up the former before discussing the latter.
> In my view, most of the confusion has been caused by authors who have plunged into the process of “resolving” the paradox without first having a clear idea of what it means to “resolve” a paradox. The goal is poorly understood, so controversy over whether the goal has been attained is inevitable. Let me now suggest a way of thinking about the process of “resolving a paradox” that I believe dispels the meta-paradox.
One resolution is that what the teacher stipulates is impossible. It should really be

"You'll have a test within the next X days but won't know which day it'll be on (unless it's the last day)"

because the unqualified version, "You'll have a test within the next X days but won't know which day it'll be on", is impossible.
It’s amusing that you stopped here without giving an actual solution. Please do tell us, which day is the test on?
Your critical thinking is bad. The first paradox happens when the prisoner concludes that the judge lied, using a rational deduction. A second paradox happens when it transpires the judge told the truth.
So in the end, the judge was telling the truth, and the prisoner was mistaken, and then dead.
> So, conceivably, the concept of 'standard' natural number, and the concept of 'standard' model of Peano arithmetic, are more subjective than most mathematicians think. Perhaps some of my 'standard' natural numbers are nonstandard for you! I think most mathematicians would reject this possibility... but not all.
It's probably worth elaborating why the majority of logicians (and likely most mathematicians) believe that standard natural numbers are not subjective (although my own opinion is more mixed).
Basically the crux is: do you believe that statements with all/never quantifiers, such as "this machine will never halt" or "this machine will always halt", have objective truth values?
If you do, then you are implicitly subscribing to a view that the standard natural numbers objectively exist and do not depend on subjective preferences.
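To make that concrete, here's a toy illustration (my own example; any open Π1-style conjecture would do):

    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    # This loop halts iff Goldbach's conjecture is false, so the claim
    # "this program never halts" quantifies over every (standard) natural
    # number: each even n > 2 is a sum of two primes.
    n = 4
    while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
        n += 2    # runs forever unless a counterexample turns up
    print(n, "is a counterexample to Goldbach")

Whether that universally quantified claim has a determinate truth value is exactly the question of whether "the standard naturals" is an objective totality.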
Imo, this just comes down to the fact that most people would consider "standard" to be a floating signifier. I don't think the idea of mathematical concepts changing definitions when you change axioms is at all controversial in itself.
Definitely worth spending time on.
While you can't have the territory, road, plat, and topographical maps may each be incomplete, but all have their uses.
We usually assume that (a) the entire universe is computable and (b) even stronger than that, the entire universe is _learnable_, so we can just approximate everything using almost any function as long as we use neural networks and backpropagation, and have enough data. Clearly there's more to the story here.
I don't think the assumption is that strong. The assumption is rather that human learning is computable and therefore a machine equivalent of it should be too.
I don't think the assumption is even that strong! The skills that really set us humans above mere machines - e.g. causal inference, creativity, critical analysis, self-awareness - aren't, I think, assumed to be computable (IMHO there's no evidence as yet to suggest otherwise). The only skill that AI currently possesses is the ability to apply an ever-more-elaborate statistical aggregate function to data. The assumption is just that anything that can be encoded as data can be operated on to produce an ever-more-elaborate aggregate result.
It is all there in what you would have been taught, but hidden, because we tend to avoid the hay-in-the-haystack problems and focus on the needles, since we don't have access to the hay.
As an example that can cause huge arguments: if you used Rudin for analysis, go look for an equality binary operator in that book. As every construction of the reals is a measure-zero set, it is actually impossible to prove the equality of two arbitrary real numbers. ZFC uses constructibility, Spivak uses Cauchy sequences, etc.
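A sketch of why equality is the hard case (my own illustration, using the common "computable real = rational approximation at every precision" framing; nothing here is from Rudin): apartness is semi-decidable, but equality is never confirmed by finitely many digits.

    from fractions import Fraction

    def third(n):
        # One program computing 1/3: a rational within 1/2**n of it.
        return Fraction(int(Fraction(2 ** n, 3)), 2 ** n)

    def apart(x, y, max_n=64):
        """Semi-decide x != y: a separation at finite precision proves it;
        exhausting the budget proves nothing about equality."""
        for n in range(1, max_n):
            if abs(x(n) - y(n)) > Fraction(2, 2 ** n):
                return True    # witnessed: the reals genuinely differ
        return False           # inconclusive -- NOT a proof of equality

    print(apart(third, lambda n: Fraction(1, 3)))  # False: same real, equality never confirmed
    print(apart(third, lambda n: Fraction(0)))     # True: 1/3 and 0 separate quickly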
If you look at the paper[1] on the equivalence of PAC learnability and finite VC dimension, it is all there: framing learning as a decision problem, set shattering, etc.
Richardson's theorem, especially with more recent papers, is another lens.
With the idea that all models are wrong but some are useful, it does seem that curricula tend to leave these concepts to postgraduate classes, often hiding or ignoring them. In descriptive complexity theory they should have been front and center, IMHO.
Assuming something is computable or learnable is important for finding pathological cases that are useful; just don't confuse the map with the territory.
We have enough examples to know the neuronal model is wrong, but the proposed models we have found aren't useful yet, and what this post describes suggests that may always hold.
Physics and other fields make similar assumptions, like the assumption that Laplacian determinism is true, despite counterexamples.
Gödel, Rice, Turing, etc. may be proven wrong some day, but right now Halting ~= open frame ~= symbol grounding ~= system identification in the general case is the safe bet.
But that doesn't help get work done, or possibly find new math that changes that.
p-adic numbers will save us: the alternative completion of the rationals we did not know we needed.
The only problem is that making sense of p-adics in terms of counting seems rather difficult.
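For a taste of why they feel strange (a toy 10-adic example of mine, glossing over the fact that 10 isn't prime): a number is "small" when divisible by a high power of 10, so the infinite string of nines behaves like -1.

    # In the 10-adic metric, 10**k is tiny for large k, so ...9999
    # (nines running off to the left) converges, and it acts as -1:
    for k in (1, 5, 10):
        nines = 10 ** k - 1           # the number written as k nines
        print((nines + 1) % 10 ** k)  # 0 every time: nines + 1 ≡ 0 (mod 10**k)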