I liked how this talk addressed the topic, but I have questions about its overarching argument. It's not so much an argument as a philosophical position about CS. I know of at least one paper that critiques the philosophical commitments made in something like the Church–Turing thesis; it called Aaronson's view the strong American-style perspective or something, I forget. At any rate, the issue is how transparent these philosophical premises are, which those of us from computer science may take for granted.
I also had mixed feelings about those one-slide proofs. I couldn't follow them. I know they're correct (and they're great exercises for a student to go over offline), and that they're in service of showing that modern understanding can fit the results on one slide, and so forth. But there's an accepted style of academic presentation that somewhat misleadingly "performs": it's cognitively impossible for the audience to actually follow unless they already know the material, and technical talks are a kind of backwards lie in that way. A recent popular book by a French mathematician said as much: 99.9% of the time we can hardly understand mathematics talks and books, and it's something of an open secret how professionals deal with this phenomenon.
Edit: I took undergrad CS theory but never "got" the fascination with Busy Beaver. After this talk, though, it became clearer to me that the point of the Busy Beaver function is that it's a meta-function: a function about Turing machines, which is how BB gets the special properties/results that it does. That immediately reminds me of Chaitin's constant, which also tries to encode/talk about the properties of Turing machines, e.g. Kolmogorov-style complexity, which was not explicitly mentioned in the talk.
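For anyone else who never "got" it: the meta-function flavor is easy to see concretely. Here's a minimal brute-force sketch (mine, not from the talk) that enumerates all 20,736 two-state, two-symbol Turing machines and recovers the known values S(2) = 6 (max steps before halting) and Σ(2) = 4 (max ones left on the tape), assuming the standard convention where the halting transition also writes, moves, and counts as a step:

```python
from itertools import product

def run(machine, limit):
    """Simulate a 2-symbol Turing machine from state 'A' on an all-zero tape.
    Returns (steps, ones_on_tape) if it halts within `limit` steps, else None."""
    tape = {}  # sparse tape: unwritten cells read as 0
    head, state, steps = 0, "A", 0
    while state != "H" and steps < limit:
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return (steps, sum(tape.values())) if state == "H" else None

# Each of the four (state, symbol) pairs maps to one of
# 2 writes x 2 moves x 3 next-states = 12 transitions -> 12^4 machines total.
keys = [("A", 0), ("A", 1), ("B", 0), ("B", 1)]
options = list(product([0, 1], [-1, 1], ["A", "B", "H"]))

best_steps = best_ones = 0
for combo in product(options, repeat=4):
    result = run(dict(zip(keys, combo)), limit=50)
    if result is not None:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])

print(best_steps, best_ones)  # S(2) = 6 steps, Sigma(2) = 4 ones
```

Of course the whole point of BB is that this brute force stops working almost immediately: already at 5 states you can't tell which machines are non-halters without deciding hard open problems, so no step limit can be justified.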
The properties of the Busy Beaver function carry over to any other computational model you could base it on. E.g. the lambda calculus based BBλ[1] shares most of its properties.
> Which immediately reminds me of Chaitin's constant which also tries to encode/talk about the properties of Turing machines,
Chaitin's work on Algorithmic Information Theory (aka Kolmogorov complexity) was actually focused on custom Lisp dialects of his own design rather than Turing machines. One could be confused, though, by some of his articles on LISP program-size complexity, like [2], where he calls his LISP interpreter a UTM (Universal Turing Machine).
[2] https://www.jucs.org/jucs_2_5/the_limits_of_mathematics/Chai...
So there should be no confusion there: the "big four models of computation" are all equivalent, which we first encounter/learn in an undergrad TCS class.
IIRC that is in reference to the work around the Church–Turing–Deutsch principle, which goes beyond just classical physics.
Basically, Turing machines only have access to the computable reals, which form a set of measure zero, and thus cannot simulate, but only approximate, Newtonian physics. The claim is that quantum computers have access to all the reals, and thus could possibly simulate those real-valued functions.
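The measure-zero part follows from countability alone; a standard one-line sketch (my addition, not from the comment): programs are finite strings, so there are only countably many computable reals, and any countable set can be covered by intervals of arbitrarily small total length.

```latex
% Enumerate the computable reals as x_1, x_2, x_3, \dots and cover
% each x_n by an open interval of length \varepsilon / 2^n. Then
\[
  \mu(\mathbb{R}_{\mathrm{comp}})
  \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}}
  \;=\; \varepsilon
  \qquad \text{for every } \varepsilon > 0,
\]
% hence \mu(\mathbb{R}_{\mathrm{comp}}) = 0: almost every real is uncomputable.
```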
IMHO, "simulating the laws of physics" is a common way of sidestepping a genuinely challenging issue: getting people to accept that we do not have access to almost all of the reals via algorithms, books, pens, paper, computers, etc.
* In math, we are finite beings trying to apprehend the infinite
* The Busy Beaver function actually quantifies that (!)
* Even the finite can exceed the scope of the cosmos. That's where we need physics and complexity theory
* Quantum computers look like they're already slightly expanding the scope of what mathematical statements we can know
* Can we know even more than that? Depends what the ultimate laws of physics are
Watched the entire lecture. Fascinating. An amusing aside is that this is the first time I've encountered that statement where it wasn't just an empty assertion but actually had some meat on the bones.
I really want a view of HN that ranks by something like upvotes divided by the number of comments (though not necessarily linearly). Those wouldn't necessarily be the "best" posts by every metric, and I'm not saying this is the "best" view or anything, but it would be an interesting one.
Most cases where infinity is invoked are done to simplify things. Forget whether infinity exists in the real world or not: ask only whether it's useful. After all, does the number 17 really exist? All models are wrong; some models are useful.
Why not pick 10^180 or something as the highest number/states then?
Yes, even the natural numbers are infinite, so I don't quite buy at face value that it's a simplification, when we could cut off these infinities: no computation will run for 10^180 attoseconds or have that many states. And I would think that ousting all the mysteries of ZFC, for example, would be simpler.
For your "simplification", I think it may have been clearer to say "abstraction".
Because then we would have to find all sorts of limits for all of our proofs, qualify everything, and waste time proving that our results stay under the limit. If we prove something about an algorithm parameterized by some N, and we rely on a Turing machine in our proof, then we'd also have to figure out which N hits our limit of 10^180 states on the tape, so we can qualify the scope of our proof. This could be very difficult, since the translation to a Turing-machine model can be arbitrarily complex. Why bother? It doesn't help anyone.
> To understand a finite computation, we (some amount of scientists and mathematicians) must understand at least ZFC?
Turing machines are for proving things about the nature of computation itself. You seem to be imagining that you need to understand Turing machines before you'll be allowed to make statements about some specific computation. That's simply not true. If Turing machines help you prove something, use them; if not, ignore them. Putting an arbitrary limit on them only makes them worse at proving things.
I'm not really sure why you're so focused on ZFC. On one hand, the vast majority of math proofs (and therefore CS proofs that are based on them) assume ZFC, usually without even bothering to mention it. On the other hand, the axiom of choice seems completely irrelevant for Turing machines. Although the tape is infinite, it's almost always implicitly assumed that the state is finite at every step (the rest of the tape is 0). You'd have to initialize the tape with some infinite pattern to get around this, and that's definitely out of the ordinary. It's probably better to think of the state as "arbitrarily large" rather than infinite. I suspect you could get away with assuming Peano arithmetic for most CS proofs anyone cares about.
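The "arbitrarily large rather than infinite" point can be made concrete with a tiny sketch (mine, not from the thread): model the tape as a map with finite support, where every unwritten cell reads as 0. The tape is conceptually unbounded, but at any step only finitely many cells are actually materialized.

```python
# A conceptually unbounded tape with finite support: only cells that have
# actually been written are stored; every other cell implicitly holds 0.
tape = {}
tape[5] = 1
tape[-3] = 1

print(len(tape))           # 2: only the written cells occupy any storage
print(tape.get(10**9, 0))  # 0: any unwritten cell, however far out, reads 0
```

This is exactly why no axiom of choice (or any completed infinity) is ever invoked in simulating a machine: every reachable configuration is a finite object.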
But also, yes, some number of scientists and mathematicians are expected to understand the basic axioms of math if they're going to prove mathematical theorems.
All computations are finite, like you said. In general, it's a mystery to me how the infinities of math are indispensable to our best sciences. In a sense it is simple, sure; we just use infinity when it helps us, like you said. But since the indispensability of modern math is still hotly debated in philosophy, I don't know that I would call finite, physically bound beings using infinity "simple." Do I think the difficulties of a mathematical and computational system with a biggest number (one we could pluck from cosmology as the largest number of usable states before total heat death) are simpler than one that uses infinite infinities at the lowest level, spawning philosophical challenges? I don't know. I just can't confidently call the status quo simple.