the math of infinity isn't very relevant to the finite programs that humans use. Even today's astronomically large computing systems have a size of approximately 3 when compared to infinity.
> All of modern mathematics is built on the foundation of set theory
That's ignoring most of formalized mathematics, which is progressing rapidly and definitely modern. Lean and Rocq for example are founded on type theory, not set theory.
Lawrence Paulson is a great person to clarify those topics (Isabelle/HOL is not based on dependent types, yet it can prove most maths).
usually you're more interested in better ergonomics: can you do X with less work?
it's like picking a programming language - depending on what you're attempting, some will be more helpful.
and ZFC is a lot more low-level than what day-to-day mathematics usually bothers with. So most mathematicians actually work in an informally understood higher-order wrapper, hoping that what they write sufficiently explains the actual "machine code".
the idea then behind adopting alternative foundations is that these come with "batteries included" and map more directly to the domain language.
actually, having faith that what they write could compile, run, and pass tests, but never doing so.
But in many cases, it's extra effort for not much reward. The patterns which most mathematicians are interested in are (generally) independent of the particular foundations used to realize them. Whether one invests the effort into formal verification depends on how hard the argument is and how crucial the theorem is.
The former is almost entirely sets, while the latter is necessarily types.
Been a long way towards it. \o/
https://www.theregencyballroom.com/events/detail/?event_id=1...
This is enormously off topic but if one person sees this and ends up not missing the show then it was worth mentioning. I think there’s a reasonable crossover between math, discrete math, hacking, and mathcore :)
let x = x in x
Completely encapsulates a countable infinity.
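In the same spirit, a finite Python definition that stands for the whole countable stream (a toy sketch of the idea, nothing authoritative):

    import itertools

    naturals = itertools.count(0)   # a finite expression denoting 0, 1, 2, ...
    # only the elements you actually ask for are ever materialized:
    print(list(itertools.islice(naturals, 5)))   # [0, 1, 2, 3, 4]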
um... no... computer science is very concerned with the infinite. I'm surprised Quanta published this; I always think highly of their reporting.
I'm really torn about this, because I think they're providing a valuable service. But their general formula doesn't diverge a whole lot from run-of-the-mill pop-science books: a vague, clickbaity title and then an article that focuses on personalities and implications of discoveries while glancing over a lot of important details (and not teaching much).
I feel like I’m being a bit curmudgeonly, but I don’t read them much any more.
But I have come to think, well actually, "approximate" is doing some heavy lifting there. AND I have never used infinity for anything except to say "look, I don't know how high this should go, so go as far as you can go, and double that", which is really saying: you are bound by finite boundaries, you'll have to work within them, and the uncountable thing that I was thinking about is really finite.
Edit: Think of it like this
We know that it's most likely that the universe is infinite, but we can only determine how big it is by how far we can see, which is bounded by the speed of light, and the fact that we can only see matter emitting light (I'm being careful here, if the big bang theory is right, and I am understanding it correctly, there is a finite amount of matter in the universe, but the universe itself is infinite)
Also, a small aside: there’s a finite amount of matter in the visible universe. We could have infinite matter in an infinite universe.
> Set theorists use the language of logic, computer scientists the language of algorithms.
Computer science doesn’t use logic? Hello, Booleans.
So lazy, especially when you can ask an AI to tell you if you’re saying something stupid.
However, I also am starting to believe that infinity doesn't exist.
Or more specifically, I want to argue that infinity is not a number, it is a process. When you say {1, 2, 3, ... } the "..." represents a process of extending the set without a halting condition.
There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
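To make the "process" concrete, here is a toy Python sketch of Euclid's construction (my own illustration, not anyone's canonical code): given any finite list of primes, it returns a prime that is not on the list, so the list can always be extended.

    def another_prime(primes):
        # Euclid: any prime factor of (product of the list) + 1
        # cannot be on the list, because it would have to divide 1.
        n = 1
        for p in primes:
            n *= p
        n += 1
        d = 2
        while n % d != 0:   # the smallest divisor > 1 of n is prime
            d += 1
        return d

    print(another_prime([2, 3, 5]))   # 31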
Sure, but ordinal numbers exist and are useful. It's impossible to prove Goodstein's theorem without them.
https://en.wikipedia.org/wiki/Ordinal_number
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
The statement and proof of the theorem are quite accessible and eye-opening. I think the number line with ordinals is way cooler than the one without them.
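For anyone who wants to poke at it, here is a rough Python sketch (mine, purely illustrative) of how a Goodstein sequence is generated: write the current term in hereditary base-b notation, replace every b with b+1, subtract 1, and repeat with the next base.

    def bump(n, b1, b2):
        # rewrite n from hereditary base b1 to hereditary base b2
        if n == 0:
            return 0
        total, power = 0, 0
        while n > 0:
            digit = n % b1
            total += digit * b2 ** bump(power, b1, b2)
            n //= b1
            power += 1
        return total

    def goodstein(n, steps=6):
        base, terms = 2, []
        for _ in range(steps):
            terms.append(n)
            if n == 0:
                break
            n = bump(n, base, base + 1) - 1
            base += 1
        return terms

    print(goodstein(3))   # [3, 3, 3, 2, 1, 0]: reaches zero quickly
    print(goodstein(4))   # [4, 26, 41, 60, 83, 109]: grows for an absurdly long time before it, too, hits zero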
I went down the rabbithole, and as far as I can tell, you have to axiomatically assume infinities are real in order to prove Goodstein’s theorem.
I challenge the existence of ordinal numbers in the first place. I’m calling into question the axioms that conjure up these ordinal numbers out of (what I consider sketchy) logic.
But it was a really fun rabbithole to get into, and I do appreciate the elegance of the Goodstein’s theorem proof. It was a little mind bending.
For example, S(0) is 1, S(S(0)) is 2, S(S(S(0))) is 3, and so on.
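Purely as a toy illustration (my own Python, treating the numerals as syntax):

    ZERO = '0'

    def S(t):
        # S here is just a constructor of expressions, not arithmetic
        return ('S', t)

    def value(t):
        # interpret a numeral expression as an ordinary int
        return 0 if t == ZERO else 1 + value(t[1])

    print(value(S(S(S(ZERO)))))   # 3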
There is no end of a number line. There are lines, and line segments. Only line segments are finite.
> There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
You misunderstand the concept of infinity. Cantor's diagonal argument shows that any attempt to list all the real numbers necessarily misses some of them, so strictly larger infinities must always exist. "Infinity'th" is not a place on a number line; an infinite set may be countable or uncountable, depending on what kind of infinity you're working with.
There are infinities with higher cardinality than others. Infinity relates to set theory, and if you try to simply imagine it as a "position" in a line of real numbers, you'll understandably have an inconsistent mental model.
I highly recommend checking out Cantor's diagonal argument. Mathematicians didn't invent infinity as a curiosity; it solves real problems and implies real constraints. https://en.wikipedia.org/wiki/Cantor's_diagonal_argument
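A toy Python rendering of the diagonal step (my own sketch; the names are made up): given any claimed enumeration of infinite 0/1 sequences, build one the enumeration necessarily missed.

    def diagonalize(enumeration):
        # enumeration(i)(j) is the j-th digit of the i-th listed sequence;
        # the returned sequence differs from the i-th one at position i.
        return lambda n: 1 - enumeration(n)(n)

    # example enumeration: the i-th sequence is the binary expansion of i
    listed = lambda i: (lambda j: (i >> j) & 1)
    missing = diagonalize(listed)
    print([missing(j) for j in range(8)])   # disagrees with sequence j at digit j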
S is a function symbol. S(0) (in PA) is not a function. It is an expression involving one.
(Granted, there are other objects that may seem ugly which can only be constructed by reference to infinite things.)
You can do a lot of things without assuming that the natural numbers keep going, but it is plain awful to work that way.
Or in the other direction, if numbers stop at 3, that certainly won't falsify "every number above 7 is the sum of two other numbers". It will prove that it's vacuously true, since there are no numbers above 7 to check. And the extremely strong conjecture immediately proves that the extremely weak one is false.
It is very tempting to use it in place of a number, and so mathematicians (being humans) did that.
Yes, mathematicians hack too. Maybe they even invented it.
When mathematicians are working with infinite objects, it is not by plugging in "infinity" somewhere a number should go, in order to imagine that a rule which would construct an object if that were a number still constructs an object. No. Rather, (in ZF-like foundations) the axiom of infinity assumes that there is a set whose elements are exactly the finite ordinals. (Or, assumes something equivalent to that.) From this, various sets are constructed such that the set is e.g. in bijection with a proper subset of itself (there are a handful of different definitions of a set being "infinite"; under the axiom of choice they are all equivalent, but without AoC they come apart, which is why I say "e.g.").
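(A trivially small Python illustration of the "in bijection with a proper subset of itself" sense, entirely my own: n -> 2n maps the naturals injectively onto the even numbers, a proper subset.)

    double = lambda n: 2 * n
    # distinct inputs give distinct outputs, and every output is even
    print([double(n) for n in range(6)])   # [0, 2, 4, 6, 8, 10]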
In various contexts in mathematics, there are properties relevant to that specific context which correspond to this notion, and which are therefore also given the name "infinite". For example, in the context of von Neumann algebras, a projection is called a "finite projection" if there is no strict subprojection that is Murray-von Neumann equivalent to it. Or, in the context of ordered fields, an element may be called "infinite" if it is greater than every natural number.
Usually, the thing that is said is that some object is "infinite", with "infinite" an adjective, not saying that some object "is infinity" with "infinity" a noun. One exception I can think of is in the context of the Surreal Numbers, in which the Gap between finite Surreal Numbers and (positive) infinite Surreal Numbers, is given the name "infinity". But usually objects are not given the name "infinity", except as like, a label for an index, but this is just a label.
I suppose that in the Riemann sphere, and other one-point compactifications, one calls the added point "the point at infinity". But, this kind of construction isn't more "make believe" than other things; one can do the same "add in a point at infinity" for finite fields, as in, one can take the projective line for a finite field, which adds a point that is doing the same thing as the "point at infinity" in the complex projective line (i.e. the Riemann sphere).
Reality does, stuff happens, etc. But physics is an abstract model we use to make predictions about reality — and triangles are part of that abstract model, not things that actually exist.
You can’t, for instance, show me a triangle. Just objects that are approximated by the abstract concept in physics.
In a way, the axiom of infinity seems to behave much like other axioms that assert the existence of even larger mathematical "universes": it's worth being aware of what parts of a mathematical development are inherently dependent on it as an assumption, which is ultimately a question of so-called reverse mathematics.
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
I agree finitism and ultrafinitism are worth more development. Infinity is a hack in the sense that it effectively represents a program that doesn't halt, and when you start thinking of it this way, then all sorts of classical mathematical arguments start looking a little fishy, at least when applying math to the real world. For instance, I do think this is at the heart of some confusions and difficulties in physics.
I claim the reason is that 5 is prime, while 10 is composite (10 = 5 times 2).
Therefore, 5 and 10, and 2, exist.
In any case existence of mathematical objects is a different meaning of existence to physical objects. We can say a mathematical object exists just by defining it, as long as it doesn't lead to contradiction.
You try answering the question without speaking of 5 or 10.
That is my argument.
What you’ve pointed out is that the interactions of your cards, when confined to a particular set of manipulations and placements, are equivalent to a certain abstract model.
I think Baez's paper, Struggles with the Continuum, shows a lot of past difficulties we've had that resulted from this:
What the hell. What about Type Theory?
See this mathoverflow response here https://mathoverflow.net/a/437200/477593
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer necessarily thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
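To make that concrete, here is a hedged Python sketch of the von Neumann encoding (toy code, not how any real library defines integers):

    def ordinal(n):
        # von Neumann: the number n is the set {0, 1, ..., n-1}
        s = frozenset()
        for _ in range(n):
            s = s | frozenset([s])   # successor step: s U {s}
        return s

    three, five = ordinal(3), ordinal(5)
    print(three in five)   # True: "is 3 an element of 5?" is a legitimate question here
    print(three < five)    # True: 3 is also a proper subset of 5

In a typed setting the question would not even compile: an int has no membership relation with another int.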
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
The reason ZFC is used isn't because it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but because the axioms are elegantly minimal and parsimonious.
The notions of algorithm and computer science were invented to discuss the behavior of infinite sequences: examining whether there are descriptions of real numbers. This was extended by connecting complicated infinite structures like the hyperreals with decision theory problems.
That descriptions of other infinite sets also corresponds to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (eg) decision theory algorithms is worth noting — but hardly surprising. Particularly because the sets examined arose from graph problems, ie, a network.
From my vantage, there are two strains that make such discoveries unsurprising:
- Curry-Howard generally seems to map “nice” to “nice”, at least in the cases I’ve dealt with;
- modern mathematics is all about finding such congruences between domains (eg, category theory) and we seem to find ways to embed theories all over; to the point where my personal hunch is that we’re vastly underestimating the “elephant problem”, in which having started exploring the elephant in different places, we struggle to see we’re exploring the same object.
Neither of those is a technical argument, but I hope it helps understand why I’d be coming to the question from a different perspective and hence different level of surprise.
To me this is quite surprising. Distributed systems were not designed to solve measure theory problems.
an object doing something that it wasn’t designed for seems to me to be the definition of “surprising”
That you can express the constraints of network colorings (ie, the measure theory problem) as network algorithms strikes me as a “well duh” claim — at least if you take that Curry-Howard stuff seriously.
Nothing in the result in the article talks about types, and even if it could be, it’s not clear that the CH isomorphism would be a good lens to do so.
I’m not “sprinkling magic powder”, but using the very core of the correspondence:
A proof that your network has some property is an algorithm to construct an instance of appropriate type from your network.
In this case, we’re using algorithms originally designed for protocols in CS to construct a witness of a property about a graph coloring. The article details exactly this realization: during a lecture, he saw that the types of things constructed by these algorithms correspond to the types of objects he works with.
The networks on the measure theory side and on the algorithmic side are not the same. They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes.
The correspondence outlined is also extremely subtle. Measurable colorings are related to speed of consensus.
You make it sound like this is a result of the type: "To prove that a coloring exists, I prove that an algorithm that colors the network exists." Which it is not, as far as I understand.
It seems to me you are mischaracterizing CH here as well:
> A proof that your network has some property is an algorithm to construct an instance of appropriate type from your object.
A proof that a certain network has some property is an algorithm that constructs an instance of an appropriate type that expresses this fact from the axioms you're working from.
This is the crux of the proof, as I understand it: to classify network coloring measurability, I classify algorithms that color the network.
Which I can do because there’s a correspondence between network colorings (in graph theory) and algorithms that color networks (in CS). Which I’m arguing is an instance of CH: they’re equivalent things, so classifying either is classifying both.
> They are not even the same cardinality. One has uncountably many nodes, the other has countably many nodes. […] Measurable colorings are related to speed of consensus.
Yes — this is why the work is technically impressive, because proving the intuition from above works when extending to the infinite case is non-trivial. But that doesn’t change that fundamentally we’re porting an algorithm. I’m impressed by the approach to dealing with labels in the uncountable context which allows the technique to work for these objects — but that doesn’t mean I’m surprised such a thing could work. Or that work on network colorings (in CS) turned out to have techniques useful for network colorings (in math).
> It seems to me you are mischaracterizing CH
You then go on to make some quibble about my phrasing that, contrary to your claim about mischaracterizing, doesn’t conflict with what you wrote.
Edit: removing last paragraph; didn’t realize new author.
It makes perfect sense to me.
You can define an additive group $\frac{1}{n}\mathbb{Z}$ if you like. However (for $n > 1$) it would not even be a ring, because it would not be closed under multiplication. (Its closure under multiplication would be $\mathbb{Z}[\frac{1}{n}]$, which would not have a smallest positive element, contrary to your design criterion.)
(Of course, you could define a partial multiplication on it. I don't think there's a good name for such a thing. I guess you could just call it "a subgroup of the rational numbers under addition, equipped with a partial multiplication operation that is defined and agrees with the usual multiplication on rational numbers when the result would still be in the subgroup")
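A quick sanity check with Python's fractions module (illustrative only; the membership helper is my own):

    from fractions import Fraction

    def in_subgroup(x, n):
        # x lies in (1/n)Z exactly when n*x is an integer
        return (n * x).denominator == 1

    n = 2
    a, b = Fraction(1, n), Fraction(1, n)
    print(in_subgroup(a + b, n))   # True: closed under addition
    print(in_subgroup(a * b, n))   # False: 1/4 is not in (1/2)Z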
It wasn’t just that.. all over the article..
I am getting really strong LLM article vibes.
…that existed in the world of descriptive set theory — about the number of colors required to color certain infinite graphs in a measurable way.
To Bernshteyn, it felt like more than a coincidence. It wasn’t just that computer scientists are like librarians too, shelving problems based on how efficiently their algorithms work. It wasn’t just that these problems could also be written in terms of graphs and colorings. Perhaps, he thought, the two bookshelves had more in common than that. Perhaps the connection between these two fields went much, much deeper. Perhaps all the books, and their shelves, were identical, just written in different languages — and in need of a translator.
The reason is simple - numbers are cuts in the continuum while infinity isn't. It should not even be a symbolic notion of a very large number. This is not to say infinity doesn't exist. It doesn't exist as a number and doesn't mix with numbers.
Limits could have been defined by saying "as x increases without bound" instead of "as x approaches infinity". There is no target called infinity to approach.
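For what it's worth, the standard definition already reads that way; no object named "infinity" appears in it: $\lim_{x\to\infty} f(x) = L$ means $\forall \varepsilon > 0\ \exists M\ \forall x > M : |f(x) - L| < \varepsilon$, i.e. "f(x) stays within $\varepsilon$ of $L$ once $x$ is large enough."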
Cantor's stuff can easily be trashed. The very notion of "larger than" belongs to finite numbers. This comparative notion doesn't apply to concepts that cannot be quantified using numbers. Hence one can't say some kind of infinity is larger than the other kinds.
Similarly, his diagonal argument about 1-to-1 mapping cannot be extended to infinities, as there is no 1-to-1 mapping that can make sense for those which are not numbers or uniquely identifiable elements. The mapping is broken. No surprise you get weird results and multiple infinities or whatever came to his mind when he was going through stressful personal situations.
This sounds like ranting from someone who doesn't deeply understand the implications of set theory or infinite sets. Cardinality is a real thing, with real consequences, such as Gödel's incompleteness theorems. What "weird results" are tripping you up?
> when he was going through stressful personal situations
Ad hominem. Let's stick to arguing the subject material and not personally attacking Cantor.
https://en.wikipedia.org/wiki/Cantor's_diagonal_argument#Con...
Ernst Zermelo resolved this by stating that his axioms should be interpreted within second-order logic, and as such it doesn't contradict Cantor's theorem since the Löwenheim–Skolem theorem only applies in first-order logic.
Further, all math is idealist bullshit; but it's useful idealist bullshit because, when you can map representations of physical systems into it in a way that the objects act like the mathematical objects that represent them, then you can achieve useful predictive results in the real world. This holds true for results that require a concept of infinities in some way to fully operationalize: they still make useful predictions when the axiomatic conditions are met.
For the record, I'm not fully against what you're saying; I personally hate the idea of the axiom of choice being commonly accepted; I think it was a poorly founded axiom that leads to more paradoxes than it helps things. I also wish the axiom of the excluded middle was actually tossed out more often, for similar reasons. However, when the systems you're analyzing do behave well under either axiom, the math works out to be so much easier with both of them, so in they stay (until you hit things like Banach-Tarski and you just kinda go "neat, this is completely unphysical abstract delusioneering", but you kinda learn to treat results like that like you do when you renormalize poles in analytic functions: carefully and with a healthy dose of "don't accidentally misuse this theorem to make unrealistic predictions when the conditions aren't met")
I can say it can not be extended or applied, because the operation can not be "completed". This is not because it takes infinite time. It is because we can't define completion of the operation, even if it is a snapshot imagination.
However, if you're dealing with a problem where you can't always usefully distinguish between elements across arbitrary set-like objects, then it's not a useful axiom and ZFC is not the formalism you want to use. For most problems we analyze in the real world, that's actually something we can usefully assume, hence why it's such a successful and common theory, even if it leads to physical paradoxes like Banach-Tarski, as mentioned.
Mathematicians, in practice, fully understand what you mean with your complaint about "completion," but, the beauty of these formal infinities is the guarantee it gives you that it'll never break down as a predictive theory no matter the length of time or amount of elements you consider or the needed level of precision; the fact that it can't truly complete is precisely the point. Also, within the formal system used, we absolutely can consistently define what the completion would be at "infinity," as long as you treat it correctly and don't break the rules. Again, this is useful because it allows you to bridge multiple real problems that seemingly were unrelated and it pushes "representative errors" to those paradoxes and undefined statements of the theory (thanks, Gödel).
If it helps, the transfinite cardinalities (what you call infinity) you are worried about are more related to rates than counts, even if they have some orderable or count-like properties. In the strictest sense, you can actually drop into Archimedean math, which you might find very enjoyable to read about or use, and it, in a very loose sense, kinda pushes the idea of infinity from rates of counts to rates of reaching arbitrary levels of precision.
I can't do that for real numbers.
The more I learn about this stuff, the more I come to understand how the quantitative difference between cardinalities is a red herring (e.g. CH being independent of ZFC). It's the qualitative difference between these two sets that matters. The real numbers are richer, denser, smoother, etc. than the natural numbers, and those are the qualities we care about.
I don't think this definition is that weird. For example, by 'larger' I might say I can easily 'fit' all the rational numbers in the real numbers, but cannot fit the real numbers in the rational numbers.
So, let's inspect pi. It's a fraction, the precision of which depends on how much you zoom in on it. You can take it as a constant just for the sake of having a name for it.
The zooming is finite for every fraction.
How?
> Problem is with the assumption that some numbers such as pi or sqrt(2) can have absolute and full precision.
That's the least of your problems, I couldn't even draw different sized splodges per real number.
Only on hackernews.
Most of math history is stellar, studded with great works of geniuses, but some results were sanctified and prohibited for questioning due to various forces that were active during the times.
Applications of regular logic such as comparison, mapping, listing, diagonals, uniqueness - these are all rules that were bred in the realm of finiteness and the physical world. You can't use them to prove theorems about things that are not finite.
Could you be kind enough to explain the phrase "set of all integers" when the word "all" cannot apply to an unbounded quantity? I think the word "all" is used loosely to extend its meaning as used for finite sets to a non-existent, unbounded set. For example, things such as all Americans, or all particles in the universe, have a meaning because they have a boundary criterion. What is "all integers"?
I think one needs to first define the realm of applicability, or domain, for concepts such as comparison, 1-to-1 mapping, listing, diagonals, uniqueness, "all", etc.