On one hand, "rational" and "algebraic" are far more pervasive concepts than mathematicians are ever taught to believe. The key here is formal power series in non-commuting variables, as pioneered by Marcel-Paul Schützenberger. "Rational" corresponds to finite state machines, and "Algebraic" corresponds to pushdown automata, the context-free grammars that describe most programming languages.
On the other hand, "Concrete Mathematics" by Donald Knuth, Oren Patashnik, and Ronald Graham (I never met Oren) popularizes another way to organize numbers: The "endpoints" of positive reals are 0/1 and 1/0. Subdivide this interval (any such interval) by taking the center of a/b and c/d as (a+c)/(b+d). Here, the first center is 1/1 = 1. Iterate. Given any number, its coordinates in this system is the sequence of L, R symbols to locate it in successive subdivisions.
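To make that concrete, here's a small sketch (my own, not from the book) that computes the L/R address of a positive rational by walking this mediant subdivision:

    # A minimal sketch: the L/R address of a positive rational under repeated
    # mediant subdivision, starting from the "endpoints" 0/1 and 1/0.
    from fractions import Fraction

    def lr_address(x: Fraction) -> str:
        lo_n, lo_d = 0, 1            # left endpoint 0/1
        hi_n, hi_d = 1, 0            # right endpoint 1/0
        path = []
        while True:
            med = Fraction(lo_n + hi_n, lo_d + hi_d)   # mediant (a+c)/(b+d)
            if x == med:
                return "".join(path)
            if x < med:
                path.append("L")
                hi_n, hi_d = med.numerator, med.denominator
            else:
                path.append("R")
                lo_n, lo_d = med.numerator, med.denominator

    print(lr_address(Fraction(3, 7)))   # -> "LLRR"

For a rational the address terminates; for an irrational it goes on forever, and the runs of R's and L's essentially read off the continued-fraction partial quotients, which is why e (whose continued fraction is the very regular [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]) has such a simple address.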
Any computer scientist should be chomping at the bit here: What is the complexity of the L, R sequence that locates a given number?
From this perspective, the number "e" is one of the simpler numbers known, not lost in the unwashed multitude of "transcendental" numbers.
Most mathematicians don't know this. The idea generalizes to barycentric subdivision in any dimension, but the real line is already interesting.
1: Almost all numbers are transcendental.
2: If you could pick a real number at random, the probability of it being transcendental is 1.
3: Finding new transcendental numbers is trivial. Just add 1 to any other transcendental number and you have a new transcendental number.
Most of our lives we deal with non-transcendental numbers, even though those are infinitely rare.
Even crazier than that: almost all numbers cannot be defined with any finite expression.
I've commented on this several times. Here's the most recent one: https://news.ycombinator.com/item?id=44366342
Basically you can't do a standard countability argument because you can't enumerate definable objects because you can't uniformly define "definability." The naive definition falls prey to Liar's Paradox type problems.
No, this is a standard fallacy that is covered in most introductory mathematical logic courses (under Tarski's undefinability of truth result).
> Define a "number definition system" to be any (maybe partial) mapping from finite-length strings on a finite alphabet to numbers.
At this level of generality with no restrictions on "mapping", you can define a mapping from finite-length strings to all real numbers.
In particular there is the Lowenheim-Skolem theorem, one of whose corollaries is that if you have access to powerful enough maps, the real numbers become countable (the Lowenheim-Skolem theorem in particular says that there is a countable model of all the sets of ZFC, and more generally that if a first-order theory has a single infinite model, then it has models of every infinite cardinality).
Normally you don't have to be careful about defining maps in an introductory analysis course because it's usually difficult to accidentally create maps that are beyond the ability of ZFC to define. However, you have to be careful in your definition of maps when dealing with things that have the possibility of being self-referential because that can easily cross that barrier.
Here's an easy example showing why "definable real number" is not well-defined (or more directly that its complement "non-definable real number" is not well-defined). By the axiom of choice in ZFC we know that there is a well-ordering of the real numbers. Fix this well-ordering. The set of all undefinable real numbers is a subset of the real numbers and therefore well-ordered. Take its least element. We have uniquely identified a "non-definable" real number. (Variations of this technique can be used to uniquely identify ever larger swathes of "non-definable" real numbers and you don't need choice for it, it's just more involved to explain without choice and besides if you don't have choice, cardinality gets weird).
Again, as soon as you start talking about concepts that have the potential to be self-referential such as "definability," you have to be very careful about what kinds of arguments you're making, especially with regards to cardinality.
Cardinality is a "relative" concept. The common intuition (arising from the property that set cardinality forms a total ordering under ZFC) is that all sets have an intrinsic "size" and cardinality is that "size." But this intuition occasionally falls apart, especially when we start playing with the ability to "inject" more maps into our mathematical system.
Another way to think about cardinality is as a generalization of computability that measures how "scrambled" a set is.
We can think of indexing by the natural numbers as "unscrambling" a set back to the natural numbers.
We begin with complexity theory where we have different computable ways of "unscrambling" a set back to the natural numbers that take more and more time.
Then we go to computability theory where we end up at non-computably enumerable sets, that is sets that are so scrambled that there is no way to unscramble them back to the natural numbers via a Turing Machine. But we can still theoretically unscramble them back to the natural numbers if we drop the computability requirement. At this point we're at definability in our chosen mathematical theory and therefore cardinality: we can define some function that lets us do the unscrambling even if the actual unscrambling is not computable. But there are some sets that are so scrambled that even definability in our theory is not strong enough to unscramble them. This doesn't necessarily mean that they're actually any "bigger" than the natural numbers! Just that they're so scrambled we don't know how to map them back to the natural numbers within our current theory.
This intuition lets us nicely resolve why there aren't "more" rational numbers than natural numbers but there are "more" real numbers than natural numbers. In either case it's not that there's "more" or "less", it's just that the rational numbers are less scrambled than the real numbers, where the former is orderly enough that we can unscramble it back to the natural numbers with a highly inefficient, but nonetheless computable, process. The latter is so scrambled that we have no way in ZFC to unscramble them back (but if you gave us access to even more powerful maps then we could unscramble the real numbers back to the natural numbers, hence Lowenheim-Skolem).
It doesn't mean that in some deep Platonic sense this map doesn't exist. Maybe it does! Our theory might just be too weak to be able to recognize the map. Indeed, there are logicians who believe that in some deep sense, all sets are countable! It's just the limitations of theories that prevent us from seeing this. (See for example the sketch laid out here: https://plato.stanford.edu/entries/paradox-skolem/#3.2). Note that this is a philosophical belief and not a theorem (since we are moving away from formal definitions of "countability" and more towards philosophical notions of "what is 'countability' really?"). But it does serve to show how it might be philosophically plausible for all real numbers, and indeed all mathematical objects, to be definable.
I'll repeat Hamkins' lines from the Math Overflow post because they nicely summarize the situation.
> In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines.
> And therefore neither are you able to do this in general. The claims made in both in your question and the Wikipedia page [no longer on the Wikipedia page] on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.
Here's another lens that doesn't answer that question, but offers another intuition of why "the fact that there are a countable number of strings and an uncountable number of reals" doesn't help.
For convenience I'm going to distinguish between "collections" which are informal groups of elements and "sets" which are formal mathematical objects in some kind of formal foundational set theory (which we'll assume for simplicity is ZFC, but we could use others).
My argument demonstrates that the "definable real numbers" is not a definition of a set. A corollary of this is that the subcollection of finite strings that form the definitions of unique real numbers is not necessarily an actual subset of the finite strings.
Your appeal that such definitions are themselves clearly finite strings is only enough to demonstrate that they are a subcollection, not a subset. You can only demonstrate that they are a subset if you could demonstrate that the definable real numbers form a subset of the real numbers which as I prove you cannot.
Then any cardinality arguments fail, because cardinality only applies to sets, not collections (which ZFC can't even talk about).
After all, strictly speaking, an uncountable set does not mean that such a set is necessarily "larger" than a countable set. All it means is that our formal system prevents us from counting its members.
There are subcollections of the set of finite strings that cannot be counted by any Turing Machine (non-computably enumerable sets). It's not so crazy that there might be subcollections of the set of finite strings that cannot be counted by ZFC. And then there's no way of comparing the cardinality of such a subcollection with the reals.
Another way of putting it is this: you can diagonalize your way out of any purported injection between the reals and the natural numbers. I can just the same diagonalize my way out of any purported injection between the collection of definable real numbers and the natural numbers. Give me such an enumeration of the definable real numbers. I change every digit diagonally. This uniquely defines a new real number not in your enumeration.
Perhaps even more shockingly, I can diagonalize my way out of any purported injection from the collection of finite strings uniquely identifying real numbers to the set of all natural numbers. You purport to give me such an enumeration. I add a new string that says "create the real number whose nth digit is different from the nth digit of the real number defined by the nth definition string." Hence such a collection is an uncountable subcollection of a countable set.
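If it helps, here's a toy rendering of that diagonal move (the names and types are mine, purely illustrative):

    # Toy sketch: `enum(n)` is assumed to give the decimal-digit function of the
    # n-th listed real; the returned function is a digit stream on no line of the list.
    from typing import Callable

    Digits = Callable[[int], int]          # k -> k-th decimal digit

    def diagonal(enum: Callable[[int], Digits]) -> Digits:
        def new_digits(n: int) -> int:
            d = enum(n)(n)                 # n-th digit of the n-th listed real
            return 4 if d == 7 else 7      # change it (avoid 0/9 to dodge 0.999... issues)
        return new_digits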
Let's start with a mirror statement. Can you exhibit a bijection between definitions and the subset of the real numbers they are supposed to refer to? It seems like any purported such bijection could be made incoherent by a similar minimization argument.
In particular, no such function from the finite strings to the real numbers, according to the axioms of ZFC can exist, but a more abstract mapping might. In much the same way that no such function from definitions to (even a subset of) the real numbers according to the axioms of ZFC can exist, but you seem to believe a more abstract mapping might.
I think your thoughts are maybe something along these lines:
"Okay so fine maybe the function that surjectively maps definitions to the definable real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [minimality]. Clearly it definitely exists though right? After all the set of all finite strings is clearly smaller than the real numbers and it's gotta be one of the maps from finite strings to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."
But I can derive an almost exactly analogous thing for cardinality.
"Okay so fine maybe the function that surjectively maps the natural numbers to the real numbers cannot exist, formally. It's a clever little trick that whenever you try to build such a function you can prove a contradiction using a version of the Liar's Paradox [diagonalization]. Clearly it definitely exists though right? After all the set of all natural numbers is clearly just as inexhaustible as the real numbers and it's gotta be one of the maps from the natural numbers to the real numbers, even if the function can't formally exist. That's just a weird limitation of formal mathematics and doesn't matter for the 'real world'."
I suspect that you feel more comfortable with the concept of cardinality than definability and therefore feel that "the set of all finite strings is clearly 'smaller' than the real numbers" is a more "solid" base. But actually, as hopefully my phrasing above suggests, the two scenarios are quite similar to each other. The formalities that prevent you from building a definability function are no less artificial than the formalities that prevent you from building a surjection from the natural numbers to the real numbers (and indeed fundamentally are the same: the Liar's Paradox).
So, to understand how I would build a map that maps the set of finite strings to the real numbers, when no such map can formally exist in ZFC, let's begin by understanding how I would rigorously build a map that maps all sets to themselves (i.e. the identity mapping), even when no such map can formally exist as a function in ZFC (because there is no set of all sets).
(I'm choosing the word "map" here intentionally; I'll treat "function" as a formal object which ZFC can prove exists and "map" as some more abstract thing that ZFC may believe cannot exist).
We'll need a detour through model theory, where I'll use monoids as an illustrative example.
The definition of an (algebraic) monoid can be thought of as a list of logical axioms and vice versa. Anything that satisfies a list of axioms is called a model of those axioms. So e.g. every monoid is a model of "monoid theory," i.e. the axioms of a monoid. Interestingly, elements of a monoid can themselves be groups! For example, let's take the set {{}, {0}, {0, 1}, {0, 1, 2}, ...} as the underlying set of a monoid whose monoid operation is just set union and whose elements are all monoids under modular addition.
In this case not only is the parent monoid a model of monoid theory, each of its elements is also a model of monoid theory. We can then in theory use the parent monoid to potentially "analyze" each of its individual elements to find out attributes of each of those elements. In practice this is basically impossible with monoid theory, because you can't say many interesting things with the monoid axioms. Let's turn instead to set theory.
What does this mean for ZFC? Well, ZFC is a list of axioms, which means it can also be viewed as a definition of a mathematical object, in this case a set universe (not just a single set!). And just like how a monoid can contain elements which themselves are monoids, a set universe can contain sets that are themselves set universes.
In particular, for a given set universe of ZFC, we know that in fact there must be a countable set in that set universe, which itself satisfies ZFC axioms and is therefore a set universe in and of itself (and moreover such a countable set's members are themselves all countable sets)!
Using these "miniature" models of ZFC lets us understand a lot of things that we cannot talk about directly within ZFC. For example we can't make functions that map from all sets to all sets in ZFC formally (because the domain and the codomain of a function must both be sets and there is no set of all sets), but we can talk about functions from all sets to all sets in our small countable set S which models ZFC, which then we can use to potentially deduce facts about our larger background model. Crucially though, that function from all sets to all sets in S cannot itself be a member of S, otherwise we would be violating the axioms of ZFC and S would no longer be a model of ZFC! More broadly, there are many sets in S, which we know because of functions in our background model but not in S, must be countable from the perspective of our background model, but which are not countable within S because S lacks the function to realize the bijection.
This is what we mean when we talk about an "external" view that uses objects outside of our miniature model to analyze its internal objects, and an "internal" view that only uses objects inside of our miniature model.
Indeed this is how I can rigorously reason about an identity map that maps all sets to themselves, even when no such identity function exists in ZFC (because again the domain and codomain of a function must be sets and there is no set of all sets!). I create an "external" identity map that is only a function in my external model of ZFC, but does not exist at all in my set S (and hence S can generate no contradiction to the ZFC axioms it claims to model because it has no such function internally).
And that is how we can talk about the properties of a definability map rigorously without being able to construct one formally. I can construct a map, which is a function in my external model but not in S, that maps the finite strings of S (encoded as sets, as all things are if you take ZFC as your foundation) that form definitions to some subset of the real numbers in S. But there are multiple such maps! Some maps that map the finite strings of S to the real numbers "run out of finite strings," but we know that all the elements of S are themselves countable, which includes the real numbers (or at least S's conception of the real numbers)! Therefore, we can construct a bijective mapping of the finite strings of S to the real numbers of S. Remember, no such function exists in S, but this is a function in our external model of ZFC.
Since this mapping is not a function within S, there is no contradiction of Cantor's Theorem. But it does mean that such a mapping from the finite strings of S to the real numbers of S exists, even if it's not as a formal function within S. And hence we have to grapple with the problem of whether such a mapping likewise exists in our background model (i.e. "reality"), even if we cannot formally construct such a mapping as a function within our background model.
And this is what I mean when I say it is possible for all objects to have definitions and to have a mapping from finite strings to all real numbers, even if no such formal function exists. Cardinality of sets is not an absolute property of sets; it is relative to what kinds of functions you can construct. Viewed through this lens, the fact that there is no satisfiability function that maps definitions to the real numbers is just as real a fact as the fact that there is no surjective function from the natural numbers to the real numbers. It is strange to say that the former is just a "formality" and the latter is "real."
For more details on all this, read about Skolem's Paradox.
Whoops I meant monoids. I started with groups of groups but it was annoying to find meaningful inverse elements.
https://en.wikipedia.org/wiki/Constructivism_%28philosophy_o...
So not all infinite sequences can be uniquely specified by a finite description.
√2 is a finite description, and so is the definition of π, but since there is no way to map the abstract set of "finite descriptions" surjectively onto the set of infinite sequences, you find that any one approach will leave holes.
Quick answer: math[0]
Slightly longer answer: decimal numbers between 0 and 1 can be written as the sum a_1*10^-1 + a_2*10^-2 + ... + a_i*10^-i + ... where each a_i is one of 0,1,2,3,4,5,6,7,8,9. For series of this shape you can prove that two sums are equal if and only if the sequences of digits are all the same (up to the slight complication that 0.0999... = 0.1 and similar).
i tried Math.random(), but that gave a rational number. i'm very lucky i guess?
It's true that no matter what symbolic representation format you choose (binary or otherwise) it will never be able to encode all irrational numbers, because there are uncountably many of them.
But it's certainly false that computers can only represent rational numbers. Sure, there are certain conventional formats that can only represent rational numbers (e.g. IEEE-754 floating point) but it's easy to come up with other formats that can represent irrationals as well. For instance, the Unicode string "√5" is representable as 4 UTF-8 bytes and unambiguously denotes a particular irrational.
> representable in a finite number of digits or bits
Implying a digit-based representation.
As the other person pointed out, this is representing an irrational number unambiguously in a finite number of bits (8 bits in a byte). I fail to see how your original statement was careful :)
> representable in a finite number of digits or bits
I can define sqrt(5) in a hard-coded table on a maths program using a few bytes, as well as all the rules for manipulating it in order to end up with correct results.
Of course if you know that you want the square root of five a priori then you can store it in zero bits in the representation where everything represents the square root of five. Bits in memory always represent a choice from some fixed set of possibilities and are meaningless on their own. The only thing that’s unrepresentable is a choice from infinitely many possibilities, for obvious reasons, though of course the bounds of the physical universe will get you much sooner.
You might be interested in this paper [1] which builds on top of this approach to simulate arbitrarily precise samples from the continuous normal distribution.
Infinities defy simple assumptions about maths, while Maxwell's Demon only needs to ignore the Laws of Thermodynamics.
I'm being serious, not glib, here. "And then do it infinitely many times" doesn't automatically enable any possible outcome, any more than the "multiverse of all possible outcomes" enables hot dog fingers on Jamie Lee Curtis.
When you apply the same test to the output of Math.PI, does it pass?
Of course, we can only measure any quantity up to a finite precision. But the fact that we chose to express the measurement outcome as 3.14159 +- 0.00001 instead of expressing it as Pi +- 0.00001 is an arbitrary choice. If the theory predicts that some path has length equal exactly to 2.54, we are in the same situation - we can't confirm with infinite precision that the measurement is exactly 2.54, we'll still get something like 2.54 +- 0.00001, so it could very well be some irrational number in actual reality.
https://en.wikipedia.org/wiki/Integer: “An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...)”
According to that definition, -0 isn’t an integer.
Combining that with https://en.wikipedia.org/wiki/Rational_number: “a rational number is a number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q”
means there’s no way to write -0 as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.
This is how old temperature-noise based TRNGs can be attacked (modern ones use a different technique, usually a ring oscillator with whitening... although I have heard noise-based is coming back, but I've been out of the loop for a while)
https://en.wikipedia.org/wiki/Hardware_random_number_generat...
https://github.com/usnistgov/SP800-90B_EntropyAssessment
(^^^ this is a fun tool, I recommend playing with it to learn how challenging it is to generate "true" random numbers.)
An infinite precision ADC couldn't be subject to thermal attack because you could just sample more bits of precision. (Of course, then we'd be down to Planck level precision so obviously there are limits, but my point still stands, at least _I_ think it does. :))
It is e^(pi^2/(12 log 2))
Here's where it comes from. For almost all real numbers if you take their continued fraction expansion and compute the sequence of convergents, P1/Q1, P2/Q2, ..., Pn/Qn, ..., it turns out that the sequence Q1^(1/1), Q2^(1/2), ..., Qn^(1/n) converges to a limit and that limit is Lévy's constant.
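A quick numerical sketch of that statement (mine, not rigorous), using a high-precision random rational as a stand-in for a "typical" real:

    # Sketch: for a "typical" x, Q_n^(1/n) for the continued-fraction convergent
    # denominators Q_n approaches Levy's constant e^(pi^2 / (12 ln 2)) ~ 3.2758.
    from fractions import Fraction
    import math, random

    LEVY = math.exp(math.pi ** 2 / (12 * math.log(2)))

    def levy_estimate(x: Fraction, n_terms: int = 40) -> float:
        q_prev, q_curr = 1, 0                              # Q_{-2} = 1, Q_{-1} = 0
        idx = -1
        while idx < n_terms:
            a = x.numerator // x.denominator               # next partial quotient
            q_prev, q_curr = q_curr, a * q_curr + q_prev   # Q_n = a_n * Q_{n-1} + Q_{n-2}
            idx += 1
            x -= a
            if x == 0:                                     # rational input: expansion terminates
                break
            x = 1 / x
        return q_curr ** (1.0 / max(idx, 1))

    x = Fraction(random.getrandbits(300), 1 << 300)        # ~90 digits of a "random real"
    print(levy_estimate(x), LEVY)                          # typically both are near 3.27...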
For context, a number is transcendental if it's not the root of any non-zero polynomial with rational coefficients. Essentially, it means the number cannot be constructed using a finite combination of integers and standard algebraic operations (addition, subtraction, multiplication, division, and integer roots). sqrt(2) is irrational but algebraic (it solves x^2 - 2 = 0); pi is transcendental.
The reason we haven't been able to prove this for constants like Euler-Mascheroni (gamma) is that we currently lack the tools to even prove they are irrational. With numbers like e or pi, we found infinite series or continued fraction representations that allowed us to prove they cannot be expressed as a ratio of two integers.
With gamma, we have no such "hook." It appears in many places (harmonics, gamma function derivatives), but we haven't found a relationship that forces a contradiction if we assume it is algebraic. For all we know right now, gamma could technically be a rational fraction with a denominator larger than the number of atoms in the universe, though most mathematicians would bet the house against it.
Slight clarification, but standard operations are not sufficient to construct all algebraic numbers. Once you get to 5th degree polynomials, there is no guarantee that their roots can be found through standard operations.
Galois went a step further and proved that there existed polynomials whose specific roots could not be so expressed. His proof also provided a relatively straightforward way to determine if a given polynomial qualified.
So it's a bit stronger than the term "closed formula" implies. You can then show explicit examples of degree 5 polynomials which don't fulfill this condition, prove a quantitative statement that "almost all" degree 5 polynomials are like this, explain the difference between degree 4 and 5 in terms of group theory, etc.
For instance, the standard statement that pi is transcendental would become: pi is transcendental over Q (the rational numbers). However, pi is trivially not transcendental over Q(pi), which is the smallest field possible after adding pi to the rational numbers. A more interesting question is whether e is transcendental over Q(pi); as far as I am aware that is still an open problem.
(I'm not sure what "the elements of the base need to be enumerable" means—usually, as above, one speaks of a single base; while mixed-radix systems exist, the usual definition still has only one base per position, and only countably many positions. But the proof of countability of transcendental numbers is easy, since each is a root of a polynomial over $\mathbb Q$, there are only countably many such polynomials, and every polynomial has only finitely many roots.)
Proof of what? Needed for what?
The elements of the number system are the base raised to non-negative integer powers, which of course is an enumerable set.
> transcendental numbers are not enumerable
Category mistake ... sets can be enumerable or not; numbers are not the sort of thing that can be enumerable or not. (The set of transcendental numbers is of course not enumerable [per Georg Cantor], but that doesn't seem to be what you're talking about.)
So why bring some numbers here as transcendental if not proven?
That's really all you can do, given that 3 and 4 are really famous. At this point it is therefore just not possible to write a list of the "Fifteen Most Famous Transcendental Numbers", because this is quite possibly a different list than "Fifteen Most Famous Numbers that are known to be transcendental".
I might be OK with the title "Fifteen Most Famous Numbers that are believed to be transcendental" (even though some of them have been proven to be transcendental), but "Fifteen Most Famous Transcendental Numbers" implies that all the listed numbers are transcendental. In math, stating a claim implies it has been proven. Math is much stricter than most natural (especially empirical) sciences, where everything is based on evidence and some small level of uncertainty might be OK (evidence is always probabilistic).
Yes, in math mistakes happen too (can happen in complex proofs, human minds are not perfect), but in this case the transcendence is obviously not proven. If you say "A list of 15 transcendental numbers" a mathematician will assume all 15 are proven to be transcendental. Will you be OK with claim "P ≠ NP" just because most professors think it's likely to be true without proof? There are tons of mathematical conjectures (such as Goldbach's) that intuitively seem to be true, yet it doesn't make them proven.
Sorry for being picky here, I just have never seen such low standards in real math.
"Fifteen Most Famous Transcendental Numbers" is indeed not the same as "Fifteen Most Famous Numbers that are known to be transcendental". It is also not the same as "Fifteen Most Famous Numbers that have been proven to be transcendental". Instead, it is the same as "Fifteen Most Famous Numbers that are transcendental".
That's math for you.
"Transcendental" or even "irrational" isn't a vibesy category like "mysterious" or "beautiful", it's a hard mathematical property. So a headline that flatly labels a number "transcendental" while simultaneously admitting "not even proven" inside the article, looks more like a clickbait.
You clearly still don't understand. And to call the title "clickbait" is pretty silly.
No, my dudes. Just no. If it’s not proven transcendental, it’s not to be considered such.
The human-invented ones seem to be just the handful of dozens that man can come up with.
i to the power of i is one I had never heard of, but it's fascinating!
I think I read a book by this guy as a kid: it was an illustrated, mostly black and white book about Chaitin's constant, the halting problem, and various ways of counting over infinite sets.
Seems like Cantor would have been all over this.
Indeed. And by similar arguments, there are more uncomputable real numbers than computable real numbers. (And almost all transcendental numbers are uncomputable).
In fact we can tighten that to all irrational numbers are manufactured in a mathematical laboratory somewhere. You'll never come across a number in reality that you can prove is irrational.
That's not necessarily because all numbers in reality "really are" rational. It is because you can't get the infinite precision necessary to have a number "in hand" that is irrational. Even if you had a quadrillion digits of precision on some number in [0, 1] in the real universe you'd still not be able to prove that it isn't simply that number over a quadrillion no matter how much it may seem to resemble some other interesting irrational/transcendental/normal/whatever number. A quadrillion digits of precision is still a flat 0% of what you'd need to have a provably irrational number "in hand".
If a square with sides of rational (and non-zero) length can exist in reality, then the length of its diagonal is irrational. So which step along the way isn't possible in reality? Is the rational side length possible? Is the right angle possible?
Five continuous quantities related to each other, where by default when not specified we can safely assume real values, right? So we must have real values in reality, right?
But we know that gas is not continuous. The "real" ideal gas law that relates those quantities really needs you to input every gas molecule, every velocity of every gas molecule, every detail of each gas molecule, and if you really want to get precise, everything down to every neutrino passing through the volume. Such a real formula would need to include terms for things like the self-gravitation of the gas affecting all those parameters. We use a simple real-valued formula because it is good enough to capture what we're interested in. None of the five quantities in that formula "actually" exist, in the sense of being a single number that fully captures the exact details of what is going on. It's a model, not reality.
Similarly, all those things using trig and such are models, not reality.
But while true, those in some sense miss something even more important, which I alluded to strongly but will spell out clearly here: What would it mean to have a provably irrational value in hand? In the real universe? Not metaphorically, but some sort of real value fully in your hand, such that you fully and completely know it is an irrational value? Some measure of some quantity that you have to that detail? It means that if you tell me the value is X, but I challenge you that where you say the Graham's Number-th digit of your number is a 7, I say it is actually a 4, you can prove me wrong. Not by math; by measurement, by observation of the value that you have "in hand".
You can never gather that much information about any quantity in the real universe. You will always have finite information about it. Any such quantity will be indistinguishable from a rational number by any real test you could possibly run. You can never tell me with confidence that you have an irrational number in hand.
Another way of looking at it: Consider the Taylor expansion of the sine function. To be the transcendental function it is in math, it must use all the terms of the series. Any finite number of terms is still a polynomial, no matter how large. Now, again, I tell you that by the Graham's Number term, the universe is no longer using those terms. How do you prove me wrong by measurement?
All you can give me is that some value in hand sure does seem to bear a strong resemblance to this particular irrational value, pi or e perhaps, but that's all. You can't go out the infinite number of digits necessary to prove that you have exactly pi or e.
Many candidates for the Theory of Everything don't even have the infinite granularity in the universe in them necessary to have that detailed an object in reality, containing some sort of "smallest thing" in them and minimum granularity. Even the ones that do still have the Planck size limit that they don't claim to be able to meaningfully see beyond with real measurements.
If rationals exist in reality and you are comfortable with Graham’s number existing in reality (which has more digits in its base 10 representation than the number of particles in the observable universe) then why not irrationals? They are the completion of the rationals.
Unless you are a finitist.
For example, Graham's number is pretty famous but it's more of a historical artifact rather than a foundational building block. Other examples of non-foundational fame would be the famous integers 42, 69, and 420.
Love the image of mathematicians laboring over flasks and test tubes, mixing things and extracting numbers... would have far more explosions than day-to-day mathematics usually does...
The transcendental number whose value matters (being the second most important transcendental number after 2*pi = 6.283 ...) is ln 2 = 0.693 ... (and the value of its inverse log2(e), in order to avoid divisions).
Also for pi, there is no need to ever use it in computer applications, using only 2*pi everywhere is much simpler and 2*pi is the most important transcendental number, not pi.
Does a number not matter "in practice" even if it's used to compute a more commonly used constant? Very odd framing.
It is not used for computing the value of ln(2) or of log2(e), which are computed directly as limits of some convergent series.
As I have said, there is no reason whatsoever for knowing the value of e.
Moreover, it is almost never a good choice to use the exponential function or the hyperbolic logarithm function (a.k.a. natural logarithm, but it does not really deserve the name "natural").
For any numeric computations, it is preferable to use the exponential 2^x and the binary logarithm everywhere. With this choice, the constant ln 2 or its inverse appears in formulae that compute derivatives or integrals.
People are brainwashed in school into using the exponential e^x and the hyperbolic logarithm, because this choice was more convenient for symbolic computations done with pen on paper, like in the 19th century.
In reality, choosing to have the proportionality factor in the derivative formula be "1" instead of "ln 2" is a bad choice. The reason is that removing the constant from the derivative formula does not make it disappear; it moves it into the evaluation of the function, and in any application many more evaluations of the function must be done than computations of derivative or integral formulae.
The only case when using e^x may bring simplifications is in symbolic computations with complex exponentials and complex logarithms, which may be needed in the development of mathematical models for some linear systems that can be described by linear ordinary differential equations or linear partial differential equations. Even then, after the symbolic computation produces a mathematical model suitable for numeric computations, it is more efficient to convert all exponential or logarithmic functions to use only 2^x and binary logarithms.
I didn't understand immediately that you were talking about using values related to e in a computational context. But your comment about "brainwashing" seems a bit off. Are you saying that programmers bring e and ln with them into code when more effective constants exist for the same end? That's probably true. But brainwashing is far too strong, since things need to be taught in the correct order in math in order for each next topic to make sense. e really only comes in when learning derivative rules where it's explained "e is a number where when used as the base in an exponential function, that function's derivative is itself." Math class makes no pretense that you ought to use any of it to inform how you write code, so the brainwashing accusation seems off to me.
In calculations like compound financial interest, radioactive decay and population growth (and many others), e is either applied directly or derived implicitly.
> ... 2*pi is the most important transcendental number, not pi.
Gotta agree with this one.
In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x, which is why this is done frequently even by people who are not aware that the computational cost of 2^x is lower and its accuracy is greater.
In compound financial interest using 2^x would also be much more natural than the use of e^x, but in financial applications tradition is usually more important than any actual technical arguments.
That is only true in the special case of computing a half-life. In the general case, e^x is required. When computing a large number of cases and to avoid confusion, e^x is the only valid operator. This is particularly true in compound interest calculations, which would fall apart entirely without the presence of e^x and ln(x).
> In radioactive decay and population growth it is much simpler conceptually to use 2^x, not e^x
See above -- it's only valid if a specific, narrow question is being posed.
> In compound financial interest using 2^x would also be much more natural than the use of e^x
That is only true to answer a specific question: How much time to double a compounded value? For all other cases, e^x is a requirement.
If your position were correct, if 2^x were a suitable replacement, then Euler's number would never have been invented. But that is not reality.
The use of ln 2 for argument range reduction has nothing to do with half lives. It is needed in any computation of e^x or ln x, because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.
The argument range reduction can be avoided only if you know before evaluation that the argument is close enough to 0 for an exponential or to 1 for a logarithm, so that an approximation formula can be applied directly. For a general-purpose library function you cannot know this.
Also the use of 2^x instead of e^x for radioactive decay, population growth or financial interest is not at all limited to the narrow cases of doublings or halvings. Those happen when x is an integer in 2^x, but 2^x accepts any real value as argument. There is no difference in the domain of definition between 2^x and e^x.
The only difference between using 2^x and e^x in those 3 applications is in a different constant in the exponent, which has the easier to understand meaning of being the doubling or halving time, when using 2^x and a less obvious meaning when using e^x. In fact, only doubling or halving times are directly measured for radioactive decay or population growth. When you want to use e^x, you must divide the measured values by ln 2, an extra step that brings no advantage whatsoever, because it must be implicitly reversed during every subsequent exponential evaluation when the argument range reduction is computed.
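To make the point concrete, here's a tiny sketch (with made-up numbers) showing that the two parametrizations are literally the same function, just carrying ln 2 in different places:

    import math

    N0, t_half, t = 1000.0, 5730.0, 2000.0     # hypothetical carbon-14-style numbers

    via_base2 = N0 * 2 ** (-t / t_half)         # rate expressed directly as a half-life
    lam = math.log(2) / t_half                  # ln 2 reappears as the decay constant
    via_base_e = N0 * math.exp(-lam * t)

    print(via_base2, via_base_e)                # equal up to rounding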
That is a false statement.
> In fact, only doubling or halving times are directly measured for radioactive decay or population growth.
That is a false statement -- in population studies, as just one example, the logistic function (https://en.wikipedia.org/wiki/Logistic_function) tracks the effect of population growth over time as environmental limits take hold. This is a detailed model that forms a cornerstone of population environmental studies. To be valid, it absolutely requires the presence of e^x in one or another form.
> ... because the numbers are represented as binary numbers in computers and the functions are evaluated with approximation formulae that are valid only for a small range of input arguments.
That is a spectacularly false statement.
> There is no difference in the definition set between 2^x and e^x.
That is absolutely false, and trivially so.
> No, you did not try to understand what I have written.
On the contrary, I understood it perfectly. From a mathematical standpoint, 2^x cannot substitute for e^x, anywhere, ever. They're not interchangeable.
I hope no math students read this conversation and acquire a distorted idea of the very important role played by Euler's number in many applied mathematical fields.
The importance of e is that it's the natural base of exponents and logarithms, the one that makes an otherwise-present constant factor disappear. If you're using a different base b, you generally need to adjust by ln(b) or its reciprocal log_b(e), neither of which requires computing or using e itself (instead requiring a function call that's using minimax-generated polynomial coefficients for approximation).
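Concretely, the adjustment in question is just a change of base (a sketch; the helper names are mine, and the numeric value of e is never needed):

    import math

    def exp_base(b: float, x: float) -> float:
        return math.exp(x * math.log(b))        # b^x = e^(x * ln b)

    def log_base(b: float, x: float) -> float:
        return math.log(x) / math.log(b)        # log_b x = ln x / ln b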
The importance of π or 2π is that the natural periodicity of trigonometric functions is 2π or π (for tan/cot). If you're using a different period, you consequently need to multiply or divide by 2π, which means you actually have to use the value of the constant, as opposed to calling a library function with the constant itself.
Nevertheless, I would say that despite the fact that you would directly use e only relatively rarely, it is still the more important constant.
Except for this conversion from directly measured diameters, one rarely cares about hemicycles, but about cycles.
The trigonometric functions with arguments measured in cycles are more accurate and faster to compute. The trigonometric functions with arguments measured in radians have simpler formulae for derivatives and primitives. The conversion factor between radians and cycles is 2Pi, which leads to its ubiquity.
While students are taught to use the trigonometric functions with arguments measured in radians, because they are more convenient for some symbolic computations, any angle that is directly measured is never measured in radians, but in fractions of a cycle. The same is true for any angle used by an output actuator. The methods of measurement with the highest precision for any physical quantity eventually measure some phase angle in cycles. Even the evaluations of the trigonometric functions with angles measured in radians must use an internal conversion between radians and cycles, for argument range reduction.
So the use of the 2*Pi constant is unavoidable in almost any modern equipment or computer program, even if many of the uses are implicit and not obvious for whoever does not know the detailed implementations of the standard libraries and of the logic hardware.
If trigonometric functions with arguments measured in radians are used anywhere, then conversions between radians and cycles must exist, either explicit conversions or implicit conversions.
If only trigonometric functions with arguments measured in cycles are used, then some multiplications with 2Pi or its inverse appear where derivatives or primitives are computed.
In any application that uses trigonometric functions millions of multiplications with 2Pi may be done every second. In contrast, a multiplication by Pi could be needed only at most at the rate at which one could measure the diameters of some physical objects for which there would be a reason to want to know their circumference.
Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases, and for the rare case of computing a circumference from the diameter, to use Pi_2*D/2.
Please expand on this. Surely if that were the case, numerical implementations would first convert a radian input to cycles before doing whatever polynomial/rational approximation they like, but I've never seen one like that.
> Because Pi is needed so much more rarely, it is simpler to just have a constant Pi_2 to be used in most cases and for the rare case of computing a circumference from the diameter to use Pi_2*D/2,
Well of course, that's why you have (in C) M_PI, M_PI2, and so on (and in some dialects M_2PI).
Then you have not examined the complete implementation of the function.
The polynomial/rational approximation mentioned by you is valid only for a small range of the possible input arguments.
Because of this, the implementation of any exponential/logarithmic/trigonometric function starts by an argument range reduction, which produces a value inside the range of validity of the approximating expression, by exploiting some properties of the function that must be computed.
In the case of trigonometric functions, the argument must be reduced first to a value smaller than a cycle, which is equivalent to a conversion from radians to cycles and then back to radians. This reduction, and the rounding errors associated with it, is avoided when the function uses arguments already expressed in cycles, so that the reduction is done exactly by just taking the fractional part of the argument.
Then the symmetry properties of the specific trigonometric function are used to further reduce the range of the argument to one fourth or one eighth of a cycle. When the argument had been expressed in cycles this is also an exact operation, otherwise it can also introduce rounding errors, because adding or subtracting Pi or its submultiples cannot be done exactly.
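A minimal sketch of that scheme (my own illustration, not a production routine): sine with the argument in cycles, reduced by taking the fractional part and folded by symmetry before the small-range evaluation, which here is simply delegated to the library:

    import math

    def sin_cycles(x: float) -> float:
        """sin of x measured in cycles, i.e. sin(2*pi*x)."""
        x = x - math.floor(x)            # reduce to [0, 1) by taking the fractional part
        if x >= 0.5:                     # sine is odd about half a cycle
            return -sin_cycles(x - 0.5)
        if x > 0.25:                     # sine is symmetric about a quarter cycle
            x = 0.5 - x
        # small-range evaluation on [0, 1/4]; a real implementation would use a
        # polynomial approximation here, the library call just stands in for it
        return math.sin(2.0 * math.pi * x)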
I was assuming that as part of the table stakes of the conversation.
Let's look at something basic and traditional like Cephes: https://github.com/jeremybarnes/cephes/blob/master/cmath/sin...
We start off with a range reduction to [0, pi/4] (presumably this would be [0, 1/8] in cycles), and then the polynomial happens.
If cycles really were that much better, why isn't this implemented as starting with a conversion to cycles, then removal of the integer part, and then a division by 8, followed by whatever the appropriate polynomial/rational function is?
> adding or subtracting Pi or its submultiples cannot be done exactly.
I was also assuming that we've been talking about floating point this whole time.
Applications such as planes flying, sending data through wires, medical imaging (or any of a million different direct applications) do not count, I assume?
Your naivety about what makes the world function is not an argument for something being useless. The number appearing in one of the most important algorithms should give you a hint about how relevant it is https://en.wikipedia.org/wiki/Fast_Fourier_transform
None of the applications mentioned by you need to use the exponential e^x or the natural logarithm, all can be done using the exponential 2^x and the binary logarithm. The use of the less efficient and less accurate functions remains widespread only because of bad habits learned in school, due to the huge inertia that affects the content of school textbooks.
The fast Fourier transform is written as if it would use e^x, but that has been misleading for you, because it uses only trigonometric functions, so it is irrelevant for discussing whether "e" or "ln 2" is more important, because neither of these 2 transcendental constants is used in the Fast Fourier Transform.
Moreover, FFT is an example for the fact that it is better to use trigonometric functions with the arguments measured in cycles, i.e. functions of 2*Pi*x, instead of the worse functions with arguments measured in radians, because with arguments expressed in cycles the FFT formulae become simpler, all the multiplicative constants explicitly or implicitly involved in the FFT direct and inverse computations being eliminated.
A function like cos(2*Pi*x) is simpler than cos(x), despite what the conventional notation implies, because the former does not contain any multiplication with 2*Pi, but the latter contains a multiplication with the inverse of 2*Pi, for argument range reduction.
It's true that the FFT does not use either of the transcendental numbers e or ln(2), but that's because the FFT does not use transcendental numbers at all! (Roots of unity, sure, but those are algebraic)
> all the multiplicative constants explicitly or implicitly involved in the FFT direct and inverse computations being eliminated.
Doesn't that basically get you a Hadamard transform?
The FFT formulae when written using the function e^ix contain an explicit division by 2Pi which must be done either in the direct FFT or in the inverse FFT. It is more logical to put the constant in the direct transform, but despite this most implementations put the constant in the inverse transform, presumably because a few applications use only the direct transform, not also the inverse transform.
Some implementations divide by sqrt(2Pi) in both directions, to enable the use of the same function for both direct and inverse FFT.
Besides this explicit use of 2Pi, there is an implicit division by 2Pi in every evaluation of e^ix, for argument range reduction.
If instead of using e-based exponentials one uses trigonometric functions with arguments measured in cycles, not in radians, then both the explicit use of 2Pi and its implicit uses are eliminated. The explicit use of 2Pi comes from computing an average value over a period, by integration followed by division by the period length, so when the period is 1 the constant disappears. When the function argument is measured in cycles, argument range reduction no longer needs a multiplication with the inverse of 2Pi, it is done by just taking the fractional part of the argument.
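As a toy sketch of what that looks like (a plain O(N^2) DFT of my own, not an FFT), with the phase carried as a fraction of a cycle:

    import math

    def cis_cycles(t: float) -> complex:
        """The unit-circle point after t cycles, i.e. e^(2*pi*i*t)."""
        t -= math.floor(t)                              # reduction mod one cycle
        return complex(math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

    def dft_cycles(x):
        """Plain O(N^2) DFT; the phase -k*n/N is naturally a number of cycles."""
        N = len(x)
        return [sum(x[n] * cis_cycles(-k * n / N) for n in range(N)) for k in range(N)]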
I am sorry, but comments like this are caused by the naivety of not knowing a single thing about mathematics.
Do you not understand that mathematics is not just about implementation, but about forming models of reality? The idea of trying to model a physical system while pretending that e.g. the solution of the differential equation x'=x does not matter is just idiotic.
The idea that just because some implementation can avoid a certain constant, that this constant is irrelevant is immensely dumb and tells me that you lack basic mathematical education.
e^(ix) = cos(x) + i*sin(x). In particular e^(i*pi) = -1
(1 + 1/n)^n → e as n → ∞. This is part of what makes e such a uniquely useful exponent base.
Not applied enough? What about:
d/dx e^x = e^x. This makes e show up in the solutions of all kinds of differential equations, which are used in physics, engineering, chemistry...
The Fourier transform is defined as the integral of e^(i*omega*t) f(t) dt.
And you can't just get rid of e by changing base, because you would have to use log base e to do so.
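A quick numerical illustration of two of these facts (my own sketch):

    import math

    n = 10 ** 6
    print((1 + 1 / n) ** n, math.e)            # limit definition: (1 + 1/n)^n -> e

    x, h = 1.3, 1e-6
    print((math.exp(x + h) - math.exp(x)) / h, math.exp(x))   # d/dx e^x = e^x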
Edit: how do you escape equations here? Lots of the text in my comment is getting formatted as italics.
Cauchy path integration feels like a cheat code once you fully imbibe it.
Got me through many problems that involves seemingly impossible to memorize identities and re-derivation of complex relations become essentially trivial
However, whenever your symbolic computation produces a mathematical model that will be used for numeric computations, i.e. in a computer program, it is more efficient to replace all e^x exponentials and natural logarithms with 2^x exponentials and binary logarithms, instead of retaining the complex exponentials and logarithms and evaluating them directly.
At the same time, it is also preferable to replace the trigonometric functions of arguments measured in radians with trigonometric functions of arguments measured in cycles (i.e. functions of 2*Pi*x).
This replacement eliminates the computations needed for argument range reduction that otherwise have to be made at each function evaluation, wasting time and reducing the accuracy of the results.
Just escape any asterisks in your post that you want rendered as asterisks: this: \* gives: *.
Moreover, you can replace any use of e^x with the use of 2^x, which inserts ln(2) constants in various places, (but removes ln 2 from the evaluations of exponentials and logarithms, which results in a net gain).
If you use only 2^x, you must know that its derivative is ln(2) * 2^x, and knowing this is enough to get rid of "e" anywhere. Even in derivative formulae, in actual applications most of the multiplications with ln 2 can be absorbed into multiplications with other constants, since you normally do not differentiate bare 2^x expressions but 2^(a*x), where you can compute ln(2)*a at compile time.
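In code, that absorption looks like this (a sketch with a made-up constant):

    import math

    a = 0.17                         # some application-specific rate (hypothetical)
    A = a * math.log(2)              # ln 2 folded into the constant, computed once

    def f(x: float) -> float:
        return 2 ** (a * x)

    def dfdx(x: float) -> float:     # d/dx 2^(a*x) = (a * ln 2) * 2^(a*x)
        return A * 2 ** (a * x)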
You start with the formula for the exponential of an imaginary argument, but there the use of "e" is just a conventional notation. The transcendental number "e" is never used in the evaluation of that formula and also none of the numbers produced by computing an exponential or logarithm of real numbers are involved in that formula.
The meaning of that formula is that if you take the expansion series of the exponential function and you replace in it the argument with an imaginary argument you obtain the expansion series for the corresponding trigonometric functions. The number "e" is nowhere involved in this.
Moreover, I consider that it is far more useful to write that formula in a different way, without any "e":
1^x = cos(2Pi*x) + i * sin(2Pi*x)
This gives the relation between the trigonometric functions with arguments measured in cycles and the unary exponential, whose argument is a real number and whose value is a complex number of absolute value equal to 1, and which describes the unit circle in the complex plane, for increasing arguments.
This formula appears more complex only because of using the traditional notation. If you call cos1 and sin1 the functions of period 1, then the formula becomes:
1^x = cos1(x) + i * sin1(x)
The unary exponential may appear weirder, but only because people are accustomed from school to the exponential of imaginary arguments instead of it. Neither of these two functions is weirder than the other, and the use of the unary exponential is frequently simpler than that of the exponential of imaginary arguments, while also being more accurate (no rounding errors from argument range reduction) and faster to compute.
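For what it's worth, here is the same map written both ways (my sketch; cos1/sin1 are the period-1 functions named above):

    import cmath, math

    def cos1(x: float) -> float: return math.cos(2 * math.pi * x)
    def sin1(x: float) -> float: return math.sin(2 * math.pi * x)

    def unary_exp(x: float) -> complex:        # the "1^x" described above
        return complex(cos1(x), sin1(x))

    print(unary_exp(0.25), cmath.exp(2j * math.pi * 0.25))   # both are approximately 1j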
With this substitution, some formulae become simpler and others become more complicated, but, when also considering the cost of the function evaluations, an overall greater simplicity is achieved.
In comparison with the "e" based exponentials, the binary exponential and the unary exponential and their inverses have the advantage that there are no rounding errors caused by argument range reduction, so they are preferable especially when the exponents can be very big or very small, while the "e" based exponentials can work fine for exponents guaranteed to be close to 0.