It feels a bit like the article's trying to extend some legitimate debate about whether fixing i versus -i is natural to push this other definition as an equal contender, but there's hardly any support offered. I expect the last-place 28% poll showing, if it reflects serious mathematicians at all, comes from those who treat the topological structure as a given or didn't think much about the implications of leaving it out.
We now have a base element for complex numbers, derived uniquely from the expansion of <R,0,1,+,*> into its own natural closure. I.e. we are simply exploring the natural closure of <R,0,1,+,*>.
We don't have to ask if y = +y, because it does by definition of multiplication by +1, i.e. { (+1)z = z for all z } because that +1 is the multiplicative identity. No choice is necessary.
The square root of -1 involves a choice, but our definition of y did not require a choice.
As the "evidence" piles up, in further mathematics, physics, and the interactions of the two, I still never got to the point at the core where I thought complex numbers were a certain fundamental concept, or just a convenient tool for expressing and calculating a variety of things. It's more than just a coincidence, for sure, but the philosophical part of my mind is not at ease with it.
I doubt anyone could make a reply to this comment that would make me feel any better about it. Indeed, I believe real numbers to be completely natural, but far greater mathematicians than I found them objectionable only a hundred years ago, and demonstrated that mathematics is rich and nuanced even when you assume that they don't exist in the form we think of them today.
Real numbers function as magnitudes or objects, while complex numbers function as coordinatizations: a way of packaging structure that exists independently of them (e.g. the rotations in SO(2) together with scaling). They are bookkeeping (a la double-entry accounting), not money.
Take R as an ordered field with its usual topology and ask for a finite-dimensional, commutative, unital R-algebra that is algebraically closed and admits a compatible notion of differentiation with reasonable spectral behavior. You essentially land in C, up to isomorphism. This is not an accident, but a consequence of how algebraic closure, local analyticity, and linearization interact. Attempts to remain over R tend to externalize the complexity rather than eliminate it, for example by passing to real Jordan forms, doubling dimensions, or encoding rotations as special cases rather than generic elements.
More telling is the rigidity of holomorphicity. The Cauchy-Riemann equations are not a decorative constraint; they encode the compatibility between the algebra structure and the underlying real geometry. The result is that analyticity becomes a global condition rather than a local one, with consequences like identity theorems and strong maximum principles that have no honest analogue over R.
I’m also skeptical of treating the reals as categorically more natural. R is already a completion, already non-algebraic, already defined via exclusion of infinitesimals. In practice, many constructions over R that are taken to be primitive become functorial or even canonical only after base change to C.
So while one can certainly regard C as a technical device, it behaves like a fixed point: impose enough regularity, closure, and stability requirements, and the theory reconstructs it whether you intend to or not. That does not make it metaphysically fundamental, but it does make it mathematically hard to avoid without paying a real structural cost.
Most real numbers are not even computable. Doesn't that give you pause?
When you divide 2 collinear 2-dimensional vectors, their quotient is a real number a.k.a. scalar. When the vectors are not collinear, then the quotient is a complex number.
Multiplying a 2-dimensional vector with a complex number changes both its magnitude and its direction. Multiplying by +i rotates a vector by a right angle. Multiplying by -i does the same thing but in the opposite sense of rotation, hence the difference between them, which is the difference between clockwise and counterclockwise. Rotating twice by a right angle arrives in the opposite direction, regardless of the sense of rotation, therefore i*i = (-i)*(-i) = -1.
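To make the quarter-turn picture concrete, here's a tiny check using Python's built-in complex type (my sketch, not from the comment above; the sample vector 3+4i is arbitrary):

```python
z = 3 + 4j

# Multiplying by i rotates 90 degrees counterclockwise:
assert z * 1j == -4 + 3j

# Multiplying by -i rotates 90 degrees clockwise:
assert z * -1j == 4 - 3j

# Two quarter turns, in either sense, give a half turn:
assert 1j * 1j == (-1j) * (-1j) == -1

# The magnitude is unchanged by either rotation:
assert abs(z * 1j) == abs(z) == 5.0
```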
Both 2-dimensional vectors and complex numbers are included in the 2-dimensional geometric algebra, whose members have 2^2 = 4 components, which are the 2 components of a 2-dimensional vector together with the 2 components of a complex number. Unlike the complex numbers, the 2-dimensional vectors are not a field, because if you multiply 2 vectors the result is not a vector. All the properties of complex numbers can be deduced from those of the 2-dimensional vectors, if the complex numbers are defined as quotients, much in the same way that the properties of rational numbers are deduced from the properties of integers.
A relationship similar to that between 2-dimensional vectors and complex numbers exists between 3-dimensional vectors and quaternions. Unfortunately the discoverer of quaternions, Hamilton, was confused by the fact that both vectors and quaternions have multiple components, and he believed that vectors and quaternions are the same thing. In reality, vectors and quaternions are distinct things, and the operations that can be done with them are very different. This confusion held back the correct use of quaternions and vectors in physics for many years during the 19th century (as did the confusion between "polar" vectors and "axial" vectors, a.k.a. pseudovectors).
For complex numbers my gut feeling is yes, they do.
Which makes me wonder if complex numbers that show up in physics are a sign there are dimensions we can’t or haven’t detected.
I saw a demo one time of a projection of a kind of fractal into an additional dimension, as well as projections of Sierpinski cubes into two dimensions. Both blew my mind.
I believe even negative numbers had their detractors
They originally arose as a tool, but complex numbers are fundamental to quantum physics. The wave function is complex, and the Schrödinger equation does not make sense without them. They are the best description of reality we have.
So in our everyday reality I think -1 and i exist the same way. I also think that complex numbers are fundamental/central in math, and in our world. They just have so many properties and connections to everything.
In my view, that isn’t even true for nonnegative integers. What’s the physical representation of the relatively tiny (compared to ‘most integers’) Graham’s number (https://en.wikipedia.org/wiki/Graham's_number)?
Back to the reals: in your view, do reals that cannot be computed have good physical representations?
I think these questions mostly only matter when one tries to understand their own relation to these concepts, as GP asked.
If it doesn't differ, you are in the good company of great minds who have been unable to settle this over thousands of years and should therefore feel better!
More at SEP:
Almost every other intuition, application, and quirk of them just pops right out of that statement. The extensions to the quaternions, etc. all end up described by a single consistent algebra.
It’s as if computer graphics was the first and only application of vector and matrix algebra and people kept writing articles about “what makes vectors of three real numbers so special?” while being blithely unaware of the vast space that they’re a tiny subspace of.
A better way to understand my point is: we need mental gymnastics to convert problems into equations. The imaginary unit, just like numbers, is a by-product of trying to fit problems onto paper. A notable example is Schrodinger's equation.
(I have a math degree, so I don't have any issues with C, but this is the kind of question that would have troubled me in high school.)
Complex numbers offer that resolution.
For example, reflections and chiral chemical structures. Rotations as well.
It turns out all things that rotate behave the same, which is what the complex numbers can describe.
Polynomial equations happen to be something where a rotation in an orthogonal dimension leaves new answers.
That is how they started, but mathematics becomes remarkably "better" and more consistent with complex numbers.
As you say, The Fundamental Theorem of Algebra relies on complex numbers.
Cauchy's Integral Theorem (and Residue Theorem) is a beautiful complex-only result.
As is the Maximum Modulus Principle.
The Open Mapping Theorem is true for complex functions, not real functions.
---
Are complex numbers really worse than real numbers? Transcendentals? Hippasus was drowned for the irrationals.
I'm not sure any numbers outside the naturals exist. And maybe not even those.
First, let's try differential equations, which are also the point of calculus:
Idea 1: The general study of PDEs uses Newton(-Kantorovich)'s method, which leads to solving only the linear PDEs,
which can be held to have constant coefficients over small regions, which can be made into homogeneous PDEs,
which are often of order 2, which are either equivalent to Laplace's equation, the heat equation,
or the wave equation. Solutions to Laplace's equation in 2D are the same as holomorphic functions.
So complex numbers again.
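To see the Laplace/holomorphic link numerically, here's a throwaway sketch of my own (the test function f(z) = z^3 and the finite-difference Laplacian are arbitrary choices, not from the comment): the real part of a holomorphic function is harmonic.

```python
def u(x, y):
    # u(x, y) = Re(z^3) = x^3 - 3*x*y^2, the real part of a holomorphic function
    return ((x + 1j * y) ** 3).real

def laplacian(f, x, y, h=1e-4):
    # standard 5-point finite-difference approximation of u_xx + u_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

# u solves Laplace's equation, so the discrete Laplacian is ~0 everywhere
assert abs(laplacian(u, 0.8, -0.3)) < 1e-3
assert abs(laplacian(u, 1.2, 0.5)) < 1e-3
```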
Now algebraic closure, but better: Idea 2: Infinitary algebraic closure. Algebraic closure can be interpreted as saying that any rational function can be factorised into linear factors.
We can think of the Mittag-Leffler Theorem and Weierstrass Factorisation Theorem as asserting that this is true also for meromorphic functions,
which behave like rational functions in some infinitary sense. So the algebraic closure property of C holds in an infinitary sense as well.
This makes sense since C has a natural metric and a nice topology.
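As a concrete instance of the infinitary factorisation, one can check Euler's product for sine, sin(pi*z) = pi*z * prod(1 - z^2/n^2), numerically. This sketch truncates the product; the term count is an arbitrary choice of mine:

```python
import math

def sin_product(x, terms=10000):
    # truncated Euler product: pi*x * prod_{n=1}^{terms} (1 - x^2/n^2)
    p = math.pi * x
    for n in range(1, terms + 1):
        p *= 1 - x * x / (n * n)
    return p

x = 0.3
# the truncated product is already close to sin(pi*x)
assert abs(sin_product(x) - math.sin(math.pi * x)) < 1e-4
```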
Next, general theory of fields: Idea 3: Fields of characteristic 0. Every algebraically closed field of characteristic 0 is isomorphic to R[√-1] for some real-closed field R.
The Tarski-Seidenberg Theorem says that every FOL statement featuring only the functions {+, -, ×, ÷} which is true over the reals is
also true over every real-closed field.
I think maybe differential geometry can provide some help here. Idea 4: Conformal geometry in 2D. A conformal manifold in 2D is locally biholomorphic to the unit disk in the complex numbers.
Idea 5: This one I'm not 100% sure about. Take a smooth manifold M with a smoothly varying bilinear form B in T*M ⊗ T*M. When B is broken into its symmetric part and skew-symmetric part, if we assume that both parts are never zero, B can then be seen as an almost complex structure, which in turn naturally identifies the manifold M as one over C.

I showed various colleagues. Each one would ask me to demonstrate the equivalence to their preferred presentation, then assure me "nothing to see here, move along!" and that I should instead stick to their convention.
Then I met with Bill Thurston, the most influential topologist of our lifetimes. He had me quickly describe the equivalence between my form and every other known form, effectively adding my node to a complete graph of equivalences he had in his muscle memory. He then suggested some generalizations, and proposed that circle packings would prove to be important to me.
Some mathematicians are smart enough to see no distinction between any of the ways to describe the essential structure of a mathematical object. They see the object.
This is a very interesting question, and a great motivator for Galois theory, kind of like a Zen koan. (e.g. "What is the sound of one hand clapping?")
But the question is inherently imprecise. As soon as you make a precise question out of it, that question can be answered trivially.
One of the roots is 1, choosing either adjacent one as a privileged group generator means choosing whether to draw the same complex plane clockwise or counterclockwise.
1) Exactly one C
2) Exactly two isomorphic Cs
3) Infinitely many isomorphic Cs
It's not really the question of whether i and -i are the same or not. It's the question of whether this question arises at all and in which form.
Haven’t thought it through so I’m quite possibly wrong but it seems to me this implies that in such a situation you can’t have a coordinate view. How can you have two indistinguishable views of something while being able to pick one view?
anyhow. I'm a bit of an odd one in that I have no problems with imaginary numbers but the reals always seemed a bit unreal to me. that's the real controversy, actually. you can start looking up definable numbers and constructivist mathematics, but that gets to be more philosophy than maths imho.
That's not the interesting part. The interesting part is that I thought everyone is the same, like me.
It was a big and surprising revelation that people love counting or algebra in just the same way I feel about geometry (not the finite kind) and feel awkward in the kind of mathematics that I like.
It's part of the reason I don't at all get the hate that school Calculus gets. It's so intuitive and beautifully geometric, what's not to like... that's usually my first reaction. Usually followed by disappointment and sadness -- oh no, they are contemplating throwing such a beautiful part away.
The obsession with rigor that later developed -- while necessary -- is really an "advanced topic" that shouldn't displace learning the intuition and big picture concepts. I think math up through high school should concentrate on the latter, while still being honest about the hand-waving when it happens.
calculus works... because it was almost designed for Mechanics. While the machine is getting input, you have output. When it has finished getting input, all the output you got yields some value, yes, but limits are best understood not through the result but through the process (what the functions do).
But instead calculus is taught from fundamentals, building up from sequences. And a lot of complexity and hate comes from all those "technical" theorems that you need to make that jump from sequences to functions. E.g. things like "you can pick a converging subsequence from any bounded sequence".
In Maths classes, we started with functions. Functions as list of pairs, functions defined by algebraic expressions, functions plotted on graph papers and after that limits. Sequences were peripherally treated, just so that limits made sense.
Simultaneously, in Physics classes we were being taught using infinitesimals, with a callback that "you will see this done more formally in your maths classes, but for intuition, infinitesimals will do for now".
(attributed to Jerry Bona)
There's no "intend to". The complex numbers are what they are regardless of us; this isn't quantum mechanics where the presence of an observer somehow changes things.
(Yes, mathematicians really use it. It makes parity a simpler polynomial than the normal assignment).
To fix the coordinate structure of the complex numbers (a,b) is in effect to have made a choice of a particular i, and this is one of the perspectives discussed in the essay. But it is not the only perspective, since with that perspective complex conjugation should not count as an automorphism, as it doesn't respect the choice of i.
For instance: if you forget the order on Q (which you can do without it ceasing to be a field), there is no algebraic (no order-dependent) way to distinguish between the two algebraic solutions of x^2 = 2. You can swap them with each other and you will not notice anything (again, assuming you "forget" the order structure).
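A quick sketch of the swap argument, representing a + b*sqrt(2) as a pair of rationals (the encoding and test values are my own, nothing canonical about them): the map sending sqrt(2) to -sqrt(2) preserves the arithmetic, so no order-free statement can tell the two roots apart.

```python
from fractions import Fraction as F

def mul(p, q):
    # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
    (a, b), (c, d) = p, q
    return (a * c + 2 * b * d, a * d + b * c)

def sigma(p):
    # the swap sqrt(2) -> -sqrt(2)
    a, b = p
    return (a, -b)

x = (F(1), F(3))    # 1 + 3*sqrt(2)
y = (F(2), F(-5))   # 2 - 5*sqrt(2)

# sigma is a field automorphism: it commutes with multiplication
assert sigma(mul(x, y)) == mul(sigma(x), sigma(y))
```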
But over the reals R, this polynomial is not irreducible. There we find that some pairs of roots have the same real value, and others don't. This leads to the idea of a "complex conjugate pair". And so some pairs of roots of the original polynomial are now different than other pairs.
That notion of a "complex conjugate pair of roots" is therefore not a purely algebraic concept. If you're trying to understand Galois theory, you have to forget about it. Because it will trip up your intuition and mislead you. But in other contexts that is a very meaningful and important idea.
And so we find that we don't just care about what concepts could be understood. We also care about what concepts we're currently choosing to ignore!
Basically C comes up in the chain R \subset C \subset H (quaternions) \subset O (octonions) by the so-called Cayley-Dickson construction. There is a lot of structure.
This disagreement seems over the heads of non-mathematicians, including those (like me) with familiarity with complex numbers.
The disagreement is on how much detail of the fine structure we care about. It is roughly analogous to asking whether we should care more about how an ellipse is like a circle, or how they are different. One person might care about the rigid definition and declare them to be different. Another notices that if you look at a circle at an angle, you get an ellipse. And then concludes that they are basically the same thing.
This seems like a silly thing to argue about. And it is.
However in different branches of mathematics, people care about different kinds of mathematical structure. And if you view the complex numbers through the lens of the kind of structure that you pay attention to, then ignore the parts that you aren't paying attention to, your notion of what is "basically the same as the complex numbers" changes. Just like how one of the two people previously viewed an ellipse as basically the same as a circle, because you get one from the other just by looking from an angle.
Note that each mathematician here can see the points that the other mathematicians are making. It is just that some points seem more important to you than others. And that importance is tied to what branch of mathematics you are studying.
This is really a disagreement about how to construct the complex numbers from more-fundamental objects. And the question is whether those constructions are equivalent. The author argues that two of those constructions are equivalent to each other, but others are not. A big crux of the issue, which is approachable to non-mathematicians, is whether i and -i are fundamentally different, because arithmetically you can swap i with -i in all your equations and get the same result.
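The swap is just complex conjugation, and it's easy to check numerically that conjugation preserves the arithmetic (a throwaway sketch; the sample values are arbitrary):

```python
z, w = 2 + 3j, -1 + 0.5j

# conjugation commutes with addition and multiplication
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()

# but it is not the identity: it literally swaps i with -i
assert (1j).conjugate() == -1j
```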
Functions are defined as relations on two sets such that each element of the first set is in relation with at most one element of the second set. And suddenly we abandon that very definition without ever changing the notation! Complex logarithms suddenly have infinitely many values! And yet we say complex expressions are equal to something.
Madness.
This desire to absolutely pick one, when from the purely mathematical perspective they're all equal, is both ugly and harmful (as in, it complicates things down the line).
But couldn't we just switch the nomenclature? Instead of the oxymoronic concept of a "multivalued function", we could just call it a "relation of complex equivalence" or something of the sort.
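For what it's worth, Python's cmath makes the many-valued behaviour easy to see: the library returns only the principal branch, but every branch shifted by 2*pi*i*k exponentiates back to the same input (a quick sketch; the sample point is arbitrary):

```python
import cmath

z = -1 + 1j
principal = cmath.log(z)   # the principal branch only

# every value principal + 2*pi*i*k is "a logarithm" of z
for k in (-2, -1, 0, 1, 2):
    branch = principal + 2j * cmath.pi * k
    assert abs(cmath.exp(branch) - z) < 1e-9
```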
> Starting 2022, I am now the John Cardinal O’Hara Professor of Logic at the University of Notre Dame.
> From 2018 to 2022, I was Professor of Logic at Oxford University and the Sir Peter Strawson Fellow at University College Oxford.
Also interesting:
> I am active on MathOverflow, and my contributions there (see my profile) have earned the top-rated reputation score.
I feel like the problem is that we just assume e^(pi*i) = -1 as a given, which makes i "feel" like a number, which gives some validity to other interpretations. But I would argue that that equation is not actually valid. It arises from the Taylor-series correspondence between e^x, sin and cos, but a Taylor series is simply an approximation of a function obtained by matching its derivatives around a certain point, namely x=0. And just because you take 2 functions and see that their approximations around a certain point are equal doesn't mean that the functions are equal. Even more so, that definition completely bypasses what it means to take derivatives in the imaginary plane.

If you try to prove this any other way besides Taylor-series expansion, you really can't, because the concept of taking something to the power of an "imaginary value" doesn't really have any ties to other definitions.

As such, there is nothing really special about e itself either. The only reason it's in there is a pattern artifact in math: e^x's derivative is itself, while cos and sin follow cyclic patterns. If you were to replace e with any other number, note that anything you ever want to do with complex numbers would work out identically: you don't really use the value of e anywhere; all you really care about is r and theta.
So if you drop the assumption that i is a number and just treat i as an attribute of a number, like a negative sign, complex numbers are basically just 2d numbers written in a special way. And of course, the rotations are easily extended into 3d space through quaternions, which use i, j and k in much the same way.
Not sure I follow you here... The special thing about e is that it's self-derivative. The other exponential bases, while essentially the same in their "growth", have derivatives with an extra factor. I assume you know e is special in that sense, so I'm unclear what you're arguing?
On a similar note, why insist that "i" (or a negative, for that matter) is an "attribute" of a number rather than an extension of the concept of number? In one sense, this is just a definitional choice, so I don't think either conception is right or wrong. But I'm still not getting your preference for the attribute perspective. If anything, especially in the case of negative numbers, it seems less elegant than just allowing the negatives to be numbers?
The point of contention that leads to 3 interpretations is whether you assume i acts like a number. My argument is that people generally answer yes because of Euler's identity (which is often held up as an example of mathematical beauty).
My argument is that i does not act like a number, it acts more like an operator. And with i being an operator, C is not really a thing.
But then in India we discovered that it can really participate with the other bona fide numbers as a first-class citizen of numbers.
It is no longer a placeholder but can be an argument of the binary functions PLUS, MINUS, MULTIPLY, and can also be the result of these functions.
With i we have a similar observation: it can indeed be allowed as a first-class citizen as a number. Addition and multiplication can accept it in their arguments as well as in their results. It's a number, just a different kind.
Your whole point about Taylor series is also wrong, as Taylor series are not approximations: for the relevant functions here (e^x, sin x, cos x) they are actually equal to the original function in the infinite limit. So there is no approximation to be talked about, and no problem in identifying these functions with their Taylor-series expansions.
I'd also note that there is no need to use Taylor series to prove Euler's formula. Other series that converge to e^x,cos x, sin x can also get you there.
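Here's a quick numeric sketch of that point: the partial sums of the exponential series at i*x really do converge to cos x + i sin x, with the error shrinking as terms are added (the sample x and term counts are arbitrary choices of mine):

```python
import math

def exp_partial(z, n):
    # sum of the first n terms of the series for e^z: z^k / k!
    s, term = 0, 1
    for k in range(n):
        s += term
        term *= z / (k + 1)
    return s

x = 1.7
target = complex(math.cos(x), math.sin(x))   # cos x + i sin x

err10 = abs(exp_partial(1j * x, 10) - target)
err20 = abs(exp_partial(1j * x, 20) - target)

# more terms, smaller error: equality in the limit, not an approximation artifact
assert err20 < err10 < 1e-3
```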
The whole idea of an imaginary number is that it's basically an extension of negative numbers in concept. When you have a negative number, you essentially have scaling plus an attribute which defines direction. When you multiply two negative attributes, you get a positive number, which is a rotation by 180 degrees. Imaginary numbers extend this concept to continuous rotation that is not limited to 180 degrees.

With just i, you get rotations in the x/y plane. When you multiply 1 by i you get a 90-degree rotation to 1i. Multiply by i again, you get another 90-degree rotation to -1. And so on. You can do this in x/y/z with i and j, and you can do this in 4 dimensions with i, j and k, like quaternions do, using the extra dimension to get rid of gimbal lock in vehicle control (where, pointed straight up, yaw and roll are identical).

The fact that i maps to the sqrt of -1 is basically just part of this definition: you are using multiplication to express rotations, so when you ask what the sqrt of -1 is, you are asking which 2 identical numbers create a rotation of 180 degrees, and the answer is i times i.

Note that the definition also very much assumes that you are only using i, i.e. analogous to having only the x/y plane. If you are working within x/y/z space and have i and j, to get to -1 you can rotate through the x/y plane or the x/z plane. So the sqrt of -1 can mean either "sqrt for i" or "sqrt for j", and the answer would be either i or j; both would be valid. So you pretty much have to specify the plane of rotation when you ask for a square root.

Note also that you can define i to be a rotation of less than 90 degrees, say 60 degrees, and everything would still be consistent. In that case the cube root of -1 would be i, but the square root of -1 would not be i; it would be a complex number with real and imaginary parts.

The thing to understand about math is that under the hood, it's pretty much objects and operations. A lot of times you will have conflicts where doing an operation on a particular object is undefined; for example, there are functions that asymptotically approach zero but are never equal to it. So instead, you have to form other rules or append other systems to existing systems, which all just means you start with a definition. Anything that arises from that definition is not a universal truth of the world, but simply tools that help you deal with the inconsistencies.
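For the quaternion extension mentioned above, here is a minimal sketch of the Hamilton product (assuming the usual (w, x, y, z) component convention; none of this is from the comment itself), showing i^2 = j^2 = k^2 = -1 and the non-commutativity:

```python
def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z) = w + x*i + y*j + z*k
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# each unit squares to -1
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one

# i*j = k, but j*i = -k: the price of the third dimension is commutativity
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)
```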
There's more to it than rotation by 180 degrees. More pedagogically ...
Define a tuple (a,b) and define addition as pointwise addition. (a, b) + (c, d) = (a+c, b+d). Apples to apples, oranges to oranges. Fair enough.
How shall I define multiplication, so that the multiplication so defined forms a group by itself and interacts with the addition defined earlier in a distributive way? Just the way addition and multiplication behave for the reals.
Ah! I have to define it this way. OK that's interesting.
But wait, then the algebra works out so that (0, 1) * (0, 1) = (-1, 0), and the right-hand side is isomorphic to -1. The (x, 0)s behave with each other just the way the real numbers behave with each other.
All this writing of tuples is cumbersome, so let me write (0,1) as i.
Addition looks like the all too familiar vector addition. What does this multiplication look like? Let me plot in the coordinate axes.
Ah! It's just scaled rotation. These numbers are just the 2x2 scaled rotation matrices, parameterized not by 4 real numbers but by just two: one controls the degree of rotation, the other the amount of scaling.

If I multiply two such matrices together I get back a scaled rotation matrix. OK, understandable and expected: a rotation composed with a rotation is a rotation, after all. But if I add two of them I also get back a scaled rotation matrix. Wow, neato!

Because there are really only two independent parameters, and one of them behaves like the reals, let's call the other one "imaginary" and the tupled one "complex".
What if I negate the i in a tuple? Oh! it's reflection along the x axis. I got translation, rotation and reflection using these tuples.
What more can I do? I can surely do polynomials because I can add and multiply. Can I do calculus by falling back to Taylor expansions ? Hmm let me define a metric and see ...
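The construction sketched above is short enough to spell out as code; a sketch with plain tuples, where the multiplication rule (ac - bd, ad + bc) is the one forced by distributivity (the sample tuples are arbitrary):

```python
def add(p, q):
    # pointwise addition: apples to apples, oranges to oranges
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    # the multiplication forced by distributivity
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
assert mul(i, i) == (-1, 0)            # "i squared is -1" pops out

# the (x, 0) pairs behave exactly like the reals
assert mul((3, 0), (5, 0)) == (15, 0)

# and distributivity holds: p*(q + r) == p*q + p*r
p, q, r = (1, 2), (3, -1), (0, 4)
assert mul(p, add(q, r)) == add(mul(p, q), mul(p, r))
```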
You made it seem like rotations are an emergent property of complex numbers, whereas the original definition relies on defining the sqrt of -1.

I'm saying that the origin of complex numbers is the ability to do arbitrary rotations and scaling through multiplication, and that i being the sqrt of -1 is the emergent property.
Not true historically -- the origin goes back to Cardano solving cubic equations.
But that point aside, it seems like you are trying to find something like "the true meaning of complex numbers," basing your judgement on some mix of practical application and what seems most intuitive to you. I think that's fruitless. The essence lies precisely in the equivalence of the various conceptions by means of proof. "i" as a way "to do arbitrary rotations and scaling through multiplication", or as a way to give the solution space of polynomials closure, or as the equivalence of Taylor series, etc. -- these are all structurally the same mathematical "i".
So "i" is all of these things, and all of these things are useful depending on what you're doing. Again, by what principle do you give priority to some uses over others?
Whether or not mathematicians realized this at the time, there is no functional difference between assuming some imaginary number that, when multiplied with another imaginary number, gives a negative number, and essentially moving in more than the 1 dimension of the number line.

Because it was the same way with negative numbers. Creating the "space" of negative numbers allows you to do operations like 3-5+6, which has an answer in the positive numbers, but if you are restricted to positives only, you can't compute it.

In the same way, like I mentioned, quaternions allow movement through 4 dimensions to arrive at a solution that is not possible to achieve with operations in 3 when you have gimbal lock.
So my argument is that complex numbers are fundamental to this, and any field or topological construction on that is secondary.
You disagreed with the parent comment that said
"Rotations fell out of the structure of complex numbers. They weren't placed there on purpose. If you want to rotate things there are usually better ways."
I see Complex numbers in the light of doing addition and multiplication on pairs. If one does that, rotation naturally falls out of that. So I would agree with the parent comment especially if we follow the historical development. The structure is identical to that of scaled rotation matrices parameterized by two real numbers, although historically they were discovered through a different route.
I think all of us agree with the properties of complex numbers, it's just that we may be splitting hairs differently.
I mean, the derivation to rotate things with complex numbers is pretty simple to prove.
If you convert to cartesian, the rotation is a linear operation given by a matrix, which you have to compute from r and theta. And I'm sure you know that for x and y, the rotation matrix giving the new vector x' and y' is
x' = cos(theta)*x - sin(theta)*y
y' = sin(theta)*x + cos(theta)*y
However, like you said, say you want some representation of rotation using only 2 parameters instead of 4, to simplify the math. You can define (xr, yr) in the same coordinates as the original vector. To compute theta, you would need ArcTan(yr/xr), which, plugged back into the sin and cos of the original rotation matrix, gives you back xr and yr. Assuming unit vectors:
x'= xr*x - yr*y
y'= yr*x + xr*y
the only trick you need is to take care of the negative sign on the upper-right term. So you notice that if you just mark the y components with i, and whenever you see i*i you take that to be -1, everything works out.
So overall, all of this is just construction, not emergence.
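The identity in those four lines is easy to sanity-check numerically; here's a throwaway sketch (theta and the test vector are arbitrary choices): multiplying by the complex number xr + yr*i gives the same result as applying the matrix [[xr, -yr], [yr, xr]].

```python
import math

theta = 0.7
xr, yr = math.cos(theta), math.sin(theta)   # the unit "rotor" (xr, yr)
x, y = 2.0, -1.5                            # vector to rotate

# matrix form: x' = xr*x - yr*y, y' = yr*x + xr*y
mx = xr * x - yr * y
my = yr * x + xr * y

# complex form: same rotation as a single multiplication
zc = complex(xr, yr) * complex(x, y)

assert abs(zc.real - mx) < 1e-12 and abs(zc.imag - my) < 1e-12
```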
But all that you said is not about the point that I was trying to convey.
What I showed was: if you define addition of tuples in a certain, fairly natural way, and then define multiplication on the same tuples in such a way that multiplication and addition follow the distributive law (so that you can do polynomials with them), then your hands are forced to define multiplication in a very specific way, just to ensure distributivity. [To be honest, there is another sneaky way to do it if the rules are changed a bit, by using reflection matrices.]

Rotation is so far nowhere in the picture in our desiderata; we just want the distributive law to apply to the multiplication of tuples. That's it.
But once I do that, lo and behold this multiplication has exactly the same structure as multiplication by rotation matrices (emergence? or equivalently, recognition of the consequences of our desire)
In other words, these tuples have secretly been the (scaled) cos theta, sin theta tuples all along, although when I had invited them to my party I had not put a restriction on them that they have to be related to theta via these trig functions.
Or in other words, the only tuples that have distributive addition and multiplication are the (scaled) cos theta sin theta tuples, but when we were constructing them there was no notion of theta just the desire to satisfy few algebraic relations (distributivity of add and multiply).
> "How shall I define multiplication, so that multiplication so defined is a group by itself and interacts with the addition defined earlier in a distributive way. Just the way addition and multiplication behave for reals."
which eventually becomes
> "Ah! It's just scaled rotation"
and the implication is that it's emergent.
It's like you have a set of objects, and you define operations on those objects that have the properties of rotations baked in (because that is the only way that (0, 1) * (0, 1) = (-1, 0) ever works out in your definition), and then you are surprised that you get something that behaves like rotation.
"Imaginary" is an unfortunate name which gives makes this misunderstanding intuitive.
https://youtube.com/playlist?list=PLiaHhY2iBX9g6KIvZ_703G3KJ...
However, what's true about what you and GP have suggested is that both i and -1 are used as units. Writing -10 or 10i is similar to writing 10kg (more clearly, 10 × i, 10 × -1, 10 × 1kg). Units are not normally numbers, but they are for certain dimensionless quantities like % (1/100) or moles (6.02214076 × 10^23) and i and -1. That is another wrinkle which is genuinely confusing.
You need a base to define complex numbers, in that new space i=0+1*i and you could call that a complex number
0 and 1 help define integers, without {Empty, Something} (or empty, set of the empty, or whatever else base axioms you are using) there is no integers
i=0+1*i
Makes i a number. Since * is a binary operator in your space, i needs to be a number for 1*i to make any sense.
Similarly, if = is to be a binary relation in your space, i needs to be a number for i={anything} to make sense.
Comparing i with a unary operator like - shows the difference:
i*i=-1 makes perfect sense
-*-=???? does not make sense
This is one definition of i. Or you could geometrically say i is the orthogonal unit vector in the (real, real) plane where you define multiplication as multiplying lengths and adding angles.

-2 > 1 (in C)
Which is why I prefer to leave <,> undefined in C and just take the magnitude if I want to compare complex numbers.
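Incidentally, Python takes the same stance (a quick check, nothing deep here): there is no built-in order on complex values, but magnitudes are always comparable.

```python
z, w = -2 + 0j, 1 + 0j

# C carries no order: < on complex operands raises TypeError
try:
    z < w
    raised = False
except TypeError:
    raised = True

assert raised
assert abs(-2 + 0j) > abs(1 + 0j)   # but |-2| > |1|, the magnitude comparison
```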