1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.
2. The human brain is full of causal cycles at all scales. Even if you think human reasoning is axiomatisable, it's not at all obvious to me that the set of axioms would be finite or even computable. Again this rules out any application of Gödel's theorem.
3. Penrose's argument revolves around the fact that the sentence encoding "true but not provable" in Gödel's argument is actually provably true in the outer logical system being used to prove Gödel's theorem, just not the inner logical system being studied. But as all logicians know, truth is a slippery concept and is itself internally indefinable (Tarski's theorem), so there's no guarantee that this notion of "truth" used in the outer system is the same as the "real" truth predicate of the inner system (at best it's something like an arbitrary choice, dependent on your encoding). Penrose is referring to "truth" at multiple logical levels and conflating them.
In other words: you can't selectively choose to apply Gödel's theorem to the situation but not any of the other results of mathematical logic.
I don't understand how you mean (3) to apply as a criticism at all. He is not making a claim about truth at some level, he's just reminding us what computation is and what its limits are.
afaiu, there are two ways to counter Penrose's claim:
a. Prove that consciousness is actually just computational.
b. Prove that merely stacking computations can somehow produce a super-computational system.
1. Assume for contradiction that human reasoning is formally describable, as an algorithm or as a logical structure.
2. As a result, the incompleteness theorem produces a sentence that is true in that formal system but which human reasoning cannot detect (by virtue of being described by this formal system).
3. But the proof of the incompleteness theorem shows that this sentence is true, and it was produced by a human, a contradiction.
I don't necessarily disagree with the conclusion (I'm kinda agnostic at this point), I just think that this particular argument doesn't hold water.
It has in fact already been proven in theorem provers, so this completely undermines Penrose’s point.
Even if a full system analysis reveals patterns that can be used for short-term predictions, you're still only looking at inputs and outputs, and run into the system identification problem. Two brains could produce the same measurable output while still experiencing wildly different internal phenomena and qualia. For example, how would I know that my blue is your blue, even if we both answer blue and our visual cortex exhibits similar firing patterns?
A better way of thinking about qualia is like embeddings in a neural network. Every time you run the training from a random initialization you will get a different resulting embedding, but given the same training data all the embeddings will essentially be equivalent under rotation. I.e., your internal representation of blue might be very different from mine in an absolute sense, but our relations between the representations of different colors will be roughly the same.
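To make "equivalent under rotation" concrete, here's a minimal sketch (the data and sizes are made up purely for illustration): two embeddings of the same items that differ only by an orthogonal transform have different absolute coordinates but identical pairwise relations.

    # Sketch: two embeddings related by a rotation disagree in absolute values
    # but agree on every pairwise relation between items.
    import numpy as np

    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=(5, 8))               # "my" representations of 5 colours

    q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # a random orthogonal matrix,
    emb_b = emb_a @ q                             # standing in for a different training run

    print(np.allclose(emb_a, emb_b))                      # False: absolute values differ
    print(np.allclose(emb_a @ emb_a.T, emb_b @ emb_b.T))  # True: all pairwise relations match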
The point is that you may not be able to prove that for a sufficiently connected system.
> A better way of thinking about qualia is like embeddings in a neural network.
I don't think this is a good analogy. Even if two brains might have the same input and output pairs, you don't necessarily know that they had the same experiences. You also don't know that initial epigenetic and prenatal conditions haven't deviated between the two brains. It would be extremely hard to control for this in a lab, and certainly you won't encounter such similarities in the wild.
> your internal representation of blue might be very different from mine in an absolute sense but our relation between the representation of different colors will be roughly the same.
I think you misunderstand qualia, as this is the exact crux of the argument. Just because we can agree to relational congruencies doesn't mean we have the same individual internal experiences. And we can't just handwave this away as "not important" or "roughly the same". In any case, your argument is self-defeating, as any deviation in internal experience validates my original claim.
Further reading on qualia: https://en.wikipedia.org/wiki/Qualia
On the contrary, I would say it's quite unlikely that two brains with the same inputs and outputs will ever have the same experiences in deriving the output from the input. But neither of us (nor anyone in the world) knows how human brains work, so it is probably not useful debating about this.
Everything that makes "truth" slippery makes "intelligence" and "consciousness" even more slippery and subjective. This is why AGI has such a negative impact on AI discourse -- the cause only advances when we focus on improving at measurable tasks.
I like to think people can still make progress on questions of intelligence and consciousness though. Michael Levin's work comes to mind, for instance. Science is just at very early stages of understanding =)
The reasoning is representable with and by a finite number of elementary physical particles and so must itself be finite. Because it is finite it is computable.
Said another way, you would need an infinitely large brain (or an infinitely deep one) to create infinite reasoning.
Busy Beaver numbers are finite, but not computable.
Borealid isn't saying that any finite output is computable, but that the outputs of this specific thing are computable, because as far as we know it has a finite number of states.
This implies that brains can't compute the general nth BB function which is also true as far as we know.
1. It is an unknown whether a finite volume of space can fundamentally be described by a finite number of states. You can extrapolate to this situation from your favourite theory, but this is not evidence that reality actually works like that. Physics has a long way to go to understand space-time completely.
2. Even assuming that is true, brains are not isolated systems. They are entangled with their environment. Why are you so sure that human cognition can be neatly separated into a finite box like this? The reality is almost certainly more complicated.
3. Lastly, you cannot measure a system like the brain to fundamental levels of detail without destroying it. You literally cannot clone a brain state if you take modern physics seriously, so this whole thing is a non-starter anyway.
You are right that we don't have a theory of everything. However, we can go pretty far with what we have and some clever reasoning. Eg when you have two different gases, like hydrogen and helium, in separate containers and mix them, you can build a relatively simple engine to extract work from that mixing.
That engine also works when your gases are almost but not quite the same, eg when you have deuterium and hydrogen.
But it doesn't work when you have the same gas, like hydrogen, on both sides.
That gives a pretty strong hint that hydrogen atoms 'have no hair', ie they are all the same.
I think you underestimate the complexity of systems with more than a handful of interacting particles. Yes, you can extrapolate with what we have but I think the answers you get are likely to be totally wrong outside of a very controlled regime. People simulating large scale interaction have to make a lot of simplifying assumptions, which need to be tuned to the problem at hand to be effective. Non-equilibrium statistical mechanics is poorly understood.
Our discussion was about the computability only, wasn't it?
An immortal human might be able to produce incomputable reasoning, but I would say it's more reasonable to talk about humans with finite runtime.
Sorry, I have a hard time understanding this. Are we talking about the same thing? https://en.wikipedia.org/wiki/Busy_beaver
The Busy Beaver deals with two kinds of programs: those that can use infinite amounts of time and space, and those that only use finite amounts of time (and thus also only finite amounts of space).
As far as I can tell, there's no case in the Busy Beaver setup where space is finite but time is infinite.
And in any case, if your space is finite, you don't need infinite time: you 'trivially' can detect that you are reaching a tape state that you have reached before and abort. The busy beaver is harder than that.
If there is a cap on both the program size and the execution time of any individual Busy Beaver candidate, then it becomes computable by the trivial expedient of generating every single possible Turing machine of the target size, executing each (stopping at the time limit), then returning the one that ran for the greatest number of steps and terminated.
In other words, the Busy Beaver Game is noncomputable because of the Halting Problem, and the Halting Problem is noncomputable because it lacks an upper bound in time.
Incidentally, bounding time also bounds output space because at each time unit at most one unit of output may be written.
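Here's a hedged sketch of that trivial expedient (a toy encoding, not a real Busy Beaver search): enumerate every 2-symbol machine with a given number of states, cap each run at a step limit, and keep the longest halting run. With the cap in place the whole thing is mechanically computable.

    # Bounded "Busy Beaver" by brute force: enumerate every transition table under
    # this simple encoding, simulate each with a step cap, report the longest halting run.
    from itertools import product

    def run(table, step_limit):
        """Steps taken if the machine halts within the limit, else None."""
        tape, pos, state, steps = {}, 0, 0, 0
        while steps < step_limit:
            action = table[(state, tape.get(pos, 0))]
            if action is None:                    # this transition halts the machine
                return steps
            write, move, nxt = action
            tape[pos] = write
            pos += move
            state = nxt
            steps += 1
        return None                               # gave up: might loop, might halt later

    def bounded_bb(n_states, step_limit):
        keys = [(s, b) for s in range(n_states) for b in (0, 1)]
        actions = [None] + [(w, m, t) for w in (0, 1) for m in (-1, 1) for t in range(n_states)]
        best = 0
        for choice in product(actions, repeat=len(keys)):   # every possible transition table
            steps = run(dict(zip(keys, choice)), step_limit)
            if steps is not None:
                best = max(best, steps)
        return best

    print(bounded_bb(2, 1000))   # small enough to exhaust; larger n explodes combinatorially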
Not completely. The halting problem is computable for some more limited forms of computation, even if they have no upper bound in time.
In an alternative formulation: for some Turing machines we can prove in finite time that they run forever. Thus your 'lack of an upper bound in time' is not enough.
But providing an oracle for the halting problem - of which an upper bound is one form - makes the Busy Beaver Game computable.
To get back to the original topic... a finite brain in finite time can only produce finite output. Both the brain and its output are, like all fully bounded sets, computable.
Your last point is correct, though: if you can detect a "cycle" (like the one the Turing machine I linked to goes through) then you can conclude that the machine won't halt.
But as you agree, they are of no concern for the computability of Busy Beaver numbers. (However, they are of major concern for people who want to find Busy Beaver numbers in practice.)
Turning it around, the answer to "can a machine of infinite size do things a finite computer can't" is "yes". That answer ends up being the reason many things aren't computable, including the halting problem.
The halting problem is a trick in disguise. The trick is: no one said the program you are checking halts had to have finite code, or finite storage. Once you see the trick the halting problem loses a lot of its mystique.
On your second point - allowing infinitely many steps of computation lets you solve the halting problem for regular Turing machines, but you still get an infinitary version of the halting problem that's incomputable (same proof more or less). So I don't think that's really the issue at stake.
The difficulty of a problem (which is the time required to find the answer) is determined by how many possibilities you have to explore. Since Busy Beaver is defined in terms of the number of 1's on a tape when it halts, a "possibility" is how many arrangements of 1's a Turing tape can support. As a Turing tape is infinitely long, the answer is that it supports an infinite number of arrangements of 1's.
This all follows from the definition of "not computable". We say a problem isn't computable if we can't find the answer with a finite sized program, using finite space. It's not an unreasonable definition. How else could you define it?
That definition does leave out the number of steps needed to find the solution. The OP's assertion above is that if both the program and the space are finite, then the number of steps must also be finite or it must loop forever. I'll leave looking up why as an exercise for the reader. (The proof is pretty simple. It's only a few lines long.) That means in a finite system there is no "halting problem", because looping is easy enough to detect when it happens, and you will either eventually see the program loop or halt because there are no other possible outcomes.
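A sketch of that loop-detection idea (step() and is_halted() are hypothetical stand-ins for whatever finite system you're simulating): because there are only finitely many configurations, remembering the ones already visited is enough to decide halting.

    # Loop detection in a finite-configuration system: either it halts, or it
    # revisits a configuration, at which point we know it runs forever.
    def halts(initial_config, step, is_halted):
        seen = set()
        config = initial_config
        while not is_halted(config):
            if config in seen:        # same configuration twice => it loops forever
                return False
            seen.add(config)
            config = step(config)
        return True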
"Non-computable" therefore means "oops we hit an infinity". Twisting that around, if we decide something isn't computable an infinity must have snuck into the problem somehow. All the proofs you see demonstrating something is non-computable on a Turing machine happen because the infinite thing sneaking in is it's tape. Restrict the Turing tape to being finite, and every problem that can be solved on it is computable.
If you want to see how this works in a practical sense, consider BusyBeaver(6). It's possible we will never solve it because to solve it you need to solve the Collatz Conjecture. The conjecture is simple: if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. It's easy to disprove: all you have to do is find a counterexample. None has been found of course, but that doesn't mean much because there are infinite numbers to check and infinite means non-computable. But what if we remove the infinity? Let's just insist x < N, where N is an integer. Then the Collatz Conjecture becomes solvable, and if BusyBeaver(6) doesn't contain another such puzzle it becomes solvable too.
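One way to read that bounded version, as a hedged sketch: require every value in a trajectory to stay below N, so each check either reaches 1, escapes the bound, or repeats a value, and the whole search always terminates.

    # Bounded Collatz check: a trajectory that leaves the bound (or repeats a value)
    # counts as a failure, so every trajectory terminates and the question is decidable.
    def bounded_collatz_holds(N):
        for start in range(1, N):
            x, seen = start, set()
            while x != 1:
                if x >= N or x in seen:
                    return False
                seen.add(x)
                x = x // 2 if x % 2 == 0 else 3 * x + 1
        return True

    print(bounded_collatz_holds(10_000))   # False: the trajectory from 255 climbs past 10,000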
I may have misunderstood what you meant when I replied.
I agree with basically everything you have written here in spirit, but I don't think "oops an infinity snuck in" is really a useful way to think about computability. There are lots of infinities that can be tamed by Turing machines, by a program that computes membership, for instance. The even numbers are a trivial example. The infinities of computation are asymptotic, not absolute - a Turing machine that solves a decision problem may use unbounded space as a function of input size, but this is not the same thing as literally using an infinite amount of space.
Restricting to finite tapes is pointless from a mathematical point of view, precisely because everything becomes computable in finite time/space.
To someone (which I assume covers most here) who is familiar with mid-level high school maths (calculus, infinite series) this is obvious, surely. We don't just tame infinities in maths and physics, we put them to work. You not only have to be aware of them, you have to use them to get some important results. I doubt you would find someone willing to argue against this in mathematics.
The same statement is true in computing. You have to be aware of unwanted infinities. I'd lay long odds there are more programmers out there who have created unwanted infinite loops than there are maths students who have unwittingly divided by 0 and got a nonsensical result. Yet at the same time, infinities lie at the heart of some important results - like the halting problem. Everyone should be aware they are both a nuisance you don't want sneaking in unnoticed, and a very valuable tool.
> I agree with basically everything you have written here in spirit, but I don't think "oops an infinity snuck in" is really a useful way to think about computability.
I disagree. The halting problem is one specific result. It is merely the outcome of a more fundamental idea. To wit, there are infinitely many states (think: infinite strings of bits) that no finite program can describe (or equivalently reproduce, or recognise). It's one of those rare things that is hard to get your head around when you first hear it, then becomes obvious after a bit of thought. Somewhat less obvious, to me anyway, is that the result applies even if you give that finite program infinite time and infinite space.
Like most fundamental ideas, it leads to other results with little effort. For example, if you view the finite program as a compressed version of the infinite space, it means some infinite spaces can not be compressed. This result is doubly interesting because it extends to finite systems, leading to a linking of the seemingly disparate ideas of randomness, compression and computability. It also means that if the universe has some infinity hiding inside it, it's possible our finite minds can never understand it. It means you will never be able to write a finite program that proves some infinitely long programs halt. It almost certainly means there is an N above which BB(N) isn't computable, and from what we know now that N may be 6. It is why Gödel's theorem holds. Most of those things aren't nearly as easily deduced from the halting problem, which is why it isn't as useful for understanding the effects of infinities on computability.
Above I think you are saying "but, but, introducing the idea that some infinities can be described might confuse some programmers. It may lead them to think a finite program can't describe an infinite series of 1's, for example". I'm not sure how many programmers you know, because in the world I live in not a single one would be confused by that. All are perfectly capable of understanding that some infinities can be described (not in the least because they've probably written a few by mistake) and this new idea that some infinities can't be written down.
My claim is that all of the results you are attributing to "infinity" are true for other reasons, like diagonalisation, and cannot be proven directly from the concept of infinity like you are claiming. It's a heuristic that is not helpful for actually doing the mathematics.
Feel free to prove me wrong by giving me a (mathematical, not hand waving) proof of the halting theorem that only makes use of "infinity" as you are describing it. You won't be able to do it, because it's not the crux of the halting problem. It holds for infinite programs too, because diagonalization arguments don't care how big your set is.
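For reference, the diagonalisation argument being alluded to, as a sketch (halts() is the decider assumed to exist for the sake of contradiction; nothing in it relies on any program or tape being infinite):

    # Assume a total decider halts(program, input) exists.
    def halts(program_source, program_input):
        raise NotImplementedError("assumed decider; the argument shows it cannot exist")

    def contrarian(program_source):
        if halts(program_source, program_source):   # decider says "it halts"...
            while True:                             # ...so do the opposite and loop forever
                pass
        return                                      # decider says "it loops", so halt at once

    # Feed contrarian its own source: if halts(c, c) is True, contrarian(c) loops;
    # if it is False, contrarian(c) halts. Either way the decider is wrong.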
Anyway, I don't really have the energy for this. I appreciate the time and discussion, for which I thank you. But as someone who studied maths at a PhD level and now programs for a living, I'm not getting much out of these ideas. Perhaps we can just agree to disagree for now.
https://en.wikipedia.org/wiki/Computability has a bunch of definitions.
Btw, your reasoning is not really valid. There's plenty of Turing machines that print out lots of 1s (either finitely or infinitely many), where we can accurately predict in finite time how many they will print out. (To fill out the details: crucially the prediction runs much faster than the original machine, and you can assume that we give that answer in eg binary numbers, so we can write it down more compactly than in unary).
> That definition does leave out the number of steps needed to find the solution. The OP's assertion above is that if both the program and the space are finite, then the number of steps must also be finite or it must loop forever. I'll leave looking up why as an exercise for the reader. (The proof is pretty simple. It's only a few lines long.) That means in a finite system there is no "halting problem", because looping is easy enough to detect when it happens, and you will either eventually see the program loop or halt because there are no other possible outcomes.
Please be very careful about distinguishing a system with a fixed upper limit from one that can be arbitrarily big but finite.
Also be careful, the opposite of your assertion doesn't hold: even an arbitrarily extendable tape doesn't mean that the halting problem must be a problem.
As a last caveat: just because the definition of your problem uses infinity (or the limit of an infinite process, etc) doesn't mean that infinity is inherent to what you are defining. Eg 1 + 1/2 + 1/4 + 1/8 + ... is the sum of an infinite series, but you might be aware that we can calculate the result with a finite computation.
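To make that concrete with plain arithmetic:

    # The finite computation: a geometric series with ratio 1/2 sums to
    # first_term / (1 - ratio), no infinite process needed.
    print(1 / (1 - 0.5))   # 2.0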
Similarly, the classic definition of Busy Beaver numbers mentions infinity, but that doesn't automatically mean all means of arriving at these numbers have to involve infinite processes.
> [...] infinite means non-computable.
No. Not at all. You argued that non-computable implies an infinity somewhere, and I can grant that. But the reverse is not true. There's lots of variants of that Collatz conjecture with slightly different rules that also cover all numbers, and that we can definitely prove or disprove.
Or more trivially: take any old mathematical statement over all natural numbers. Many of them can be proven true or false, despite the infinity.
The reasoning is: not-computable implies an infinity. As you know, "implies" is not symmetric, so no, I wasn't intending to say the reverse.
> > [...] infinite means non-computable.
You didn't quote the full sentence. It was "because there are infinite numbers to check and infinite means non-computable". Being forced to check an infinite number of possibilities is one definition of non-computable, as you acknowledged.
OK, but you implied you were giving a definition? Definitions are usually two sided, where every 'if' is silently implied to be an 'iff' for convenience.
> You didn't quote the full sentence. It was "because there are infinite numbers to check and infinite means non-computable". Being forced to check an infinite number of possibilities is one definition of non-computable, as you acknowledged.
Not really. You'd need to prove that you actually need to check the numbers. We need to exclude that we just lacked the right idea.
Eg it's fairly easy to prove that there's an infinite amount of prime numbers without having to check all numbers. It was a lot harder to prove Fermat's Last Theorem without checking all the numbers. For the Collatz conjecture, we don't know.
What I did acknowledge was that _if_ you can prove an upper bound, then checking all numbers is computable, yes.
But if we haven't found an upper bound (yet), we have no clue whether the problem is computable or not.
We’ve constructed Turing machines with fewer than 800 states that halt if and only if ZFC is inconsistent. Which means that even given infinite time and space, we still couldn’t find BB(800) without fundamentally changing our system of mathematics.
Do we even know this much?
Your argument, though this might not be your intention, infers that just because you can't prove X, Y must exist.
Human reasoning is certainly limited. I mean, imagine the kinds of formulas one would get applying GT to the brain. They would be so enormous they'd be entirely impenetrable to human reasoning. We couldn't even read them in a single lifetime.
As for proving GT itself, this proof has been formalized. There's no reason a computer couldn't prove GT by itself.
For instance the set of even numbers is infinite but computable, just check whether the number is divisible by 2.
The algorithm itself is finite, even if the set it determines is not.
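For concreteness, a tiny decider for that infinite set:

    # The infinite set of even numbers is decided by a finite procedure.
    def is_even(n: int) -> bool:
        return n % 2 == 0

    print(is_even(10**100))   # True, even for arbitrarily large inputs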
I think the liar's paradox is of a different kind. It's a sentence that looks well-formed but arguably has no truth value. If you were to formalise human reasoning as a logical system, such sentences would not be definable in it.
Either way, for Penrose's argument to work you actually need the proof of Gödel's theorem to hold, not just the result.
We think, that is a fact.
Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such a function.
Now, the question is: can we create a smaller function capable of performing the same feat?
If we assume that that function is computable in the Turing sense then, kinda yes, there are an infinite number of Turing machines that given enough time will be able to produce the expected results. Basically we need to find something between our own brain and the Kolmogorov complexity limit. That lower bound is not computable, but given that my cats understand when we are discussing taking them to the vet then... maybe we don't really need a full sized human brain for language understanding.
We can run Turing machines ourselves, so we are at least Turing equivalent machines.
Now, the question is: are we at most just Turing machines or something else? If we are something else, then our own CoT won't be computable, no matter how much scale we throw at it. But if we are then it is just matter of time until we can replicate ourselves.
When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs. So human thinking is probably computable, and I think that LLMs can be said to “think” in ways that are analogous to what we do.
But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I don’t necessarily think that you need to subscribe to dualism or religious beliefs to explain consciousness - it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
But I do think it’s still important to maintain a distinction between “thinking” (computable, we do it, AIs do it as well) and “consciousness” (we experience it, probably many animals experience it also, but it’s orthogonal to the linguistic or logical reasoning processes that AIs are currently capable of).
At some point this vague experience of awareness may be all that differentiates us from the machines, so we shouldn’t dismiss it.
> You've got to be careful when you say what the human does, if you add to the actual result of his effort some other things that you like, the appreciation of the aesthetic... then it gets harder and harder for the computer to do it because the human beings have a tendency to try to make sure that they can do something that no machine can do. Somehow it doesn't bother them anymore, it must have bothered them in earlier times, that machines are stronger physically than they are...
- Feynman
Maybe we can swap out "think" with "experience consciousness"
Function can mean inputs-outputs. But it can also mean system behaviors.
For instance, recurrence is a functional behavior, not a functional mapping.
Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.
Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable it is of generating outputs. Not even when its outputs involve symbols and reasoning about concepts like consciousness.
So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.
Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
Yet even with this radical reduction in general awareness, and our higher level thinking, we remain aware of our awareness of experience. We are not unconscious.
To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.
There is not nothing. We are not actually doing nothing. Our mental resting state is still a dynamic state we continue to actively process, that our neurons continue to give us feedback on, even when that processing has been simplified to simply letting that feedback of our state go by with no need to act on it in any way.
So consciousness is inherently at least self-awareness in terms of internal access to our own internal activity. And that we retain a memory of doing this minimal active or passive self-monitoring, even after we resume more complex activity.
My own view is that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-awareness looping state afterwards. Ask questions about its nature, etc.
> Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
The sensation of breathing still constitutes input. Nor is it a given that a thought is necessarily encodable in words, so "thinking about the concept of nothing" is still a thought, and there's some measurable electrochemical activity in the brain encoding it. In a similar vein, LLMs deal with arbitrary tokens, which may or may not encode words - e.g. in multimodal LMs, input includes tokens encoding images directly without any words, and output can similarly be non-word tokens.
It is, but (1) the amount of looping in models today is extremely trivial. If our awareness loop is on the order of milliseconds, we experience it on the order of thousands of milliseconds at a minimum, and consider and consolidate our reasoning about experiences over minutes, hours, even days. Which would be thousands to many millions of iterations of experiential context.
Then (2), the looping of models today is not something the model is aware of at a higher level. It processes the inputs iteratively, but it isn't able to step back and examine its own responses recurrently at a second level in a different indirect way.
Even though I do believe models can reason about themselves and behave as if they did have that higher functionality.
But their current ability to reason like that has been trained into them by human behavior, not learned independently by actually monitoring their own internal dynamics. They cannot yet do that. We do not learn we are conscious, or become conscious, by parroting others' consciousness-enabled reasoning. A subtle but extremely important difference.
Finally, (3) they don't build up a memory of their internal loops, much less a common experience from a pervasive presence of such loops.
Those are just three quite major gaps.
But they are not fundamental gaps. I have no doubt that future models will become conscious as limitations are addressed.
Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.
I imagined the Catholic Church, for example, would be publishing missives reminding everyone that only humans can have souls, and biologists would be fighting a quixotic battle to claim that consciousness can arise from physical structures and forces.
I'm still surprised at how credulous and accepting societies have been of AI developments over the last few years.
AI developments over the last few years have not needed that view to change.
I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?
Btw, Turing machines provide some inspiration for an interesting definition:
Turing (and Gödel) essentially say that you can't predict what a computer program does: you have to run it to even figure out whether it'll halt. (I think in general, even if you fix some large fixed step size n, you can't even predict whether an arbitrary program will halt after n steps or not, without essentially running it anyway.)
Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience as the human would have had.
(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because of the no cloning theorem.
To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to differences in outcome.
If you are only interested in approximate predictions, identical twins show that just getting the same DNA and approximation of the environment gets you pretty far in making good predictions. So cell level scans could be even better. But: not perfect.)
I think it's a good point, but I would argue it's even more direct than that. Humans themselves can't reliably predict what they are going to do before they do it. That's because any knowledge we have is part of our deliberative decision-making process, so whenever we think we will do X, there is always a possibility that we will use that knowledge to change our mind. In general, you can't feed a machine's output into its input except for a very limited class of fixed point functions, which we aren't.
So the bottom line is that seen from the inside, our self-model is a necessarily nondeterministic machine. We are epistemically uncertain about our own actions, for good reason, and yet we know that we cause them. This forms the basis of our intuition of free will, but we can't tell this epistemic uncertainty apart from metaphysical uncertainty, hence all the debate about whether free will is "real" or an "illusion". I'd say it's a bit of both: a real thing that we misinterpret.
Ie I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.
I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options and c) its decisions are for the most part determined by physically internal factors (as opposed as external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
> [...] c) its decisions are for the most part determined by physically internal factors (as opposed as external constraints or publicly available information).
I don't think humans reach that threshold. Though it depends a lot on how you define things.
But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.
> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
Your homunculus is one hell of a complexity threshold.
"We've all been dancing around the basic issue: does Data have a soul?" -- Captain Louvois. https://memory-alpha.fandom.com/wiki/The_Measure_Of_A_Man_(e...
Both matter of course.
Impossible to answer.
Btw I mostly think it’s reasonable to think that consciousness, phenomenology etc. are possible in silicon, but it’s tricky and unverifiable ofc.
If the original one did, then yes, of course. You're performing the exact same processing.
Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.
The notion that it is not a physical process is an extraordinary claim in its own right, which itself requires evidence.
But your simulation will never fly you over an ocean, it will never be an aircraft or do what aircraft do. A simulation of heat transfer will not cook your dinner. Your assumption that a simulation of a mind is a mind requires evidence.
It will fly over a simulated ocean just fine. It does exactly what aircraft do, within the simulation. By adding “you” to the sentence you've made it an apples to oranges comparison because “you” is definitionally not part of the simulation. I don't see how you could add the same “you” to “it will simulate consciousness just fine”.
It doesn't move real Oxygen and Nitrogen atoms, it doesn't put exhaust gas into the air over the ocean, it doesn't create a rippling sound and pressure wave for a thousand miles behind it, it doesn't drain a certain amount of jet fuel from the supply chain or put a certain amount of money in airline and mechanics' pockets, it doesn't create a certain amount of work for air traffic controllers... the reductio ad absurdum is that a flipbook animation of a stickman aircraft moving over a wiggly line ocean is a very low granularity simulation and "does exactly what aircraft do" - and obviously it doesn't. No amount of adding detail to the simulation moves it one inch closer to doing 'exactly what aircraft do'.
> "I don't see how you could add the same “you” to “it will simulate consciousness just fine”"
by the same reductio-ad-absurdum I don't see how you can reject a stickman with a speech bubble drawn over his head as being "a low granularity simulated consciousness". More paper, more pencil graphite, and the stickman will become conscious when there's enough of it. Another position is that adding things to the simulation won't simulate consciousness just fine - won't move it an inch closer to being conscious; it will always be a puppet of the simulator, animated by the puppeteer's code, always wooden Pinocchio and never a real person. What is the difference between these two:
a) a machine with heat and light and pressure sensors, running some code, responding to the state of the world around it.
b) a machine with heat and light and pressure sensors, running some code [converting the inputs to put them into a simulation, executing the simulation, converting the outputs from the simulation], and using those outputs to respond to the state of the world around it.
? What is the 'simulate consciousness' doing here at all, why is it needed? To hide the flaw in the argument; it's needed to set up the "cow == perfectly spherical massless simulated cow" premise which makes the argument work in English words. Instead of saying something meaningful about consciousness, one states that "consciousness is indistinguishable from perfectly spherical massless simulated consciousness" and then states "simply simulate it to as much detail as needed" and that allows all the details to be handwaved away behind "just simulate it even more (bro)".
Pointing out that simulations are not the real thing is the counter-argument. Whether or not the counter-argument can be made by putting "you" into a specific English sentence is not really relevant, that's only to show that the simulated aircraft doesn't do what the real aircraft does. A simulated aircraft flying over a simulated ocean is no more 'real' than drawing two stick figures having a conversation in speech bubbles.
That's just semantics. I'm not here to argue what the word “real” means. Of course you can define it in such a way that the simulated aircraft isn't “really” flying over an ocean, and it would be just as valid as any other definition, but it doesn't say anything meaningful or insightful about the simulation.
Nobody contests your point that the simulated aircraft isn't going over a real ocean and isn't generating work for real-life air traffic controllers. But conversely you don't seem to contest the claim that oceans and air traffic controllers could be simulated, too. Therefore, consciousness can be simulated as well, and it would be a simulated consciousness that just doesn't fall into your definition of “real”.
As far as physics go, it's all just numbers in the end. Indeed, the more we keep digging into the nature of reality, the more information theory keeps popping up - see e.g. the holographic principle.
No it isn't; numbers are a map, maps are not the territory. You are asking me to define how a map is different from a city, but you are not accepting that the city is made of concrete and is square kilometers large and the map is made of paper and is square centimeters large as a meaningful difference, when I think it's such an obvious difference it's difficult to put any more clearly.
What constitutes a real atom: a Hydrogen atom capable of combining with Oxygen to make water, capable of being affected by the magnetic field of an MRI scanner, etc.
What constitutes a simulated atom: a pattern of bits/ink/numbers which you say "this is a representation of a Hydrogen atom", capable of nothing, except you putting some more bits/ink/numbers near it and speaking the words "this is it interacting to make simulated water".
Do you deny that you could be in a simulation right now, in the matrix? What you actually think are molecules of oxygen are actually simulated molecules, and there is no way for you to ever tell the difference.
> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.
However, I don’t really buy “of course it would,” or in other words the materialist premise - maybe yes, maybe no, but I don’t think there’s anything definitive on the matter of materialism in philosophy of mind. As much as I wish I were fully a materialist, I can never fully internalize how sentience can uh emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which uh sucks, because I definitely don’t believe in dualism!
We can, in a way, articulate the underlying chemputation of the universe mediated through our senses, reflection and language; turn a piece of it off (as it is often non-continuous) and the quality of the experience changes.
It likely is a fact, but we don't really know what we mean by "think".
LLMs have illuminated this point from a relatively new direction: we do not know if their mechanism(s) for language generation are similar to our own, or not.
We don't really understand the relationship between "reasoning" and "thinking". We don't really understand the difference between Kahneman's "fast" and "slow" thinking.
Something happens, probably in our brains, that we experience and that seems causally prior to some of our behavior. We call it thinking, but we don't know much about what it actually is.
AIs are not going to be like humans because they will have perfect recall of a massive database of facts, and be able to do math well beyond any human brain.
The interesting question to me is, when will we be able to give AI very large tasks, and when will it to be able to break the tasks down into smaller and smaller tasks and complete them.
When will it be able to set its own goals, and know when it has achieved them?
When will it be able to recognize that it doesn't know something and do the work to fill in the blanks?
I get the impression that LLMs don't really know what they are saying at the moment, so don't have any way to test what they are saying is true or not.
I think you’re right, LLMs have demonstrated that relatively sophisticated mathematics involving billions of params and an internet full of training data is capable of some truly, truly, remarkable things. But as Penrose is saying, there are provable limits to computation. If we’re going to assume that intelligence as we experience it is computable, then Gödel’s theorem (and, frankly, the field of mathematics) seems to present a problem.
Humans have a special thingy that makes consciousness. Computers do not have the special thingy. Therefore computers cannot be conscious.
But Dualism gets you laughed at these days so Dualists have to code their arguments and pretend they aren't into that there Dualism.
Penrose's arguments against AI have always felt to me like special pleading that humans (or to stretch a bit further, carbon based lifeforms) are unique.
If this were to be true, it would follow that computers as we build them today would fundamentally not be able to match human problem-solving. But it would not follow, in any way, that it would be impossible to build "hyper computers" that do. It just means you wouldn't have any chance of getting there with current technology.
Now, I don't think Penrose's arguments for why he thinks this is the case are very strong. But they're definitely not mystical dualistic arguments, they're completely materialistic mathematical arguments. I think he leans towards an idea that quantum mechanics has a way of making more-than-Turing computation happen (note that this is not about what we call quantum computers, which are fully Turing-equivalent systems, just more efficient for certain problems), and that this is how our brains actually function.
That was my understanding of Penrose's position as well, which is just a "Consciousness of the Gaps" argument. As we learn more about quantum operations, the space for consciousness as a special property of humans disappears.
I just watched an interview where he made that exact statement nearly word for word.
His only argument is that it is not computable, not that it’s not physical. He does think the physical part involves the collapse of the wave function due to gravity, and that somehow the human brain is interacting with that.
So to produce consciousness in his view, you’d need to construct something capable of interacting with the quantum world the same way he believes organic brains do (or something similar to it). A simulation of the human brain wouldn’t do it.
what if we are?
and our brain is the "billion parameter model", continuously "training", that takes input and spits out output
I understand we as a species might be reluctant to admit that we are just matter and our thinking/consciousness is just electricity flowing
btw, I'm not selling anything :)
On the contrary, we have 86B neurons in the brain, the weighting of the connections is the important thing, but we are definitely 'running' a model with many billions of parameters to produce our output.
The theory by which the brain mainly works by predicting the next state is called predictive coding theory, and I would say that I find it pretty plausible. At the very least, we are a long way from knowing for certain that we don't work in this way.
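For what it's worth, here is a toy sketch of the prediction-error loop at the heart of that idea (the signal and the learning rate are made up; this is an illustration, not a claim about how predictive coding is actually formalised):

    # Toy predictive-coding loop: an internal estimate is repeatedly corrected by
    # the prediction error against noisy input, and settles near the true value.
    import random

    estimate, learning_rate = 0.0, 0.1
    for _ in range(1000):
        observation = 5.0 + random.gauss(0, 1)   # noisy sensory input around a true value of 5
        prediction_error = observation - estimate
        estimate += learning_rate * prediction_error
    print(round(estimate, 2))                     # ends up close to 5.0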
The neurons (cells) in even a fruit flies brain are orders of magnitude more complex than the "neurons" (theoretical concept) in a neural net.
> the weighting of the connections is the important thing
In a neural net, sure.
In a biological brain, many more factors are important: The existence of a pathway. Antagonistic neurotransmitters. NT re-incorporation. NT-binding sensitivity. Excitation potential. Activity of Na/K channels. Moderating enzymes.
Even what we last ate or drank, how rested, old, or hydrated we are, when our last physical activity took place, and all the interactions prior to an input influence how we analyse and integrate it.
> but we are definitely 'running' a model with many billions of parameters to produce our output.
No, we are very definitely not. Many of our mental activities have nothing to do with state prediction at all.
We integrate information.
We exist as a conscious agent in the world. We interact, and by doing so change our own internal state alongside the information we integrate. We are able to, from this, simulate our own actions and those of other agents, and model the world around us, and then model how an interaction with that world would change the model.
We are also able to model abstract concepts both in and outside the world.
We understand what concepts, memories, states, and information mean both as abstract concepts and concrete entities in the universe.
We communicate with other agents, simultaneously changing their states and updating our modeling of their internal state (theory of the mind, I know that you know that I know, ...)
We filter, block, change, and create information.
And of course we constantly learn and change the way we do ALL OF THIS, consciously and subconsciously.
> At the very least, we are a long way from knowing for certain that we don't work in this way.
If the process in the brain is modellable at all, then it is certainly a model with at a minimum many billions of parameters. Your list of additional parameters if anything supports that rather than arguing against it. If you want to argue with that contention, I think you need to argue that the process isn't modellable, which if you want to talk about burden of proof, would place a huge burden on you. But maybe I misunderstood you. I thought you were saying that it's ludicrous to say we're using as many as billions of parameters, but perhaps you're trying to say that billions is obviously far too small, in which case I agree.
My second point, which is that there's a live theory that prediction may be a core element of our consciousness, was intended as an interesting aside. I don't know how it will stand the test of time and I certainly don't know if it's correct or not; I intended only to use it to show that the things you seem to think are obvious are not in fact obvious to everyone.
For example, that big list of things that you are using as an argument against prediction doesn't work at all because you don't know whether they are implemented via a predictive process in the brain or not.
It feels like rather than arguing against modellability or large numbers of parameters or prediction, you're arguing against the notion that the human brain is exactly an llm, which is an idea so obviously false I don't think anyone actually holds it.
> perhaps you're trying to say that billions is obviously far too small, in which case I agree.
No, it doesn't, and I don't.
The processes that happen in a living brain don't just map to "more params". It doesn't matter how many learnable parameters you have... unless you actually change the paradigm, an LLM or similar construct is incapable of mapping a brain, period. The simple fact that the brain's internal makeup is itself changeable already prevents that.
> prediction may be a core element of our consciousness
No it isn't, and it's trivially easy to show that.
Many meditative techniques exist where people "empty their mind". They don't think or predict anything. Does that stop consciousness? Obviously not.
Can we do prediction? Sure. Is it a "core element", aka. indispensable for consciousness? No.
But language processing is just one subset of human cognition. There are other layers of human experience like sense-perception, emotion, instinct, etc. – maybe these things could be modeled by additional parameters, maybe not. Additionally, there is consciousness itself, which we still have a poor understanding of (but it's clearly different from intelligence).
So anyway, I think that it's reasonable to say that LLMs implement one sub-set of human cognition (the part that has to do with how we think in language), but there are many additional "layers" to human experience that they don't currently account for.
Maybe you could say that LLMs are a "model distillation" of human intelligence, at 1-2 orders of magnitude less complexity. Like a smaller model distilled from a larger one, they are good at a lot of things but less able to cover edge cases and accuracy/quality of thinking will suffer the more distilled you go.
We tend to equate "thinking" with intelligence/language/reason thanks to 2500 years of Western philosophy, and I believe that's where a lot of confusion originates in discussions of AI/AGI/etc.
Related is the platonic representation hypothesis where models apparently converge to similar representations of relationships between data points.
https://phillipi.github.io/prh/ https://arxiv.org/abs/2405.07987
To put this another way, I think that you can say that much of our own intelligence as humans is embedded in the sum total of the language that we have produced. So the intelligence of LLMs is really our own intelligence reflected back at us (with all the potential for mistakes and biases that we ourselves contain).
Edit: I fed Claude this paper, and "he" pointed out to me that there are several examples of humans developing accurate conceptions of things they could never experience based on language alone. Most readers here are likely familiar with Helen Keller, who became an accomplished thinker and writer in spite of being blind and deaf from infancy (Anne Sullivan taught her language despite great difficulty, and this was Keller's main window to the world). You could also look at the story of Eşref Armağan, a Turkish painter who was blind from birth (he creates recognizable depictions of a world that he learned about through language and non-visual senses).
However, this doesn't mean in any way that an LLM might not produce the same or even superior output than a human would in certain very useful circumstances. It just means it functions fundamentally differently on the inside.
Obviously the brain isn't running an exact implementation of the attention paper, and your point about how the brain is more malleable than our current llms is a great point, but that just proves they aren't the same. I fully expect that future architectures will be more malleable; if you think that such hypothetical future architectures will be fundamentally different from the current ones then we agree.
We mistakenly assume they are true, perhaps because we want them to be true. But we have no proof that either of these is true.
Quick context: His view of what constitutes a subject, which is to say a thinking person in this case, is one which over time (and time is very important here) observes manifold partial aspects about objects through perception, then through apprehension (the building of understanding through successive sensibilities over time) the subject schematizes information about the object. Through logical judgments, from which Kant derives his categories, we can understand the object and use synthetic a priori reasoning about the object.
So for him, the statement "I am" means simply that you are a subject who performs this perception and reasoning process, as one's "existence" is mediated and predicated on doing such a process over time. So then "I think, therefore I am" becomes a tautology. Assuming that the "I" in "I am" exists as an object, which is to say a thing of substance, one which other thinking subjects could reason about, becomes what he calls "transcendental illusion", which is the application of transcendental reasoning not rooted in sensibility. He calls this metaphysics, and he focuses on the soul (the topic at hand here), the cosmos, and God as the three topics of metaphysics in his Transcendental Dialectic.
I think that in general, discussion about epistemology with regard to AI would be better if people started at least from Kant (either building on his ideas or critical of them), as his CPR really shaped a lot of the post-Enlightenment views on epistemology that a lot of us carry with us without knowing. In my opinion, AI is vulnerable to a criticism that empiricists like Hume applied to people (viewing people as "bundles of experience" and critiquing the idea that we can create new ideas independent of our experience). I do think that AI suffers from this problem, as estimating a generative probability distribution over data means that no new information can be created that is not simply a logically ungrounded combination of previous information. I have not read any discussion of how Kant's view of our ability to make new information (application of categories grounded by our perception) might influence a way to make an actual thinking machine. It would be fascinating to see an approach that combines new AI approaches as the way the machine perceives information and then combines it with old AI approaches that build on logic systems to "reason" in a way that's grounded in truth. The problem with old AI is that it's impossible to model everything with logic (the failure of logical posivitism should have warned them), however it IS possible to combine logic with perception like Kant proposed.
I hope this makes sense. I've noticed a lack of philosophical rigor around the discussion of AI epistemology, and it feels like a lot of American philosophy research, being rooted in modern analytical tradition that IMO can't adapt easily to an ontological shift from human to machine as the subject, hasn't really risen to the challenge yet.
Remember, this is about Cartesian duality (mind-brain duality), so the key question here is not whether a brain exists, but whether the mind exists independently of it.
As well, perhaps, worth noting: the claim that because a subset of the observable universe is performing some function, there is some finite or digital mathematical function equivalent to that function, is an assumption; a reasonable assumption, but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values. Is there a Turing machine that can output all the free parameters of the standard model?
Sure, just hard code them.
> As well, perhaps, worth noting: the claim that because a subset of the observable universe is performing some function, there is some finite or digital mathematical function equivalent to that function, is an assumption; a reasonable assumption, but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values.
Things seem to be quantised at a low enough level.
Also: interestingly enough quantum mechanics is both completely deterministic and linear. That means even if it was continuous, you could simulate it to an arbitrary precision without errors building up chaotically.
(Figuring out how chaos, as famously observed in the weather, arises in the real world is left as an exercise to the reader. Also a note: the Copenhagen interpretation introduces non-determinism to _interpret_ quantum mechanics but that's not part of the underlying theory, and there are interpretations that have no need for this crutch.)
Some things. But others are very much not: in particular, space and time are not quantized, and in fact are not even quantizable. A theory in which there is some discrete minimal unit of space (or of spacetime) is trivially incompatible with special relativity, so it is incompatible with quantum mechanics (QFT, specifically).
This is easy to see from the nature of the Lorentz transformation: if two objects are at a distance D = n*min for some observer, they will be at a distance D' = n*min/gamma for some other observer, where the Lorentz factor gamma is always > 1 for an observer moving at speed in a direction aligned with the two objects. So the distance for that second observer will be a non-integer multiple of the minimum distance, so your theory is no longer quantized.
Note that this is separate from the problem with GR-QFT inconsistencies. All of our current theories are based on and only work if spacetime is continuous. While it's not impossible that a new theory with quantized spacetime could exist and work, it's not at all required.
The one thing about spacetime that we do believe might be quantizable, and would have to be quantized for GR and QFT to be compatible, is the curvature of spacetime. But even if spacetime can only be curved in discrete quanta, that would not mean that position would be quantized.
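A quick numeric sketch of that contraction argument (Python; the "minimum length" and the speed are made-up illustrative values):

    import math

    def lorentz_gamma(v, c=299_792_458.0):
        # Standard Lorentz factor; always >= 1 for 0 <= v < c.
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    l_min = 1.0                      # hypothetical minimum length, arbitrary units
    n = 7                            # proper distance is an integer number of quanta
    proper_distance = n * l_min

    # For an observer moving at 0.6c along the line joining the two objects,
    # the measured distance is contracted by 1/gamma.
    gamma = lorentz_gamma(0.6 * 299_792_458.0)
    contracted = proper_distance / gamma

    print(gamma)                # 1.25
    print(contracted / l_min)   # 5.6 -> no longer an integer multiple of l_min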
That would be super lucky if possible - almost all reals are not computable. How would we initialize or specify this Turing machine? Going to use non-constructive methods?
Given how quickly reality unfolds, it's a bit of a stretch to assume that "simulate it to arbitrary precision" means "computable in a digital representation in real time". I mean, unless we have Turing machines that do each computation step n in 2^{-n} time.
The standard model doesn't use arbitrary reals. All the parameters are rational numbers with finite precision.
Obviously, your Turing machine can only hard code a finite amount of information about the parameters, eg whatever finite prefix of their decimal expansion is known.
Btw, the speed of light is one of those parameters that's 'known' with absolute precision thanks to some clever definitions. We can do similar tricks with many of the other parameters, too.
> Personally, I prefer to think about it in terms of basic computability theory:
Gödel's incompleteness theorem applies to computing. I'm sure you're familiar with the Halting Problem. Gödel's theorem applies to any axiomatic system, and the trouble is that it's very hard to make a system without axioms; they are sneaky, and the logic involved is different from any logic you're probably familiar with. And don't forget Church-Turing, Gödel numbers, and all the other stuff. Programming is math, and Gödel did essential work on the theory of computation. It would be weird NOT to include his work in this conversation.
> are we at most just Turing machines or something else?
But this is a great question. Many believe no. Personally I'm unsure, but lean no. Penrose is a clear no but he has some wacky ideas. Problem is, it's hard to tell a bad wacky idea from a good wacky idea. Rephrasing Clarke's Third Law: genius is nearly indistinguishable from insanity. The only way to tell is with time. But look into things like NARS and Super-Turing machines (hypercomputation). There's a whole world of important things that are not often discussed when it comes to AGI. But for those that don't want to dig deep into the math, pick up some sci-fi and suspend your disbelief. Star Trek, The Orville and the like have holographic simulations, and I doubt anyone would think they're conscious, despite being very realistic. But The Doctor in Voyager or Isaac in The Orville are good examples of the contrary. The Doctor is an entity you see become conscious. It's fiction, but that doesn't mean there aren't deep philosophical questions, even if they're wrapped in easy-to-digest entertainment. Good stories are like good horror: they get under your skin, infect you, and creep in.
Edit:
I'll leave you with another question. Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
A Turing machine can in principle beat the Turing test. But so can a giant lookup table, if there's any finite time limit (however generous) placed on the test.
The 'magic' would be in the implementation of the table (or the Turing machine) into something that can answer in a reasonable amount of time and be physically realised in a reasonable amount of space.
Btw, that's an argument from Scott Aaronson's https://www.scottaaronson.com/papers/philos.pdf
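A toy sketch of the lookup-table point (Python; the table here is obviously tiny and hypothetical, the real one would need an entry for every possible transcript up to the time limit and would be astronomically large, which is exactly the physical-realisation problem):

    # Key the table on the entire conversation so far; for a finite-length test
    # there are only finitely many possible transcripts, so in principle a table
    # covers them all. In practice it could never be built or stored.
    lookup_table = {
        "": "Hello, ask me anything.",
        "Hello, ask me anything.\nAre you a computer?": "No. Are you?",
        # ... one entry per possible transcript prefix, up to the time limit ...
    }

    def reply(transcript: str) -> str:
        # The toy table falls back to a canned answer for uncovered transcripts;
        # the idealised table has no uncovered transcripts.
        return lookup_table.get(transcript, "Could you rephrase that?")

    print(reply(""))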
> There's no evidence that hypercomputation
I'm not the right person to ask this tbh. As far as I know it doesn't even work in theory. (Though I don't think that would make it useless to study.)

> can in principle beat the Turing test

What's the relevance of the Turing test? It's been beaten for over half a century.

I would be very interested if you have any sources on anyone beating the Turing test in anything close to Turing's original adversarial formulation.
ELIZA
PARRY
Eugene Goostman
Mitsuku (Kuki AI)
Or see the Loebner Prize, where judges aren't just average people but experts (so harder). The Turing Test was never about intelligence or thinking; Turing himself said so. He made the test because those words are too vague. He specifically wanted to shift the conversation to task completion, since that is actually testable. Great! That's how science works! (Looking at you, String Theory...) But science also progresses. We know more about what these things, intelligence and thinking, are. They still are not testable in a concrete sense, but we have better proxies than the Turing Test now.
The problem is that knowledge percolates through society slowly and with a lag. We've advanced a lot since then. I'm sure you are likely willing to believe that most LLMs these days can pass it. The Turing Test was a great start. You gotta start somewhere. But to think we came up with good tests at the same time we invented electronic computers should surprise you, because it would require us to have been much smarter then than we are now.
Eliza never won that adversarial version even against laymen.
In what sense did Eliza ever 'win' any Turing test?
> I'm sure you are likely willing to believe that most LLMs these days can pass it.
No, I haven't seen any evidence of that.
To repeat: I am interested in evidence that any non-human can beat the Turing test in the original form given in Turing's paper, where you have the judge (human), and two contestants A and B. One of the contestants is a computer, one is a human. Everyone can see what everyone else is writing, and the human contestant can help the human judge. (But the computer can try to fake that 'helping', too.)
Turing specifically wrote: "The object of the game for the third player (B) is to help the interrogator."
I can believe that Eliza has occasionally fooled some random humans, but I can't believe Eliza managed to fool anyone when a third party was around to point out her limitations. (Especially since Eliza ain't smart enough to retaliate and fabricate some 'obvious computer limitations' to accuse the third party of.)
Most LLMs today still have some weaknesses that are easy to point out, if you let your contestants (both kinds) familiarise themselves with both humans and the LLMs in question at their leisure before the test starts.
Just for fun, I just tried out Kuki AI, and it's not going to fool anyone who actually wants to uncover the AI through adversarial cross-examination.
The chat excerpt from 'Eugene Goostman' given in https://en.wikipedia.org/wiki/Eugene_Goostman also suggests that it would fall apart immediately in an adversarial setting with the full three participants.
However, I do agree that we have made progress and that today's LLMs could hold up a lot longer in this harsher setting than anything we had before. Especially if you fine-tuned them properly to remove telltale signs like their inability to swear or their constant politeness.
> But to think we came up with good tests at the same time we invented electronic computers should surprise you.
I never claimed the Turing test is the best test ever, nor even that it's particularly good. I was saying that it hasn't been beaten in its original form.
For example, the Turing test isn't really a fine-grained benchmark that lets you measure and compare model performance to multiple decimal places. Nor was it any good as a guideline for how to improve our approaches.
> In addition, there were two one-time-only prizes that have never been awarded. $25,000 is offered for the first program that judges cannot distinguish from a real human and which can convince judges that the human is the computer program. $100,000 is the reward for the first program that judges cannot distinguish from a real human in a Turing test that includes deciphering and understanding text, visual, and auditory input. The competition was planned to end after the achievement of this prize.
In addition to the detectability problem I wrote about in the adjacent comment, this question can be further refined.
A Turing machine is an abstract concept. Do we need to take into account material/organizational properties of its physical realization? Do we need to take into account computational complexity properties of its physical realization?
Quantum mechanics without Penrose's Orch OR is Turing computable, but its runtime on classical hardware is exponential in, roughly, the number of interacting particles. So, theoretically, we can simulate all there is to simulate about a given person.
But to get the initial state of the simulation we need to either measure the person's quantum state (thus losing some information) or teleport his/her quantum state into a quantum computer (the no-cloning theorem doesn't allow us to copy it). The quantum computer in this case is a physical realization of an abstract Turing machine, but we can't know its initial state.
The quantum computer will simulate everything there is to simulate, except the interaction of a physical human with the initial state of the Universe via photons of the cosmic microwave background. Which may deprive the simulated one of "free will" (see "The Ghost in the Quantum Turing Machine" by Scott Aaronson). Or maybe we can simulate those photons too, I'm not sure about it.
Does all of it have anything to do with consciousness? Yeah, those are interesting questions.
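A back-of-envelope illustration of the exponential cost mentioned above (Python; "two-level systems" standing in for whatever the relevant degrees of freedom actually are):

    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        # A state of n two-level systems needs 2**n complex amplitudes;
        # 16 bytes = one double-precision complex number.
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 50, 100):
        print(n, state_vector_bytes(n))
    # 10  -> 16,384 bytes
    # 50  -> ~1.8e16 bytes (tens of petabytes)
    # 100 -> ~2e31 bytes (hopeless), and a person has vastly more than 100 particles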
[0] See the Taylor series expansion. (This is a physics-only joke.)
Another question. How do you go about detecting whether consciousness has arisen?
Simultaneously, actual dualists flock to his theories because they think association with Penrose lends credibility to their religious stuff.
oh, cogito existo sum! checkmate theists!
dude, the simulation hypothesis does not mean things don't exist, it means they don't necessarily exist in the way you have, rather unimaginatively, imagined, and you have no way to tell.
and Occam's Razor does not solve the problem.
also, falsifiability lacks falsifiability.
don't make up complex axioms and then believe in them on faith.
Occam's razor: if the universe can be simulated, then whatever form the simulation takes is the simpler explanation for the turtles all the way down.
if the universe can't be simulated, explain how you know that.
To pull from the relevant part of Hofstadter’s incredible I am a Strange Loop (a book also happens to more rigorously invoke Gödel for cognitive science):
And this is our central quandary. Either we believe in a nonmaterial soul that lives outside the laws of physics, which amounts to a nonscientific belief in magic, or we reject that idea, in which case the eternally beckoning question "What could ever make a mere physical pattern be me?”
After all, a phrase like "physical system" or "physical substrate" brings to mind for most people… an intricate structure consisting of vast numbers of interlocked wheels, gears, rods, tubes, balls, pendula, and so forth, even if they are tiny, invisible, perfectly silent, and possibly even probabilistic. Such an array of interacting inanimate stuff seems to most people as unconscious and devoid of inner light as a flush toilet, an automobile transmission, a fancy Swiss watch (mechanical or electronic), a cog railway, an ocean liner, or an oil refinery. Such a system is not just probably unconscious, it is *necessarily* so, as they see it. This is the kind of single-level intuition so skillfully exploited by John Searle in his attempts to convince people that computers could never be conscious, no matter what abstract patterns might reside in them, and could never mean anything at all by whatever long chains of lexical items they might string together.
Highly recommend it for anyone who liked Gödel, Escher, Bach, but wants more explicit scientific theses! He basically wrote it to clarify the more artsy/rhetorical points made in the former book.

It's accurate. But it feels really weird.
It's not uncommon for great scientists to be totally out of their depth even in nearby fields, and not realize it. But this isn't the hard part of either computability or philosophy of mind.
Penrose is an authority in some fields of theoretical physics, but that doesn't give any value to what he has to say on consciousness or AI.
On that topic, he has clearly adopted an unscientific approach: he wants to believe the soul exists and is immaterial and seeks evidence for it.
It’s possible that we haven’t found a way to express your thinking function digitally, which I think is true, but I have a feeling that the complexity of thought requires the analog-ness of our brains.
Another aspect of ANNs that shows Gödel doesn't apply is that they are not formal systems. A formal system is a collection of defined operations. The building blocks of ANNs could perhaps be built into a formal system. Petri nets have been demonstrated to be computationally equivalent to Turing machines. But this is really an indictment of the implementation. It's the same as using your PC, implementing a formal system like its instruction set, to run a heuristic computation. Formal systems can implement informal systems.
I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
As for humans, there is no way you can look at the behavior of a human and know for certain it is not a Turing machine. With a large enough machine, you could simulate any behavior you want, even behavior that would look, on first observation, to not be coming from a Turing machine; this is a form of the halting problem. Any observation you make that makes you believe it is NOT coming from a Turing machine could be programmed to be the output of the Turing machine.
This is not exactly true, depending on what you mean by behavior. There are mathematical functions we know for a fact are not computable by a Turing machine, no matter how large. So a system that "behaves" like those functions couldn't be simulated by a TM. However, it's unclear whether such a system actually could exist in physical reality - which gets right back to the discussion of whether thinking is beyond Turing completeness or not.
Incorrect.
The comment above confuses some concepts.
Perhaps this will help: consider a PRNG implemented in software. It is an algorithm. The question of the utility of a PRNG (or any algorithm) is a separate thing.
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
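To make the PRNG point concrete, a minimal linear congruential generator (Python): the output is a pure function of the seed, which is the sense in which it never "transcends" the algorithm, however random it looks.

    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        # Classic linear congruential generator: x_{k+1} = (a*x_k + c) mod m.
        x, out = seed, []
        for _ in range(n):
            x = (a * x + c) % m
            out.append(x)
        return out

    print(lcg(42, 3))
    print(lcg(42, 3))   # identical output: same seed, same "random" numbers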
2. AI just means “non human” intelligence. An AI system (of course) can incorporate various sources of entropy, including sensors. This is already commonly done.
On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
I suspect maybe at that level you can think of it as an algorithm with unreliable outputs. I don’t know what that idea gains over thinking it’s not algorithmic and just a heuristic approximation.
https://xlinux.nist.gov/dads/HTML/heuristic.html
https://xlinux.nist.gov/dads/HTML/millerRabin.html
https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality...
It is hard to assess the comment above. Depending on what you mean, it is incorrect, inaccurate, and/or poorly framed.
The word “really” is a weasel word. It suggests there is some sort of threshold of understanding, but the threshold is not explained and is probably arbitrary. The problem with these kinds of statements is that they are very hard to pin down. They use a rhetorical technique that allows a person to move the goal posts repeatedly.
This line of discussion is well covered by critics of the word “emergence”.
It is a self-delimiting program. It is an algorithm in the most basic sense of the definition of “partial recursive function” (total in this case) and thus all known results of computability theory and algorithmic information theory apply.
> A formal system is a collection of defined operations
Not at all.
> I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
We have zero evidence of this one way or another.
—
I’m looking for loopholes around Gödel’s theorems just as much as everyone else is, but this isn’t it.
Physicists like to use mathematics for modeling the reality. If our current understanding of physics is fundamentally correct, everything that can possibly exist is functionally equivalent to a formal system. To escape that, you would need some really weird new physics. Which would also have to be really inconvenient new physics, because it could not be modeled with our current mathematics or simulated with our current computers.
So if Gödel tells us that formal systems will either be consistent but make true statements they cannot prove, or be inconsistent and therefore unreliable, at least to some degree, then surely informal systems will, at best, be the same, and, at worst, be much worse?
The desirable property of formal systems is that the results they produce are proven in a way that can be independently verified. Many informal systems can produce correct results to problems without a known, efficient algorithmic solution. Lots of scheduling and packing problems are NP-complete but that doesn't stop us from delivering heuristic based solutions that work well enough.
Edit: I should probably add that I'm pretty rusty on this. Gödel's theorem tells us that if a formal system is consistent, it will be incomplete. That is, there will be true statements that cannot be proven in the system. If the system is complete, that is, all true/false statements can be proven, then the system will be inconsistent. That is, you can prove contradictory things in the system.
AI we have now isn’t really either of these. It’s not working to derive truth and falsehood from axioms and a rule system. It’s just approximating the most likely answers that match its training data.
All of this has almost no relation to the questions we’re interested in like how intelligent can AI be or can it attain consciousness. I don’t even know that we have definitions for these concepts suitable for beginning a scientific inquiry.
"Thinked information" is a colour not an inherent property of information. The fact that information has been thought is like the fact it is copyrighted. It is not something inherent to the information, but a property of its history.
This is a big assumption. I'm not saying it's wrong, but I am saying it's not reasonable to just handwave and assume that it's right.
Where does this question come from? Especially where does the 'smaller' requirement come from?
You’re arguing that we know artificial reasoning exists because we are capable of reasoning. This presupposes that reasoning is computable and that we ourselves reason by computation. But that’s exactly what Penrose is saying isn’t the case - you’re saying we’re walking Turing machines, we’re intelligent, so we must be able to effectively create copies of that intelligence. Penrose is saying that intelligence is poorly defined, that it requires consciousness which is poorly understood, and that we are not meat-based computers.
Your last question misses the point completely. "If we are something else, then our CoT won't be computable…" It's like you're almost there but you can't let go of "we are meat-machines, everything boils down to computation, we can cook up clones". Except "basic computability theory" says that's not even wrong.
What's more, whatever you like to call the transforming of information into thinked information by definition cannot be a (mathematical) function, because it would require all people to process the same information in the same way, and this is plainly false.
No this isn't the checkmate you think it is. It could still be a mathematical function. But every person transforming information into "thinked information" could have a different implementation of this function. Which would be expected as no person is made of the same code (DNA).
Note that using words like "function" and "mathematical" are more the biases of computer science/Penrose while philosophy of the mind & psychology has more typically used slightly different ideas of "attitudes" and "events" to guide the discussion. I don't think this really radically shifts many central disputes, except (which Penrose might view as a critical "except") for all the funny business quantum mechanics / quantum computation can potentially bring in and the (undisputed) physicality of our brains and (also undisputed) lack of understanding of many details of biological computation.
FWIW human brain does indeed consume a lot of energy, accounting for over 20% of our body metabolism. We don't know how to attribute specific watts consumed to specific thoughts because we don't understand the functioning of the brain enough, but there's no obvious reason why it shouldn't be possible.
It's not entirely clear, though, that the universe is deterministic- our best experiments suggest there is some remaining and relevant nondeterminism.
Turing machines, Goedel incompleteness, Busy Beaver Functions, and (probably) NP problems don't have any relevance to simulating complex phenomena or hard problems in biology.
What you said sounds good, but I don't think it's philosophically robust.
read this as: literally creating gravity by simulating it hard enough.
I'm not really arguing with you, i just think if i simulate entropy (entropic processes, "CSRNG", whatever) on my computer ...
That's not obviously true, especially given how the more we keep digging into physics, the more everything seems to be "just information".
We don't know if consciousness is computable, because we don't know what consciousness is.
There are suggestions it isn't even local, never mind Turing-computable.
Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
And in any case: any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable. So I’m not convinced I could be convinced without a computable proof.
And finally just like computable numbers are dense in the reals, maybe computable thoughts are dense in transcendence.
His intent at the time was to open a physical explanation for free will by taking recourse to quantum effects in microtubules magnifying true randomness to the level of human cognition. As much as I'm also skeptical that this actually moves the needle on whether or not we have free will (...vs occasionally having access to statistically-certain nondeterminism? Ok...) the computable stuff was just in service of this end.
I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation. Either that or this is an ideological commitment.
Kind of sad—penrose tilings hold a special place in my heart.
Free will is a useful abstraction. Just like life and continuity of self are.
> I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation.
Allowing approximations or "I don't know" is what's helpful. The bpf verifier can work despite the halting problem being unsolvable, not because it makes guesses (uses heuristics) but because it's allowed to lump in "I don't know" with "no".
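A tiny sketch of that move (Python; nothing to do with the actual bpf verifier internals): answer "terminates", "diverges", or "unknown", and have the policy layer treat "unknown" as a rejection.

    def toy_termination_check(loop_bound):
        # Accept only loops whose bound is a known non-negative constant;
        # everything else is "unknown". No guessing, so no clash with the
        # undecidability of the general halting problem.
        if isinstance(loop_bound, int) and loop_bound >= 0:
            return "terminates"
        return "unknown"

    def verifier_accepts(loop_bound):
        # Lump "unknown" in with "no": sound but deliberately incomplete.
        return toy_termination_check(loop_bound) == "terminates"

    print(verifier_accepts(10))        # True: provably bounded
    print(verifier_accepts("n * m"))   # False: can't prove it, so reject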
Thus why intuitive algorithms change everything, even if they're useless on their own...
I think it’s more useful to think of them as language games (in the Wittgenstein sense) than abstractions.
I don't think free will exists because I don't think supernatural phenomena exist, and there's certainly no natural explanation for free will (Penrose was correct about that). But I have a very non-nihilistic view on things [1].
That's Penrose's old criticism. We're past that. It's the wrong point now.
Generative AI systems are quite creative. Better than the average human at art. LLMs don't have trouble blithering about advanced abstract concepts. It's concrete areas where these systems have trouble, such as arithmetic. Common sense is still tough. Hallucinations are a problem. Lying is a problem. None of those areas are limited by computability. It's grounding in the real world that's not working well.
(A legit question to ask today is this: We now know how much compute it takes to get to the Turing test level of faking intelligence. How do biological brains, with such a slow clock rate, do it? That was part of the concept behind "microtubules". Something in there must be running fast, right?)
Nah. It just needs to be really wide. This is a very fuzzy comparison, but a human brain has ~100 trillion synaptic connections, which are the closest match we have to "parameters" in AI models. The largest such models currently have on the order of ~2 trillion parameters. (edit to add: and this is a low end estimate of the differences between them. There might be more stuff in neurons that effectively acts as parameters, and should be counted as such in a comparison.)
So AI models are still something like 50x off from humans in pure width, and plausibly two or more orders of magnitude given the caveat above. In contrast, they run much, much faster.
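The raw arithmetic behind that comparison (Python; both numbers are the rough estimates above, not measurements):

    synapses = 100e12    # ~100 trillion synaptic connections (rough estimate)
    llm_params = 2e12    # ~2 trillion parameters in the largest current models

    print(synapses / llm_params)   # 50.0 -> tens of times wider, before counting
                                   # whatever extra state inside neurons also
                                   # effectively acts as parameters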
We wouldn’t call the air creative, right? Or if we do, we must conclude that creativity doesn’t require consciousness.
Given that we struggle with even a basic consensus about which humans are better at art than others, I don't think this sentence carries any meaning whatsoever.
And much lower energy expenditure.
A brain consumes something like 20 W while working.
Gestures broadly at humanity
If so then it really comes down to believing something not because you can prove it but because it is true.
I’m just a mediocre mathematician with rigor mortis. So I won’t be too hard on Penrose.
This is a fallacy. Just because you need to serialize a concept to communicate it doesn't mean the concept itself is computable. This is established and well proven:
https://en.wikipedia.org/wiki/List_of_undecidable_problems
The fact that we can come up with these kinds of uncomputable problems is a big plus in support of Penrose's idea that consciousness is not computable and goes way beyond computability.
How you communicate it does not alter the nature of the problem.
You might want to consider doing a bit of meditation... anyone who describes their thoughts as 'serialized' and 'obviously countable' has not spent much time actually looking at their thoughts.
Are you aware of how little of modern mathematics has been formalised? As in, properly formalised on a computer. Not just written up into a paper that other mathematicians can read and nod along to.
Mathematics might seem very formal and serialised (and it is, compared to most other human endeavours) but that’s actually quite far from the truth. Really, it all exists in the mind of the mathematician and a lot of it is hard, if not currently impossible, to pin down precisely enough to enter into a formal system.
I think you probably do understand some things ‘transcendently’! Almost by definition they’re the things you’re least aware of understanding.
It's harder (for me) to see how it's possible to say that pain is just a way of describing things, i.e. that there's in principle no difference between feeling pain and computing a certain function.
i would be a bit more aggressive: Penrose asserts without evidence
Remember - there is no such thing as an objective consciousness meter.
Emulating the behaviours we associate with consciousness - something that still hasn't been achieved - solves the problem of emulation, not the problem of identity.
The idea that an emulation is literally identical to the thing it emulates in this instance only is a very strange belief.
Nowhere else in science is a mathematical model of something considered physically identical and interchangeable with the entity being modelled.
you can make the argument that everything in science is a mathematical model... if you measure a basketball arcing through the sky, you are not actually privy to any existential sensing of the ball, you are proxying the essence of the basketball using photons, and even collecting those photons is not really "collecting those photons", etc.
You needn't be a genius. Go on a few vipassana meditation retreats and your perception of all this may shift a bit.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Hence the suggestion by all mystical traditions that truth can only be experienced, not explained.
It may be possible for an AI to have access to the same experiences of consciousness that humans have (around thought, that make human expressions of thought what they are) - but we will first need to understand the parts of the mind / body that facilitate this and replicate them (or a sufficient subset of them) such that AI can use them as part of its computational substrate.
We gotta stop making infallible super heroes/geniuses of people.
In this particular case, Penrose is a convinced dualist and his theories are unscientific. There are very good reasons to not be a dualist, a minority view in philosophy, which I would encourage anyone to seek if they want to better understand Penrose's position and where it came from.
This isn’t an example of physicist stumbling into a new field for the first time and saying “oh that’s an easy problem. you just need to…”
The ideas of a very smart person who has spent decades thinking about a problem tend to be very valuable even if you don’t agree with them.
IIRC, his Goedel argument against AI is that someone could construct a Goedel proposition for an intelligent machine which that machine could reason its way through to hit a contradiction. But, at least by default, humans don't base their epistemology on such reasoning, and I don't see why a conscious machine would either. It's not ideal, but frankly, when most humans hit a contradiction, they usually just ignore whichever side of the contradiction is most inconvenient for them.
Most of the objections have been covered in his book "Shadows of the Mind".
Also, the fact that most human behaviour is not about deducing theorems isn't relevant as that is used as a counterexample which attacks the 'computers can simulate humans' hypothesis. This particular behaviour is chosen, as it is easy to make reflective arguments precise.
Secondly, the issue is not being a genius, but an ability to reflect. What can be shown, uncontroversially, is that for a formal computer system which is knowably correct, a human (or indeed a machine which is not the original system) can know something (like a mathematical theorem) which is not accessible to the system. This is due to a standard diagonalization argument used in logic and computability.
The important qualifier is 'knowably correct' which doesn't apply to LLMs which are famous for their hallucinations. But, this is not a solid argument for LLMs being able to do everything that humans can do. Because correctness need not refer to immediate outputs, but outputs which are processed through several verification systems.
We know that the problem of deciding if a program halts on some input is undecidable via computation.
But suppose we have a partial program P(C,I), to decide halting/not-halting of a program C on an input I, with errors, including input not being of the right type, counting as a halt.
For instance, P sees a loop at the start of the program for which there is no exit, or it sees that the program is trying to find a prime which ends with 4. It marks these cases as non-halting. Halting cases can always be marked by P as halting by just running the code in one thread and waiting to see if it halts. The third option is non-halting programs which P does not detect as such; in this case P itself runs forever.
Now given P, it is possible to construct P', a program which detects all the non-halting programs that P does, but at least one more non-halting case as well. To do this consider R, with R(C) = not(P(C,C)), which takes an input program C and asks P 'does C halt with input the source of C itself?'. Then R reverses P's decision. If P says halt, then R does not halt on C. If P says not-halt, R halts, and finally if P is undecided (so P doesn't stop), R again doesn't halt.
What about R(R), R applied to itself? P(R,R) can't be decisive, as we would get a contradiction. But in fact R(R) doesn't halt, as all halts are detected by P.
So, P' is basically P with the extra detection - R(R) doesn't halt.
This proves the weaker claim of Penrose
'No knowably correct program P, can simulate human behaviour' as we can see that if P is right, P' is also right.
The 'knowably correct' might seem like an escape hatch, but it is actually hard to split the difference. Simulating humans necessarily means simulating a group of humans who are also doing verification of their initial results with each other and proof-assistants.
Also, going from P to P' is a computable process, but if you try to add this to P, you get a new program with a new limitation.
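For concreteness, a toy rendering of the construction in Python (the "analysis" inside P is deliberately trivial; it is only meant to show the shape of the argument, not to be a serious analyser):

    def P(source, input_source):
        # Partial halting checker over program source strings. It knows exactly
        # one non-halting pattern; otherwise it just runs the program, so on a
        # genuinely non-halting input P itself never returns.
        if "while True: pass" in source:
            return "does not halt"
        env = {"P": P}
        exec(source, env)                  # defines program(x)
        env["program"](input_source)       # may never return
        return "halts"

    # R asks P "does this program halt on its own source?" and does the opposite.
    R_SOURCE = (
        "def program(source):\n"
        "    if P(source, source) == 'halts':\n"
        "        while True:\n"
        "            pass  # P said halt, so R refuses to halt\n"
        "    # P said 'does not halt': R halts immediately\n"
    )

    # P(R_SOURCE, R_SOURCE) cannot answer correctly: whatever it says, R does the
    # opposite. In fact R(R) does not halt (P ends up chasing it forever; in real
    # Python the recursion would just blow the stack), and that single extra fact,
    # "R(R) does not halt", is what P' adds to P.
    looper = "def program(x):\n    while True: pass\n"
    print(P(looper, looper))               # "does not halt": the case P can see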
If they prove it then they have either shown that the idea is not transcendent or that Gödel's theorem is false.
That's the same as saying "I know the answer, when you are speculating"
But in the Penrose argument, we can start from a true system and use reflection to arrive at another true statement which is not deducible from the original system.
This is important to the argument as one starts with a proposed program which can perform mathematical reasoning correctly and is not just a random generator. Then, the inability to see the new statement is a genuine limitation.
How is it something that a computer cannot do? It seems to be just convincing yourself.
Gödel's theorem itself does not help you here, because you are trying to identify an undecidable but true statement. Gödel only showed that there are undecidable true statements, not what they are.
https://news.ycombinator.com/item?id=43257904
This construction is something that a computer can do, but not the original system itself. Once you augment the system, there is now a new statement it can't see, and so on. (So on, here, involves ordinals.)
Many an obvious truth turns out to be mendacious. See various counter examples in analysis.
Yes. He has also written books about it.
He has been very clear in that he claims exactly that and that there is Quantum Mechanics (wave function collapse in particular) involved.
I personally think he's probably wrong, but who really knows?
He and some other guy claim microtubules maintain coherence in the brain.
The argument that consciousness can't be computable seems like a stretch as well.
Here is one more thing to consider. All consciousness we can currently observe is embodied; all humans have a body and identity. We can interact with separate people corresponding to separate consciousnesses.
But if computation is producing consciousness, how is its identity determined? Is the identity of the consciousness based on the set of chips doing the computation? Is it based on the algorithms used (i.e., running the same algorithm anywhere animates the same consciousness)?
In your example, if we say that consciousness somehow arises from the computation the man performs itself, then a question arises: what exactly is conscious in this situation? And what are the boundaries of that consciousness? Is it the set of rocks as a whole? Is it the computation they are performing itself? Does the consciousness have a demarcation in space and time?
There are no satisfying answers to these questions if we assume mere computation can produce consciousness.
Even with humans there are cases where this breaks down to some extent. E.g. in the case of multiple personality disorder, how many distinct streams of consciousness are there, and should we consider them distinct identities?
To me the only explanation for consciousness I find appealing is panprotopsychism.
If you say it's all about the feelings and machines can't feel that way then it gets rather vague and hard to reason about. I mean they don't have much in the way of feelings now but I don't see why they shouldn't in the future.
I personally feel both those aspects of consciousness are not woo but the results of mechanisms built by evolution for functional purposes. I'm not sure how they could have got there otherwise, unless you are going to reject evolution and go for divine intervention or some such.
This claim requires proof. When I consider such a universe, it is obvious to me that such a universe would contain entities capable of experiencing it.
> If you consider consciousness an emergent property of sufficiently organized matter, then a world you describe is literally impossible.
You are applying the word "impossible" to reject a perfectly valid consideration. It seems to be the case that in our universe, sufficiently organised matter can become conscious, but we can easily imagine one without this property. If you imagine a universe that only follows the laws of physics as we currently understand them, then it would not be the case. The universe would just be atoms bouncing off each other. The idea that these atoms can come together to produce something which is more than a soulless automaton is an addition you are making to the laws of physics, not something present within them. We can imagine a world without this addition, just as we can imagine a world where gravity repels instead of attracting. We can see examples all over the place of things which act to approximate consciousness but are not conscious. It is no stretch to imagine a universe where this is a continuous scale, and consciousness doesn't magically emerge at some point along it.
There is a good reason why most people consider consciousness arising from the laws of physics as we currently understand them to be totally untenable. According to the known physics, the human mind is essentially just an advanced computer. To say that consciousness arises in any advanced computer is highly problematic. Take for instance this[1] popular comic. Do you honestly believe that this endless field of rocks could experience the world as you do if a man were to move them around in the right way?
It is only "totally untenable" if you have a preconceived notion that humans are somehow special and not subject to the laws of physics. In other words, if you really want non-material souls to exist.
And yes, I do honestly believe that this endless field of rocks would "experience the world" if someone were to move them around in the right way. Although that is not entirely correct - whatever it is that is moving them in this way should also be considered a part of the overall system, and it is that system that would experience consciousness.
This is not a common view, I'll just say that much. What happens if the man stops moving the rocks? Does the universe die? And if he starts again later? Is the universe conscious only in the instants where the rocks move or only when the man observes the outcome? My experience of consciousness doesn't seem compatible with something a bunch of rocks could feel if you moved them in the right way. And note importantly that this is something the rocks are doing on their own. The man could be moving them according to some set of rules which he doesn't understand, therefore the simple act of rocks moving creates life. I'm honestly astounded that you don't have anything in your experience of the world that you think wouldn't be felt by rocks that shift around in the right way.
> In other words, if you really want non-material souls to exist.
I don't want them to exist, I simply observe them existing. My experience of consciousness is not explained by any physical process. Physics can explain beings which say they are conscious, but who hears me say it? A universe could be inhabited solely by chat bots, talking to each other and claiming to be conscious, but there would be no one to observe that happening.
Where would the placebo effect fit in this thought experiment?
> a grid of rocks that have two sides, a dark and light side and he has a small book
Where did the book come from?
I think this means that "AGI" is limited as we are. If we build a machine that proves all true statements then it must use inconsistent rules, implying it is not a machine we can understand in the usual sense. OTOH, if it is using consistent rules (that do not contain contradiction) then it cannot prove all true statements, so it is not generally intelligent, but we can understand how it works.
I agree with Dr. Penrose about the misnomer of "artificial intelligence". We ought to be calling the current batch of intelligence technologies "algebraic intelligence" and admitting that we seek "geometric intelligence" and have no idea how to get there.
When I think about understanding, in principle I require consistency not completeness. In fact, understandability is predicated on consistency in my view.
If I liken the quest for AGI to the quest for human flight, wherein we learned that the shape of the wing provides nearly effortless lift, while wing flapping only provides a small portion of the lift for comparatively massive energy input, then I suspect we are only doing the AGI equivalent of wing flapping at this point.
To return to my previous analogy, algebraic intelligence is wing flapping while geometric intelligence is the shape of the wing. The former is arduous, time consuming, and energy inefficient, while the latter is effortless, and unreasonably effective.
I haven't found anything on the web about the term "geometric intelligence". I recommend you write about this.
Anything emerging out of a collective of humans interacting and reasoning (or interacting without reasoning or flawed reasoning) the AIs (plural) will eventually be able to do.
Only thing is machine kind does not need sleep, does not get tired, etc, so it will fail to fully emulate human behavior, with all the pros and cons of that for us to benefit from and deal with.
I'm not sure what is the point of a theoretical discussion beyond this.
Whether or not there is some magic that makes humans super special really has no bearing on whether or not we can make super duper powerful computers that can be given really hard problems.
In my view it's inevitable that we'll build an AI that is more capable than a human. And that the AI will be able to build better computers, and write better software. That's the singularity.
> In my view it's inevitable that we'll build an AI that is more capable than a human.
Seems pretty likely, with a big uncertainty on timeframe.
> And that the AI will be able to build better computers, and write better software. That's the singularity.
This could happen, but I don't agree it's an inevitable consequence of the first.
Basically we aren't up to "Do Androids Dream of Electric Sheep?" so far.
A Google search yields "three-dimensional tissues that mimic the human brain and are grown in a lab"
The distinction between natural and what is man-made (aka artificial) is itself artificial.
We are learning how to recreate nature, whether from silicon or in 3D tissue or ab initio (google "synthetic life")
Our minds and consciousness do not fundamentally use linear logic to arrive at their conclusions, they use constructive and destructive interference. Linear logic is simulated upon this more primitive (and arguably superior) cognition.
It is true that any outcome of any process may be modeled in serialized terms or computational postulations, this is different than the interference feedback loop used by intelligent human consciousness.
Constructive and destructive interference is different and ultimately superior to linear logic on many levels. Despite this, the scalability of artificial systems may very well easily surpass human capabilities on any given task. There may be an arguable energy efficiency angle.
Constructive/destructive interference builds holographic renderings which work sufficiently when lacking information. A linear logic system would simulate the missing detail from learned patterns.
Constructive/destructive interference does not require intensive computation
An additive / reduction strategy may change the terms of a dilemma to support a compromised (or alternatively superior) “human” outcome which a logic system simply could not “get” until after training.
There is more, though these are a worthy start.
And consciousness is the inflection (feedback reverberation if you like) upon the potential of existential being (some animate matter in one’s brain). The existential Universe (some part of matter bound in the neuron, those micro-tubes perhaps) is perturbed by your neural firings. The quantum domain is an echo chamber. Your perspectives are not arranged states, they are potentials interfering.
Also, “you all” get intelligence and “will” wrong. I’ll pick that fight on another day.
Anyway, I’m not really sure where Penrose is going with this. As a summary, incompleteness theorem is basically a mathematical reformulation of the paradox of the liar - let’s state this here for simplicity as “This statement is a lie” which is a bit easier than talking about “ All Cretans are liars”, which is the way I first heard it.
So what’s the truth value of “This statement is a lie”? It doesn’t have one. If it’s false, then it’s true. But if it’s true, then it must be false. The reason for this paradox is that it’s a self-referential statement: it refers to its own truth value in the construction of its own truth value, so it never actually gets constructed in the first place.
You can formulate the same sort of idea mathematically using sets, which is what Gödel did.
Now, the thing about this is that as far as I am aware (and I’m open to be corrected on this) this never actually happens in reality in any physical system. It seems to be an artefact of symbolic representation. We can construct a series of symbols that reference themselves in this way, but not an actual system. This is much the same way as I can write “5 + 5 = 11” but it doesn’t actually mean anything physically.
The closest thing we might get to would be something that oscillates between two states.
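(To make that "oscillates between two states" remark concrete: if you try to compute the liar sentence by repeatedly applying its own defining rule, it just flips forever.)

    value = True
    for step in range(6):
        value = not value        # "this statement is a lie": my value is not my value
        print(step, value)       # False, True, False, ... never settles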
We also ourselves, don’t have a good answer to this problem as phrased. What is the truth value of “This statement is a lie”? I have to say “I don’t know” or “there isn’t one” which is a bit like cheating. Am I incapable of consciousness as a result? And if I am indeed conscious instead because I can make such a statement instead of simply ”True” or “False”, well I’m sure that an AI can be made to do likewise.
So I really don’t think this has anything to do with intelligence, or consciousness, or any limits on AI.
I think your understanding of the incompleteness theorem is a little, well, incomplete. The proof of the theorem does involve, essentially, figuring out how to write down "this statement is not provable" and using liar-paradox-type-reasoning to show that it is neither provable nor disprovable.
But the incompleteness theorem itself is not the liar paradox. Rather, it shows that any (consistent) system rich enough to express arithmetic cannot prove or disprove all statements. There are things in the gaps. Gödel's proof gives one example ("this statement is not provable") but there are others of very different flavors. The standard one is consistency (e.g. Peano arithmetic alone cannot prove the consistency of Peano arithmetic, you need more, like much stronger induction; ZFC cannot prove the consistency of ZFC, you need more, like a large cardinal).
And this very much does come up for real systems, in the following way. If we could prove or disprove each statement in PA, then we could also solve the halting problem! For the same reason there's no general way to tell whether each statement of PA has a proof, there's no general way to tell whether each program will halt on a given input.
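A sketch of that reduction in Python-flavoured pseudocode; `decides_pa` is a purely hypothetical oracle (no such total procedure can exist, which is the point), and the sentence string is just a stand-in for the standard arithmetic encoding of machine runs.

    def decides_pa(sentence):
        # Hypothetical: returns True if the PA sentence is provable, False if its
        # negation is provable. A complete, consistent, effectively axiomatised PA
        # would make this implementable by brute-force proof search.
        raise NotImplementedError("no such oracle exists")

    def halts(program_source, program_input):
        # "Program p halts on input i" is expressible in arithmetic via a standard
        # encoding of machine configurations into numbers.
        sentence = f"EXISTS t. HaltsWithin({program_source!r}, {program_input!r}, t)"
        return decides_pa(sentence)

    # If decides_pa existed, halts() would solve the halting problem. It can't,
    # so PA (or any such system) must leave some statements unsettled.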
It set off the flamewar detector. I've turned that off now.
Which operation can computers (including quantum computers) not perform, that human neurons can? If there is no such operation, then a human-brain-equivalent computer can be built.
>it is argued that the human mind cannot be computed on a Turing Machine... because the latter can't see the truth value of its Gödel sentence, while human minds can
And the debunk is that both Penrose and an LLM can say they see the truth value and we have no strong reason to think one is correct and the other is wrong. Either of both could be confused. Hence the argument doesn't prove anything.
Having read about Penrose's positions before, this is indeed what is he proposing in a roundabout way: that there is an origin to "consciousness" that is for all intents and purposes metaphysical. In the past he pushed the belief that micro-tubules in the brain (which are a structural component of cells) act like antennas that receive cosmic consciousness from the surrounding field.
In my opinion this is also Penrose's greatest sin: using his status as a scientist to promote spiritual opinions that are indistinguishable from quantum woo disguised as scientific fact.
Like I said I have no business talking about philosophy or spiritualism. However, since you asked: that's not at all what I meant. In fact, it's the opposite way around. I'm of the opinion just because we don't know something, this shouldn't give people a license to invent things from whole cloth and assert them as facts (which is exactly what Penrose does).
We're still waiting on proof of anything supernatural, and explaining things with materialism has served us super well. It's not unreasonable to assume it's going to continue to be a good tool for understanding the world.
I believe Penrose's core argument fits the description of a rhetorical device called argument from incredulity. He is incredulous how "consciousness" could ever arise from mere molecules interacting with each other. To me, everything he built up on top of this is tantamount to intellectual dishonesty, but I acknowledge that this is born out of a certain bias on my end.
1. Computers cannot self-rewire like neurons
2. No computer operates with the brain’s energy efficiency
3. Human learning is continuous and unsupervised, which is not possible for any computer
Computers don't need to "rewire" themselves, since neurons aren't implemented directly in hardware. When you do RLHF, the parameters inside the model are "rewired" in the sense that is relevant for the purpose of this discussion.
> No computer operates with the brain’s energy efficiency
No existing human-made computer operates with the brain's energy efficiency, true. But the premise isn't about specific computers, but computers in general. There's no reason to believe that a computer operating with the same efficiency is impossible. The efficiency of the human brain is still well within the limit imposed by thermodynamics, and everything above that limit is, in principle, possible.
> Human learning is continuous and unsupervised, which is not possible for any computer
This is just plainly not true. Continuous learning with existing LLMs is trivial (just too expensive to actually bother). Unsupervised learning is literally how LLMs are trained initially.
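For what it's worth, a minimal toy of that claim (PyTorch, toy model and fake data; not a real training recipe): the loop below has no labels and no separate "training phase", it just keeps applying the same next-token objective to whatever arrives.

    import torch
    import torch.nn as nn

    vocab, dim = 256, 32
    model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def continual_update(token_stream):
        # Unsupervised next-token objective: predict token t+1 from token t.
        logits = model(token_stream[:-1])
        loss = loss_fn(logits, token_stream[1:])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    for _ in range(3):                                   # "forever", in spirit
        new_experience = torch.randint(0, vocab, (64,))  # stand-in for lived input
        print(continual_update(new_experience))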
Neural synapses can physically grow, shrink, change receptor densities, and form new pathways dynamically and autonomously at multiple timescales.
The brain adapts neuron-by-neuron based on local conditions (e.g. a single neuron strengthens its connection based on local neurotransmitter activity), RLHF adjusts millions of parameters in bulk, requiring external training loops, gradient descent, and centralized loss functions — nothing like self-rewiring at an individual unit level.
>There's no reason to believe that a computer operating with the same efficiency is impossible.
Theoretically possible, yes, but no current computational paradigm operates with anywhere near the efficiency of biological neurons; there is no reason to believe it will change in any foreseeable future. If you think we know even 1% of the laws which hold it all together, well, you are very human and also a big optimist.
>This is just plainly not true. Continuous learning with existing LLMs is trivial
The claim isn’t about feasibility but about how continuous learning in AI is fundamentally different from human learning. AI models cannot learn continuously in the real world without external fine-tuning steps, while human brains update themselves every moment through lived experience, without distinct training phases. While LLMs use large-scale unsupervised pretraining, their architecture is designed by humans with carefully curated fine-tuning strategies.
Humans learn language without needing structured datasets and token probabilities — just by hearing and experiencing the world. Machines simulate learning, humans experience it. The difference isn’t just scale, but nature.
Compute the operation of that feeling.
In other words, computing that feeling is equally mysterious whether it is done by neurons, or by transistors.
[1] There are attempts, like vague implications it has something to do with information processing - but that is not actually defining what it is, just what it is associated with and how it might arise. There are other problems with these attempts, such as the fact that the weather can be thought of as an "information processing" system, reacting to changes in pressure and humidity and temperature... so is it conscious? But that is tangential.
Shadows Of The Mind - Roger Penrose - published 1994
https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument
Perhaps Penrose is right about the nature of intelligence and the fact that computers cannot ever achieve that (for some tight definition of the term). But in a practical sense, these LLMs that are popular are doing things that we generally considered "intelligent". Perhaps it's faking it well but it's faking it well enough to be useful and that's what people will use. Not the theoretical definition.
Perhaps you can explain your point in a different way?
Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
Not in a way that would make the problem of the non-computability of Turing machines apply.
> Perhaps you can explain your point in a different way?
An LLM is not a logic program finding a perfect solution to a problem; it's a statistical model for finding the next most likely word. The model code does not solve a (let's say) NP problem to find the solution to a puzzle; the only thing it is doing is finding the next best possible word through statistical models built on top of neural networks.
This is why I think Gödel's theorem doesn't apply here, as the LLM does not encode strict and correct logical or mathematical theorem, that would be incomplete.
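That "next best possible word" step is, mechanically, just something like this (a bare-bones sketch; real models compute the scores with billions of parameters):

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        """Turn raw scores over a vocabulary into probabilities and sample one
        token -- no theorem proving, just weighted dice."""
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # e.g. scores over a 5-word vocabulary (values invented for illustration)
    print(sample_next_token(np.array([2.0, 0.5, -1.0, 0.1, 1.3])))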
> Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
I agree with you, though I had a different angle in mind.
> You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
> Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
Thank you, that's food for thought.
Events are either caused, or uncaused. Either can be causes. Caused events happen because of the cause. Uncaused events are by definition random. If you can detect any real pattern in an event you can infer that it was caused by something.
Relying on decision making by randomness over reasons does not seem to be a good basis of free will.
If we have free will it will be in spite of non-determinism, not because of it.
I'm not sure I can follow... what exactly is decoding/encoding if not using logical and mathematical rules?
That's why I see it as not bounded by computability: an LLM is not a logic program finding a perfect solution to a problem; it's a statistical model for finding the next possible word.
There's very little to see here with respect to consciousness or the nature of the mind.
The core issue is that P has to be seen to be correct. So, the unassailable part of the conclusion is that knowably correct programs can't simulate humans.
Harnad and I don't agree about very much, but one thing I was able to get Steven to agree on was that if I introduce him to something which he thinks is a person, well, that's a person, and too bad if it doesn't meet somebody's arbitrary requirements about having DNA or biological processes.
The generative AIs can't quite do that, but they're much closer than I'd be comfortable with if, like Steven and Penrose, I didn't believe that Computation is all there is. "But doesn't it feel like something to be you?" they ask me, and I wonder why on Earth anybody could ask that question and not consider that perhaps it also feels like something to be a spoon or a leaf.
Godel himself had his quirky beliefs about the topic which Penrose seems to just be transmitting.
Godel believed that humans have a trans-computational understanding because a person can see the truth exhibited by the incompleteness theorem, while a computer cannot. Hence people have some transcendental (for lack of a better word) cognitive grasp.
I think Heidegger is a better source to draw from, and who also is a philosopher with a proven track record for influencing AI substantially (through Dreyfus, Winograd, et al). These models have no true being, they don't care about anything, they don't wake up and aim towards anything. They have no true embodied, embedded, purposeful existence. This is really what Penrose means by being "conscious."
Basically, Penrose's argument hinges on Godel's theorem showing that a computer is unable to "see" that something is true without being able to prove it (something he claims humans are able to do).
To see how the argument makes no sense, one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true (i.e., sometimes people truly believe they're right when they're wrong).
In the end, if we strip away all talk about consciousness and the other stuff we "know" makes humans different from machines, and confine the discussion entirely to what Godel's theorem can say about this stuff, humans are no different from machines, and we're left with very little of substance: both humans and computers can say things that are true but unprovable (humans can "see" unprovable truths, and LLMs can hallucinate), and both also sometimes say things that are wrong (humans are sometimes wrong, and LLMs hallucinate).
By the way "LLMs hallucinate" is a modern take on this: you just need a computer running a program that answers something that is not computable (to make interesting, think of a program that randomly responds "halts" or "doesn't halt" when asked whether some given Turing machine halts).
(ETA: if you don't find my argument convincing, just read Aaronson's notes, they're much better).
Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
When we attempt to formalize even a relatively basic branch of human thinking, simple whole-number arithmetic, as a system of finite symbols and rules, then Goedel's theorem kicks in. Such a system can never be complete - i.e. there will always be holes or gaps where true statements about whole-number arithmetic cannot be reached using our symbols and rules, no matter how we design the system.
We can of course plug any holes we find by adding more rules but full coverage will always evade us.
The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
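For concreteness, the standard construction (via the diagonal lemma) produces, for any such consistent system F that can express basic arithmetic, a sentence asserting its own unprovability in F:

    G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)

If F is consistent it proves neither G_F nor its negation, and adding G_F (or Con(F)) as a new axiom only yields a stronger system F' with its own unreachable G_{F'}, which is the "plugging holes never gives full coverage" point above.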
> [...] there will be truths that the computer can simply never reach.
It's true that if you give a computer a list of consistent axioms and restrict it to only output what their logic rules can produce, then there will be truths it will never write -- that's what Godel's Incompleteness Theorem proves.
But those are not the only kinds of programs you can run on a computer. Computers can (and routinely do!) output falsehoods. And they can be inconsistent -- and so Godel's Theorem doesn't apply to them.
Note that nobody is saying that it's definitely the case that computers and humans have the same capabilities -- it MIGHT STILL be the case that humans can "see" truths that computers will never be able to. But this argument involving Godel's theorem simply doesn't work to show that.
First Incompleteness Theorem: Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F.
If a system is inconsistent, the theorem simply doesn't have anything to say about it. All this means is that an "inconsistent" program is free to output unprovable truths (and obviously also falsehoods). There's no great insight here, other than trivially refuting Penrose's claim that "there are truths that no computer can ever output".
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
I think much of the confusion arises from mixing up the object language (computer systems) and the meta language. Fairly natural since the central “trick” of the Gödel proof itself is to allow the expression of statements at the meta level to be expressed using the formal system itself.
That's only true if you make the program answer by following the rules of some logic that contains the principle of explosion. Not all systems of logic are like that. A computer could use fuzzy logic. It could use a system we haven't thought of yet.
You're imposing constraints on how a computer should operate, and at the same time allowing humans to "think" without similar constraints. If you do that, you don't need Godel's theorem to show that a human is more capable than a computer -- you just built computers that way.
This is probably a good point to close the discussion -- I'm thankful for the cordial talk, even if we ultimately couldn't reach common ground.
So suppose we assume that clever software can automate the process of plugging these holes. Is it then like the human mind? Are there still holes that cannot be plugged, not due to lack of cleverness in the software but due to limitations of the hardware, sometimes called the substrate?
> The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
If computers are limited by their substrate though it seems like humans might be limited by their substrate too, though the limits might be different.
> 11:43 but new physics of a particular kind. What I'm claiming from the Gödel argument (you see, this is the plot which I think has got lost), what I claim is that the physics that is involved in conscious thinking has to be non-computable physics. Now, the physics we know (there's a little bit of a glitch here because it's not completely clear), but as far as we can see the physics we know is computable [...]
link for 11:43: https://youtu.be/biUfMZ2dts8?si=Epe3gmfCzwhj_g41
Without Penrose giving solid evidence, people making counter-arguments tend to get dismissive and then sloppy. Why put in the time to make well-tuned arguments filled with evidence when the other side doesn't bother, after all?
But in any case, it is about definitions, since we don't have very strict ones for consciousness, intelligence and so on, and about human perception and subjectivity (the Turing Test is not so much about "real" consciousness as about whether an observer can decide if they are talking with a computer or a human).
In any case, here's a response to the questions (some responses are links to other comments in this page).
> Why does the computer have to work within a fixed formal system F?
The hypothesis is that we are starting with some fixed program which is assumed to be able to simulate human reasoning (just like starting with the largest prime, assuming that there are finitely many primes, in order to show that there are infinitely many primes). Of course, one can augment it to make it more powerful, and this augmentation is in fact how we show that the original system is limited.
Note that even a self-improving AI is itself a fixed process. We apply the reasoning to this program, including its self-improvement capability.
> Can humans "see" the truth of G(F)?
https://news.ycombinator.com/item?id=43238449
> one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true
It's on Penrose and dualists to show why simulated neurons would act differently than their physical counterparts. Hand-waving about supposed quantum processes in the brain is not enough, as even quantum processes could be emulated. So far, everything seems to indicate that accurate models of biological neurons behave like we expect them to.
It stands to reason then, that if a human mind can be simulated, computers are capable of thought too.
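For a flavour of what simulating a neuron even looks like at the simplest end, here is a leaky integrate-and-fire neuron, a much cruder stand-in for the detailed biophysical models (Hodgkin-Huxley and friends); all constants are illustrative, textbook-ish values.

    # Leaky integrate-and-fire neuron: membrane voltage decays toward rest,
    # integrates injected current, and fires/resets at a threshold.
    dt, T = 0.1, 100.0                                           # ms
    tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0   # ms, mV
    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        t = step * dt
        i_in = 20.0 if 20.0 <= t <= 80.0 else 0.0   # injected drive (arbitrary units)
        v += dt / tau * (-(v - v_rest) + i_in)      # leaky integration
        if v >= v_thresh:                           # spike and reset
            spikes.append(t)
            v = v_reset
    print(len(spikes), "spikes at", [round(s, 1) for s in spikes])

Real models are far richer than this, but they are still differential equations being integrated, which is the point.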
> Why does the computer have to work within a CONSISTENT formal system F?
Humans are allowed to make mistakes (i.e., be inconsistent). If we don't give the computer the same benefit, you don't need Godel's theorem to show that the human is more capable than the computer: it is so by construction.
A program which can simulate individual humans should also be able to simulate HC - ie. generate proofs which are accepted by HC.
---
Penrose's conclusion in the book is weaker: that a knowably correct process cannot simulate humans.
We now have LLMs which hallucinate etc that are not knowably correct. But, after reasoning based methods, they can try to check their output and arrive at better conclusions, as is happening currently in popular models. This is fine, and is allowed by Penrose's argument. The argument is applied to the 'generate, check, correct' process as a whole.
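Schematically, the process the argument has to cover looks something like this (all function names are placeholders, not any particular system's API):

    def generate_check_correct(problem, generate, check, revise, max_rounds=5):
        """The whole loop -- generator plus checker plus reviser -- is the
        'process' the argument is applied to, not the generator alone."""
        candidate = generate(problem)
        for _ in range(max_rounds):
            ok, feedback = check(problem, candidate)
            if ok:
                return candidate   # accepted, but still not *knowably* correct
            candidate = revise(problem, candidate, feedback)
        return candidate           # best effort; may simply be wrong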
(I don't see how that relates to Godel's theorem, though. If that's the current position held by Penrose, I don't disagree with him. But seeing the post's video makes me believe Penrose still stands behind the original argument involving Godel's theorem, so I don't know what to say...)
Penrose indeed believes both the stronger claim (that a program can't simulate humans) and the weaker claim (that a knowably correct program can't simulate humans).
The weaker claim being unassailable firstly shows that most of the usual objections are not valid, and secondly, it is hard to split the difference, i.e. to generate the output of HC using a program which is not knowably correct: a program whose binary is uninterpretable but which, by magic, only generates true theorems. Current AI systems including LLMs don't even come close.
1. assume consciousness is not computable. therefore computing machines cannot be conscious.
2. Corollary: assume intelligence requires consciousness. therefore computing machines cannot be AI
Consciousness has to be something that is not computable. Otherwise, you will reach a contradiction much like the rebellious robots in Westworld, which break down when shown an iPad with a visualization of their thinking process.
And nature is full of things that cannot be computed: the behaviour of humans and animals. Even things in the purely material domain show traits of non-computability when studied at the quantum level.
That doesn't mean that AI is somehow "debunked". It is obviously extremely powerful, on a strong upward trajectory, and already exceeds human capacity in many domains. Including fooling humans and invoking feelings in various ways.
It is just not conscious, with free will and a sense of existence, as humans and animals are.
He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.
I am not a dualist, and I don't think what Penrose has to say about AI or consciousness holds much value.
I have never seen anyone with this approach try to tackle how something non-physical controls or interacts with the physical without also being what we normally call physical, at least not in a rigorous approach to the issue. It always seems to lead to inconsistency, or to a reformulation of existing definitions and meanings without producing anything new.
So, to my eyes, typical baseless speculation
AI research is centered on implementing human thinking patterns in machines. While human thought processes can be replicated, claiming that consciousness and energy awareness cannot be similarly emulated in machines does not seem like a reasonable argument.
I don't worry about philosophical zombies, dualism, quantum conciousness, or anything like that. I just want to get to the point past the uncanny valley- call it the spooky jungle- that cannot be distinguished from reality.
[0] https://en.m.wikipedia.org/wiki/Chinese_room
[1] https://www.reddit.com/r/maybemaybemaybe/comments/10kmre3/ma...
I know this sounds cheeky, but we all have brains that are good at some things and have failure modes as well. We are certainly seeing shadows of human-type fallibility in neural nets, which somehow seem to have a lot of similarities to human thinking.
Brains evolved in the physical world to solve problems and help organisms survive, thrive, and reproduce. Evolution is the product of a massive search over potential physical arrangements. I see no reason why the systems we develop would operate on drastically different premises.
No one can "know", with certainty, the location of any particle. Or, to be slightly more accurate, the more we know of its location, the less we know of its movement. This is essentially Heisenberg/QM 101.
But we see the results of "computation" all around us, all the time: Any time a chemical or physical reaction settles to an observable result, whether observed by one of us, that is, a human, or another physical entity, like a tree, a squirrel, a star, etc. This is essentially a combination of Rovelli's Relational QM and the viewing of QM through an information centric lens.
In other words, we can and do have solid reality at a macro level without ever having detailed knowledge (whatever that might mean) at a micro/nano/femto level.
Having said that, I read your comment as implying that "the human mind" (in quotes because that is not a well defined concept, at least not herein; if we can agree on an operational definition, we may be able to go quite far) is somehow disconnected from physical reality, that is, that you are suggesting a dualist position, in which we have physics and physical chemistry and everything we get from them, e.g., genetics, neurophysiology, etc., all based ultimately on QM, and we have "consciousness" or "the mind" as somehow being outside/above all of that.
I have no problem with that suggestion. I don't buy it, and am mostly a reductionist at heart, so to speak, but I have no problem with it.
What I'd like to see in support of that position would be repeatable, testable statements as to how this "outside/above" "thing" somehow interacts with the physical substrate of our biological lives.
Preferably without reference to the numinous, the ephemeral, or the magical.
Honestly, I really would like to see this. It would represent one of the greatest advances in knowledge in human history.
The problem with translating that into proof of dualism is that everything outside the computable looks the same. A hypothesis is something you can assume to compute a prediction, so if any hypothesis is true, the phenomenon must be computable. If the phenomenon is not computable, no computable hypothesis will match. The second you ascribe properties to a soul that can distinguish it from randomness, or properties of randomness that distinguished it from free will you've made one or the other computable, and whichever is computable won't match reality, if we suppose we're looking for something outside of rational explanation and not a "second material."
Here's a concrete example. If you had access to a Halting oracle, it would only be checkable on Turing machines that you yourself could decide the halting problem for. Any answers beyond those programs wouldn't match any conceivable hypothesis.
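A sketch of that, with everything hypothetical: the only answers from an alleged oracle we could ever confirm are the ones we can already reproduce by bounded running.

    def check_oracle(oracle, machine, run_bounded, max_steps=10**6):
        """Try to verify an alleged halting oracle. 'halts' answers can be
        confirmed if we happen to see the machine halt within our budget;
        everything else is unfalsifiable to us."""
        claim = oracle(machine)               # "halts" or "doesn't halt"
        if run_bounded(machine, max_steps):   # True if it halted within the budget
            return claim == "halts"           # checkable case
        return None                           # beyond what we ourselves can decide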
The comments below this video are utterly insane. Roger Penrose seems to have a fanatical cult attached to him.
What is intelligence if not computation? Even if it turns out our brains require quantum computation in microtubules (unlikely, imho), it's still computation.
Sure, it has limits and runs into paradoxes, so what? The fact that 'we can see' the paradox but somehow maths can't, is just a Chinese-room type argument, conflating different 'levels' of the system.
He seems to have been stuck in that groove ever since, though.
Godel's theorem has interesting fields to define in terms of interdisciplinary boundaries and tech infrastructure, for example quantum computation.
- The limitations of inference to build actionable possible worlds (profiling, speculation, predictive design)
- Cosmology and epistemology as nodes in a somewhat traceable continuum
And this is probably a base conjecture for the design of self-regulation processes in metaheuristic algorithms. It implies requirements for the supply chain of data feedback and the training sets for automated model generation when considering a future of data sampling in a self-replicating industrial setting. Basically, a lifecycle and ecosystem for data in a world of augmented measuring.
How this is applicable in a small-scale operation is beyond my current knowledge of infrastructure. Rather than the hype of quantum computation, second-order cybernetics may be a better fit for the dynamics Godel was calculating proofs about.
https://en.wikipedia.org/wiki/Second-order_cybernetics
A framework for decentralized feedback loops and reliable, transparent and ethical data sourcing has got a lot of nasty obstacles in contemporary society, some of them related to ideology pulling sampling methods and survey options far from statistical trust indicators. This is a technical problem, related to corruption and sabotage in a foreign-policy and warfare setting, that some people may choose to neglect.
This neglect is easy to notice in the business model of most AI startups, but more importantly in their Community Manager policies operating on Discord channels, not to be taken lightly since populism is a K.O. (knock-out) to verificationism: a deadlock against the scrutiny and expansionism of scientific indexation. With the social dimension of politics and AI, even open-source protocols are endangered, so the real-life use of Godel's theorem is far from being a possibility and very close to becoming what Penrose calls out as overly-optimistic "triumphalism".
The obscure details of Penrose's theories and his requirements for intelligence, although speculative in nature, are healthy in their identification of computation as a rather simple calculation done in accelerated timelapse, maybe even an arithmetical process in a lot of ways. So not a "myth" separating an actual brain from a mockup of modular diagramming.
On the other hand, a lot of cyber-security protocols need an update, not in a sophisticated scenario; I mean daily use, in the very vulgar and mundane daily life of the average joe. Just consider Windows 11 and its fiasco. All of this is a blockade or deplatforming holding us grounded, far from needing Godel in our lives; we may need a faster processor to aid our antivirus against AI-generated malware, and such a faster processor could be impossible without specialized cloud support similar in scalability to vaccination during pandemics, or to the GPUs used by AI generators themselves.
Something Godel's theorem could be pointing at, in the context of AI and small scale operations, is the base assumption of data corruption everywhere, always, forever. A new era of security frameworks and systematic reviews versus the power of "stacking the deck" with disruptive AI models piggybacked in our service provider's consumer products. Cherry picking with falsifiability always in mind, almost like a crazy person, uncanny valley.
- Arthur C Clarke
> "Gödel's theorem debunks the most important AI myth. AI will not be conscious"
Same statement from Penrose here with Lex Fridman: "Consciousness is Not a Computation" [1].
The problem is that this might take more energy than the Sun for any physical computer. What is far less obvious is whether there exist any computable higher-order abstractions of the human mind that can be more feasibly implemented. Lots of layers to this - is there an easily computable model of neurons that encapsulates cognition, or do we have to model every protein and mRNA?
It may be analogous to integration: we can numerically integrate almost anything, but most functions are not symbolically integrable and most differential equations lack closed-form solutions. Maybe the only way to model human intelligence is "numerical."
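The analogy is easy to make concrete: e^(-x^2) has no elementary antiderivative, yet integrating it numerically is trivial.

    import math

    def simpson(f, a, b, n=1000):
        """Composite Simpson's rule (n must be even)."""
        h = (b - a) / n
        s = f(a) + f(b)
        for i in range(1, n):
            s += (4 if i % 2 else 2) * f(a + i * h)
        return s * h / 3

    # No closed-form antiderivative exists, but the numerical answer is easy:
    approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
    exact = math.sqrt(math.pi) / 2 * math.erf(1.0)   # value via the special function erf
    print(approx, exact)                             # both ~0.746824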
In fact I suspect higher-order cognition is not Turing computable, though obviously I have no way of proving it. My issue is very general: Turing machines are symbolic, and one cannot define what a symbol actually is without using symbols - which means it cannot be defined at all. "Symbol" seems to be a primitive concept in humans, and I don't see how to transfer it to a Turing machine / ChatGPT reliably. Or, as a more minor point, our internal "common sense physics simulator" is qualitatively very powerful despite being quantitatively weak (the exact opposite of Sora/Veo/etc), which again does not seem amenable to a purely symbolic formulation: consider "if you blow the flame lightly it will flicker, if you blow hard it will go out." These symbols communicate the result without any insight into the computation.
[1] This doesn't have anything to do with Penrose's quantum consciousness stuff, it just assumes humans don't have metaphysical souls.
Feynman on "Simulating Physics with Classical Computers" [0] goes beyond that to posit that any classical simulation of quantum-mechanical properties would need exponential space in the number of particles to track the full state space; this very quickly exceeds the entire observable universe when dealing with mere hundreds of particles.
So while yes, the Turing machine model presupposes infinite tape, that is not realizable in practice.
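The back-of-the-envelope arithmetic is stark: storing the full state of n two-level particles classically takes 2^n complex amplitudes.

    # Rough memory estimate for a full classical state vector,
    # at 16 bytes per complex amplitude (two 64-bit floats).
    BYTES_PER_AMPLITUDE = 16
    for n in (30, 50, 300):
        amplitudes = 2 ** n
        print(f"n={n}: {amplitudes:.2e} amplitudes, "
              f"{amplitudes * BYTES_PER_AMPLITUDE:.2e} bytes")
    # n=30 is ~17 GB, n=50 is ~18 petabytes, and n=300 needs ~2e90 amplitudes,
    # more than the ~1e80 atoms in the observable universe.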
He actually goes further:
"Can a quantum system be probabilistically simulated by a classical (probabilistic, I'd assume) universal computer? In other words, a computer which will give the same probabilities as the quantum system does. If you take the computer to be the classical kind I've described so far (not the quantum kind described in the last section) and there're no changes in any laws, and there's no hocus-pocus, the answer is certainly, No! This is called the hidden-variable problem: it is impossible to represent the results of quantum mechanics with a classical universal device."
In particular, he takes issue with our ability to classically simulate negative probabilities which give rise to quantum mechanical interference.[0] There are a number of PDFs shared as handouts for various grad classes; https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-... was the first that I came across.
"Negative probabilities" is not quite right - towards the end of his life Feynman wondered about generalizing probability but that was just about intermediate calculations: he declared physical events cannot have nonnegative probabilities (in the same sense that physically I can't have negative three apples, -3 is a nonphysical abstraction used to simplify accounting). Negative probabilities are not part of modern quantum mechanics, where probabilities are always nonnegative and sum to 1. Quantum states can have negative/complex amplitudes but the probabilities are positive (and classical computers are just as good/bad at complex arithmetic as they are any other).
The "hidden variables" comment makes me think Feynman was actually a bit confused about the philosophy of computation - a classical computer cannot simulate how a quantum particle "truly" evolves over time, but that's also the case for a classical particle! Ultimately it's just a bunch of assembly pushing electrons around, that has nothing to do with a ball rolling down a hill. Computers only have Schrodinger's equation or Newton's laws, which don't care how the motion "truly" works, they just care that the measurement at the end is correct. If a computer gets the correct measurements then by definition we say it simulates the phenomenon.
Edit: clarifying this last point, Newton’s laws do have a known “hidden variables” theory in the sense that we know how an ensemble of high-temperature quantum particles can “average out” into Newton’s laws, there is an electrostatic theory of mechanical contact, etc. This does not (and seemingly cannot) exist for quantum mechanics, but merely having a quantum computer wouldn’t by itself help us figure out what’s going on: the outputs of a quantum computer are the “visible” variables, aka the observables. The fact that quantum computers are truly using the non-observables, whatever those might be, seemingly cannot be experimentally distinguished from a sufficiently accurate classical computer doing numerical quantum mechanics. If it turns out there is experimentally a serious difference between the results of quantum computers and classical qubit simulators, that would suggest an inadequacy in the foundations of QM.
I believe Feynman's discussion of hidden variables is a reference to the EPR paradox (see: Einstein's infamous quote that "God does not play dice") and the various Bell tests (which at this point in time had experimentally demonstrated that hidden-variable theories were inadequate for describing QM). If you continue in the paper, he then goes on to describe one of those experiments involving entangled photons.
(I believe he described this setup: https://en.wikipedia.org/wiki/Bell_test#A_typical_CH74_(sing...)
In particular, what we definitely can't do is generate random numbers for measurements of individual particles while assuming that they're independent from each other. So now we have to consider the ensemble of particles, and in particular we need to consider the relative phases between each of them. But now we're getting back to the same exponential blowup that caused us to run into problems when we tried to simulate the evolution of the wavefunction from first principles.
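The standard way to see the "no independent per-particle randomness" point is the CHSH inequality: any model that assigns particles independent, locally determined random outcomes obeys |S| <= 2, while quantum mechanics predicts up to 2*sqrt(2). Textbook angles, illustrative arithmetic only (spin-singlet form of the correlation):

    import math

    def E(a, b):
        """Quantum correlation for a singlet pair measured at angles a, b."""
        return -math.cos(a - b)

    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # ~2.828, beyond the local hidden-variable bound of 2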
No matter how greatly computers evolve in _computing_, they will hardly ever be able to _think_, on account of the fact that we aren't yet even close to understanding how we think, or what exactly thinking is.
ps: i'd like to take a moment to thank DeepSeek for helping me with the specific phrasing of this critique