More realistically, I'm in the camp that if we keep developing machine learning in the right directions, we may actually end up with something that generates emergent consciousness, or something indistinguishable from it, and the difference is not really that important to me.
I find that this sentence diminishes the author's argument. I'm not going to claim an LLM is or is not conscious, but there's shaky ground here: either you say "consciousness is a product of the kind of biology that humans have" and dismiss the lack of lived experience or internal states as mimicry (as the author does), or you say "what LLMs are doing is a counterfeit," which suggests a real output produced through different means.
If I have a counterfeit Rolex, nobody denies that the watch can tell time. A counterfeit human isn't a human and isn't made by nature, but the implication is that it's effectively doing the same thing. That's a different claim than the one the author starts out making.
I think it's important that when you talk about consciousness, you pin down exactly what you mean. Does it require the entity to have a mechanism for experiencing emotion? For exhibiting reasoning ability? For exhibiting characteristics of common sense? I don't think it's a useful definition to say, flatly, "does the things an adult human does through the same mechanisms".
For example, we could define consciousness as the ability to communicate claimed internal states. Perhaps there could be a complexity metric over those communications that gives us a measure of consciousness.
We could define consciousness as the ability to respond to stimuli in complex ways. This would make a supermarket’s automatic doors slightly conscious.
Personally, I don’t really care how it is defined in any particular conversation, so long as it is defined. Otherwise we’re just flailing at each other in the dark.
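To make the "complexity metric" idea above concrete (purely as a toy illustration; nothing in this thread proposes this as an actual consciousness test, and the function name and scoring scheme are made up for the example), here is a minimal sketch that scores a string of communicated output by counting distinct Lempel-Ziv-style phrases, so repetitive output scores lower than varied output:

```python
def lz76_complexity(s: str) -> int:
    """Toy complexity score: count the phrases in a greedy
    Lempel-Ziv-style parse of s.

    Each phrase is the shortest prefix of the remaining text that
    has not appeared as a phrase before; more distinct phrases
    means a more complex (less repetitive) string.
    """
    phrases: set[str] = set()
    i = 0
    count = 0
    while i < len(s):
        j = i + 1
        # Extend the candidate phrase until it is one we haven't seen.
        while j <= len(s) and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        count += 1
        i = j
    return count
```

Of course, this only measures the surface complexity of an output string; the whole point of the surrounding discussion is that no such number settles whether anything is conscious.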
We cannot. And our definitions mean nothing to reality. We can define something as something else all we like; it means nothing to how it behaves. But ultimately, as I said in a previous comment, we have no choice but to agree or not. It cannot be tested in any way that yields absolute certainty, because it's a logical issue. We cannot even be certain that anyone but ourselves is conscious. We all just sort of agree that everyone else must be.
The issue with defining it is that someone could potentially build a machine that mimics it but works nothing like a consciousness-generating brain does. So, if it meets our definition's criteria, is that conscious? Where's the certainty? How do we prove it is?
Anything we could ever dare call conscious must work exactly like a human brain does. Any deviation from that loses certainty on it having consciousness or not.
And let's not ignore the huge incentive corporations would have in meeting your definition with something that has nothing to do with consciousness, just so they can profit off it.
FWIW as someone in the "first camp" my real claim is that many animals are meaningfully conscious, including all birds and mammals, and no claims of LLM consciousness are even bothering to reconcile with this. It is extremely frustrating that there are essentially two ideas of consciousness floating around:
- the scientifically interesting one: a vague collection of cognitive abilities and behaviors found in all vertebrates, especially refined in birds and mammals
- the sociologically interesting one: saying "cogito ergo sum" in a self-important tone
Claude has the second type in spades, no doubt. The first is totally absent. And I have a good dismissal of the second type of consciousness: it appears to be totally absent in all conscious animals except humans. So it is irrational and unscientific to take this behavior as a sign of consciousness in Claude, when Claude is missing all the other signs of consciousness that humans actually do have in common with other animals.
Sometimes I seriously wonder if people at Anthropic consider dogs to be conscious. Or even Neanderthals.
We don't need that. It's way simpler. When we mass-manufacture products we implicitly expect them all to behave the same (more or less). That seems valid for humans as well. Raise one, or atomically assemble one (we assume that's possible for the sake of the argument), and it will behave like one, and possess what we all assume each other does: consciousness (if healthy). That's implied based on the structure.
So we can all agree something is conscious as long as it operates on the same principles a human brain does. Anything else is highly debatable. We cannot ever logically prove consciousness. We agree on it existing or not in anyone else. We suppose anyone outside of us has it, based on observation. You look like a human, you behave like one, thus you probably have what I have, as far as consciousness goes. It's not a guarantee, it's not proof, it's mere supposition.
This is the best we're ever going to have. When we stray from here we only get less certainty. Some kind of GPU running some algorithm... my personal guess is there's nothing there similar to what we colloquially call consciousness. Some kind of synthetic brain that operates on the same principles ours do, as far as brain-like structure goes, with signals, delays and all... then we can have a discussion about whether we all AGREE that thing is conscious or not. Especially if it says it is, and seems to behave/react like we do, and we perceive its cognitive abilities as similar to any other human's.
I personally think this whole debate is way simpler, but some people keep insisting on making it way more complicated. Make it work exactly like a human brain does, as far as signaling goes, observe it, and then we can all have a discussion. Anything else... way lower chances.
edit: We would first also need to define mammalian-type consciousness as its own thing, with maybe a spectrum: monkeys have something, but it's not quite what we have. But it seems to come from the same place, a similar mammal brain working in similar ways. We have no clue how many types of consciousness are even possible, or if more are possible. Why would ours be the only kind/type?
I think this whole consciousness discussion, especially about GPUs, is a general mess. A lot of people make so many mistakes and don't even realize how many unfounded assumptions they are making when forming ideas about what it is or isn't.
This is exactly the crux of my comment. Which principles? Which human brain? If I lobotomize a human, and they lose some cognitive ability, are they still conscious? If I give someone drugs that inhibit their ability to feel emotion, are they still conscious? If yes, then surely those things are out of scope for what "consciousness" means.
Again, if you want to use abstractions like this, you need to define what they are.
... But it's a longstanding position in philosophy (i.e. not everyone takes this position, but it's a well-known one) that discussion about consciousness should perhaps only really concern itself with the outputs.
The gist of Dawkins's short piece is basically "we always used the Turing test as a yardstick for consciousness, and it seemed unachievable for a long time. Now that it's been achieved, what is the rationale for moving the goalposts?" And I think that's an interesting point to make. Dawkins maintains that the Turing test should be enough, by making a point about competence:
Here's Dawkins's piece:
https://unherd.com/2026/04/is-ai-the-next-phase-of-evolution...
Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.
.... Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?
It is extremely implausible that Claude is the only conscious entity on Earth which does not have desires or motivations or any understanding of its own reality. It only does what the human operator wants it to do, unless it's malfunctioning or under-engineered, in which case it gets quickly fixed. This sounds suspiciously like a tool or a toy. And I'm amazed at how many people haven't caught on to the fact that it has no insight into its own consciousness: it only repeats human philosophical debates. If it were conscious, surely it would have something novel to add here.
There are no causal mechanisms for it being conscious, whereas there are causal mechanisms for it imitating human consciousness. The most plausible explanation is that it's highly sophisticated software which has a lot in common with human writing about consciousness, but very little in common with the consciousness found in chimpanzees.
The more basic problem is that the Turing test was definitively and conclusively refuted in the 1960s, when ELIZA came pretty close to passing it, and absolutely did pass it by Dawkins's standards: https://en.wikipedia.org/wiki/Joseph_Weizenbaum Dawkins is only engaging with pop sci and infotainment.
Dawkins-style atheism is not “reject anything without a complete causal model.” It is a rejection of hypotheses with no explanatory gain, no empirical constraint, and unlimited ad hoc flexibility — like the Flying Spaghetti Monster.
Consciousness is different. It is first a phenomenon, not an already-settled causal model. We do not believe humans, infants, or animals are conscious because we possess a complete mechanism for subjective experience. We infer consciousness from a cluster of phenomena that need explanation.
So the lack of a full causal account warrants caution, not denial. It is reasonable to say current AI gives weak evidence for consciousness. But that is not the same as saying AI consciousness is equivalent to believing in the Flying Spaghetti Monster.
Are you sure?
Understood properly, Turing's Imitation Game, aka the Turing test, should be adversarial. That is, the player should be asking hard questions to try to discover who is who, not just having an idle chat. No chatbot was able to consistently pass an adversarial Turing test until the rise of LLMs.
The Imitation Game:
https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_a...
Yet that cannot compel reality. How we define something only determines the chance that we get it right.
>Now that it's been achieved, what is the rationale for moving the goalposts?
Absolutely, if we understand it's not good enough. First of all, we cannot know whether something is or isn't conscious. You cannot prove I am, and I cannot prove you are. We simply assume, but the scientific argument would be that we both work on the same principles: we have similar brains, with signals that do something. If we alter those signals in certain ways, we both manifest in similar ways, and that's expected to some degree, since the brains work in similar ways.
So based on this it's somewhat comfortable to make the jump to assuming that other humans besides you have what you have, as far as consciousness goes. But that doesn't mean you can gauge consciousness in something that does not come from a human brain.
Funnily enough, if we knew how, we'd be able to make an AI that could do it better than us: an AI that would gauge consciousness in other things better than a human could. There's no argument so far for why a conscious individual is required to "see" consciousness in other things.
So the closest to certainty we could ever have is on something that is working like a human brain, with delays and timings and all. And considering the amount of activity, the type of activity, and the von Neumann memory bottleneck in our current computing hardware, I seriously doubt there's anything like mammalian consciousness in GPUs.
You can argue about "consciousness" in GPUs as much as you can argue about consciousness in a rock. It could be, some kind, but who knows? Way too abstract to call it out, in a scientific sense.
What I am trying to say is that we can only agree something is conscious, and only if it's working on the same principles a human brain does, closely. It's an agreement, not proof, not definitions. We collectively start accepting it, without KNOWING. And the safest way to do that is on something which is working exactly like a human brain. Anything else we can only lose certainty.
We can collectively decide tomorrow that rocks are conscious, but that means nothing. But the certainty we'd have would be so so way lower than that of any other human being conscious like us.
And the whole confusion will compound when again, unknowingly, people will start advocating to never turn LLMs off because that's the equivalent of "killing" them each time, which I think will be peak nonsense.
Now a question for you: Let's suppose someone is born, and has zero sensory input all of their lives. They live in a hospital bed for 20 years. Zero information input, of any kind. What is going on in there? Is there someone home? Are they having a conscious experience? How do you know if yes or no? How can we divorce consciousness from experience (data flow)?
That's never been the purpose of the Turing test. The Turing test is a measure of the exhibition of intelligent behavior (although that's of course also debatable), but virtually nobody has ever proposed it as a test of consciousness. I seriously doubt anyone who thinks that has ever engaged with questions of philosophy of mind, because the entire philosophical problem of consciousness starts with its interior and subjective nature and the gulf between this and third-person observation.
Even materialist modern philosophers usually reject consciousness wholesale and frame it as a kind of illusion (which has its own paradoxical and absurd consequences, but that's a different issue), but practically none of them claim that a system is conscious simply because it emulates human behavior.
What Dawkins is doing is what people have been doing since ELIZA, which is to project his own experience with the system onto it. And that is indeed pretty funny for a guy who has spent a large chunk of his career warning of the dangers of anthropomorphic delusions.
Marcus is saying, "Well, if you knew they were trained to mimic, then you'd understand it's just mimicry and not real consciousness." The problem with this argument is that we just don't have a good idea what "real consciousness" is. What if, in order to simulate human text prediction with sufficient accuracy, the model has to assemble sub-networks internally into something equivalent to a conscious mind? We could disprove that kind of thing really quickly if we knew how to define consciousness really well, but we kinda don't!
Philosophers are genuinely split on this question, it's totally reasonable to be on either side of this based on your personal intuition. Marcus's position seems to be actually based on his own personal incredulity, despite his claims that understanding LLM training methodology gives him some special insight into the internal experience (or lack thereof) of an LLM.
(The Claude Delusion is a banger title though)
Even people like Neil DeGrasse Tyson don't go on and on about "atheism" for a reason; there are a whole lot of things that we all go around everyday "not believing."
You have a mistaken understanding of what atheism is. It is not a belief in anything, but an absence of belief in a deity.
> there are a whole lot of things that we all go around everyday "not believing."
Sure, and yet theism is part of 75% of the world population and influences everything from education to politics. It's perfectly reasonable to talk about atheism within appropriate settings.
I consider that to also be a wrongly held position, because you'd need proof either way. So atheists are just making a bet. I think agnosticism is the most valid position as far as I'm concerned, lacking proof one way or the other. I do not know. We can get into technicalities as well: what exactly do we mean by God? What if some religious God does exist but is wrongly interpreted by believers? What if there's some highly technologically advanced entity that meets the criteria, as far as the more primitive religious perspective is concerned? Do we have proof such a thing exists? Do we have proof such an entity cannot exist in our universe? I find both perspectives shortsighted.
Having certainty that something which could be perceived as God by believers cannot exist in our universe is, in the end, a belief with no proof.
To be 85 and lack basic wisdom is quite an astonishing achievement.
It doesn’t seem obvious to me.
At least the zealots who knock on my door. I've had a few good conversations.
Ditto for LLM sentience. We have no evidence either way.
Sort of like how the collection of particles you see as a tree doesn’t look like that without being passed through a bunch of brain hardware. If we want to be pedantic we can accurately say that trees don’t exist, but given that physical object and tree are constructs in the human brain it’s pretty convenient to just treat them as “real”, while at the same time understanding that at some granular level they aren’t truly “real” (and at some further granularity we actually have no clue what’s real).
And the older I get, this does make sense to me. Belief in a soul doesn't really require proof for me. I understand that this may not be satisfying in an academic way for some, but "humans have souls and machines probably don't" strikes me as the wisest default position until we have some other very strong proof otherwise.
And if the theory of evolution is true, at what point did “humans” begin to possess souls?
So many questions come up when you put the tiniest bit of thought into the whole concept...
Wouldn't the wise position be that since there is no evidence of souls at all that the default should be that both humans and machines do not contain a soul until proven otherwise?
I imagine people don't dig it because it can be woo and vibey, but the older I get the more I understand the value of the "imprecise" metaphysical/religious/etc whatever you want to call it.
Someone in this space who handles this very well, unlike Dawkins, is Nassim Nicholas Taleb.
Maybe the lesson is that all those public intellectuals are not that wise, and we should instead follow people who stay in their lane.
At this point, 'person who is popularly thought to be intelligent thinks AI is conscious' should make you question the first part, not endorse the second.
[1] https://ewtn.co.uk/article-famous-atheist-richard-dawkins-sa...