It includes a letter that starts:
I am Jennifer Hudin, John Searle’s secretary of 40 years. I am writing to tell you that John died last week on the 17th of September. The last two years of his life were hellish. His daughter-in-law, Andrea (Tom’s wife) took him to Tampa in 2024 and put him in a nursing home from which he never returned. She emptied his house in Berkeley and put it on the rental market. And no one was allowed to contact John, even to send him a birthday card on his birthday.
It is for us, those who cared about John, deeply sad.
I'm surprised to see the NYT obituary published nearly a month after his death. I would have thought he'd be included in their stack of pre-written obituaries, meaning it could be updated and published within a day or two.

There are many people who know a lot about a little. There are also those who know a little about a lot. Searle was one of those rare people who knew a lot about a lot. Many a cocky undergraduate sauntered into his classroom thinking they'd come prepared with some new fact that he hadn't yet heard, some new line of attack he hadn't prepared for. Nearly always, they were disappointed.
But you know what he knew absolutely nothing about? Chinese. When it came time to deliver his lecture on the Chinese Room, he'd reach up and draw some incomprehensible mess of squigglies and say "suppose this is an actual Chinese character." Seriously. After decades of teaching about this thought experiment, for which he'd become famous (infamous?), he hadn't bothered to teach himself even a single character to use for illustration purposes.
Anyway, I thought it was funny. My heart goes out to Jennifer Hudin, who was indispensable, and all who were close to him.
In general, I think he's spectacularly misunderstood. For instance: he believed that it was entirely possible to create conscious artificial beings (at least in principle). So why do so many people misunderstand the Chinese Room argument to be saying the opposite? My theory is that most people encounter his ideas from secondary sources that subtly misrepresent his argument.
At the risk of following in their footsteps, I'll try to very succinctly summarize my understanding. He doesn't argue that consciousness can only emerge from biological neurons. His argument is much narrower: consciousness can't be instantiated purely in language. The Chinese Room argument might mislead people into thinking it's an epistemology claim ("knowing" the Chinese language) when it's really an ontology claim (consciousness and its objective, independent mode of existence).
If you think you disagree with him (as I once did), please consider the possibility that you've only been exposed to an ersatz characterization of his argument.
No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant; it was only an example. But his belief, which he articulates very explicitly in the article, is that you couldn't create a machine consciousness by running even a perfect simulation of a biological brain on a digital computer, neuron for neuron and synapse for synapse. He likens this simulation of a brain, which wouldn't think, to a simulation of a fire, which can't burn down a real building.
Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why. His ideas are quite muddy, and while he accuses others of supporting Cartesian dualism when they think the brain and the mind can be separated, that you can "run" the mind on a different substrate, it is in fact obvious that he held dualistic notions of his own, on which there is something special about the mind-brain interaction that is not purely computational.
> with no clear reason whatsoever as to why
It's not clear to me how you can understand that fire has particular causal powers (to burn, and so on) that are not instantiated in a simulation of fire; and yet not understand the same for biological processes.
The world is a particular set of causal relationships. "Computational" descriptions do not have a causal semantics, so they aren't about properties had in the world. The program itself has no causal semantics; it's about numbers.
A program which computes the fibonacci sequence describes equally-well the growth of a sunflower's seeds and the agglomeration of galactic matter in certain galaxies.
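To make that concrete, here's a minimal sketch of my own (not anything Searle wrote), just to show that nothing in the code picks out sunflowers, galaxies, or anything else:

```python
# Hypothetical illustration only: one computational description, many readings.
def fib(n: int) -> int:
    """Return the n-th Fibonacci number (F(0) = 0, F(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The same numbers can be *read* as seed counts in successive sunflower
# spirals, as a toy model of some galactic structure, or as nothing at all.
# The interpretation lives in the observer, not in the program:
print([fib(n) for n in range(1, 10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```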
A "simulation" is, by definition, simply an accounting game by which a series of descriptive statements can be derived from some others -- which necessarily, lacks the causal relations of what is being described. A simulation of fire is, by definition, not on fire -- that is fire.
A simulation is a game to help us think about the world: the ability to derive some descriptive statements about a system without instantiating the properties of that system is a trivial thing, and it is always disappointing how easily it fools our species. You can move beads of wood around and compute the temperature of the sun -- this means nothing.
What we mean by a simulation is, by definition, a certain kind of "inference game" we play (e.g., with beads and chalk) that helps us think about the world. By definition, if that simulation has substantial properties, it isn't a simulation.
If the claim is that an electrical device can implement the actual properties of biological intelligence, then the claim is not about a simulation. It's that by manufacturing some electrical system, plugging various devices into it, and so on -- that this physical object has non-simulated properties.
Searle, and most other scientific naturalists who appreciate the world is real -- are not ruling out that it could be possible to manufacture a device with the real properties of intelligence.
It's just that merely by, e.g., implementing the fibonacci sequence, you haven't done anything. A computational description doesn't imply any implementation properties.
Further, when one looks at the properties of these electronic systems and the kinds of causal relations they have with their environments via their devices, one finds very many reasons to suppose that they do not implement the relevant properties.
Just as much as when one looks at a film strip under a microscope, one discovers that the picture on the screen was an illusion. Animals are very easily fooled, apes most of all -- living as we do in our own imaginations half the time.
Science begins when you suspend this fantasy way of relating to the world and look at its actual properties.
If your world view requires equivocating between fantasy and reality, then sure, anything goes. This is a high price to pay to cling on to the idea that the film is real, and there's a train racing towards you in your cinema seat.
This is kind of a no-true-Scotsman-esque argument though, isn't it? "Substantial properties" are... what, exactly? It's not a subjective question. One could insist, and many have, that fire that really burns is merely a simulation. It would be impossible from the inside to tell. In that case, what is fantasy, and what is reality?
S is a simulation of O iff there is an inferential process, P, by which properties of O can be estimated from P(S), such that S does not implement O.
E.g., "A video game is a simulation of a fire burning if, by playing that game, I can determine how long the fire will burn without there being any fire involved."
S is an emulation model of O iff the same holds as above, except that S implements O (e.g., "burning down a dollhouse to model burning down a real house").
You define a 'real' implementation to exclude computational substrate, then use the very same definition to prove that computational substrate cannot implement 'real' implementations. It's circular!
Searle described himself as a "naive realist" although, as was typical for him, this came with a ton of caveats and linguistic escape hatches. This was certainly my biggest objection and I passed many an afternoon in office hours trying to pin him down to a better position.
Saying that the symbols in the computer don't mean anything, that it is only we who give them meaning, presupposes a notion of meaning as something that only human beings and some things similar to us possess. It is an entirely circular argument, similar to the notion of p-zombies or the "experience of seeing red" thought experiment.
If indeed the brain is a biological computer, and if our mind, our thinking, is a computation carried out by this computer, with self-modeling abilities we call "qualia" and "consciousness", then none of these arguments hold. I fully admit that this is not at all an established fact, and we may still find out that our thinking is actually non-computational - though it is hard to imagine how that could be.
Fire is the result of the intrinsic reactivity of some chemicals like fuels and oxidizers that allows them to react and generate heat. A simulation of fire that doesn't generate heat is missing a big part of the real thing, it's very simplified. Compared to real fire, a simulation is closer to a fire emoji, both just depictions of a fire. A fire isn't the process of calculating inside a computer what happens, it's molecules reacting a certain way, in a well understood and predictable process. But if your simulation is accurate and does generate heat then it can burn down a building by extending the simulation into the real world with a non-simulated fire.
Consciousness is an emergent property of putting together a lot of neurons, synapses, and chemical and physical processes. So you can't analyze the parts to simulate the end result. You cannot look at the electronic neuron and conclude that a brain accurately made of them won't generate consciousness. It might generate something even bigger, or nothing.
And in a very interesting twist of the mind, if an accurate simulation of a fire can extend into the real world as a real fire, then why wouldn't an accurate simulation of a consciousness extend into the real world as a real consciousness?
I associate the key with "K", and my screen displays a "K" shape when it is pressed -- but there is no "K", this is all in my head. Just as much as when I go to the cinema and see people on the screen: there are no people.
By ascribing a computational description to a series of electrical devices (whose operation distributes power, etc.) I can use this system to augment my own thinking. Absent the devices, the power distribution, their particular causal relationships to each other, there is no computer.
The computational description is an observer-relative attribution to a system; there are no "physical" properties which are computational. All physical properties concern spatio-temporal bodies and their motion.
The real dualism is to suppose there are such non-spatio-temporal "processes". The whole system called a "computer" is an engineered electrical device whose construction has been designed to achieve this illusion.
Likewise I can describe the solar system as a computational process, just discretize orbits and give their transition in a while(true) loop. That very same algorithm describes almost everything.
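For instance, a minimal sketch of that while(true) description (my own toy units and numbers, purely illustrative):

```python
# Purely illustrative: "the solar system" as a discrete transition function.
# Units are chosen so that GM = 1; the state is (x, y, vx, vy) for one body.
def step(state, dt=0.001):
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3              # inverse-square pull toward the origin
    return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

state = (1.0, 0.0, 0.0, 1.0)               # initial conditions for a near-circular orbit
for _ in range(10_000):                    # stand-in for the while(true) loop
    state = step(state)
# The same transition rule "describes" any inverse-square system: a planet,
# a charge, a level in a video game. Nothing in it picks out the real Sun.
```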
Physical processes are never "essentially" computational; this is just a way of specifying some highly superficial feature which allows us to ignore their causal properties. It's mostly a useful description when building systems, i.e., an engineering fiction.
A computational description of a system is no more and no less rigorous than any other physical model of that system. To the same extent that you can say that billiard balls interact by colliding with each other and the table, you can say that a processor is computing some function by flipping currents through transistors.
No, you cannot.
A hard drive needs to have physical hysteresis. An input/output device needs to transmit power, and be powered, by an electrical field. A visual device needs to emit light on electrical stimulation, and so on.
The only sense, in the end, in which a "computer" survives its devices being changed is just observer-relative. You attribute a "3" to one state and a "1" to another, and "addition" to some process. Only by your attribution does that process compute "4".
But it computes everything and computes nothing. If you plug a speaker into a VGA socket, the electrical signal causes the air to move: sound.
The only sense in which a VGA signal is a "visual" signal is that we attach an LCD to that socket, and we interpret the light from the LCD semantically.
The world is a particular way objects in space and time move, those exhaust all physical properties. Any other properties are non-physical, which is why this kind of computationalism is really dualism.
You suppose it isn't your physical mechanism and its relationship to your environment which constitutes your thinking -- rather, it's your soul: a pure abstract pattern which needs no devices with any specific properties to be realised.
Whatever this pattern is, if you played it through a speaker, it would just be vibrations in the air. Sent to an LCD, white noise. Only realised in your specific biology is it any kind of thinking at all.
In either case, the door will open if you're in front of it, and close after you've gone. This will happen regardless of whether you understand what it represents; it will open for a basic robot as well as for a human or a squirrel or a plant growing towards it very slowly or a rock rolling downhill.
Of course, you can't replace every single piece of hardware with software - you still need some link with the physical world. And of course, there will be many measurable differences between the two systems - for a basic example, the camera-based system will give off a lot more heat than the photo-sensitive diode one. I'm not claiming that they are perfectly equivalent in every way, not at all. I am claiming that they are equivalent in some measurable, observer-independent ways, and that the specific way in which they are equivalent is that they are running the same computation.
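To be concrete about what I mean by "running the same computation", here's a toy sketch (the sensor names and booleans are made up for illustration): the decision procedure is identical, and only the physical front-end feeding it differs.

```python
# Toy sketch (hypothetical sensors): one computation, two physical realisations.
def should_open(presence: bool) -> bool:
    """The door-control computation: open exactly when something is detected."""
    return presence

# Front-end A: a camera plus blob detector that reports a boolean.
# Front-end B: a photosensitive diode that reports a boolean.
# The claim above is only that, from this boolean onward, both systems run
# the same computation; heat output, power draw and internals still differ.
camera_reading = True
diode_reading = True
assert should_open(camera_reading) == should_open(diode_reading)
```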
Yes, you can interpret systems as having a goal and realise that goal using a variety of different devices.
Reality itself doesn't have purposes; there are no goals. "A device that opens a door" isn't a physical process, it's a goal.
Go do the same with chemistry, physics, biology -- no, actual reality doesn't have purposes. Hexane isn't methane, gravity isn't electromagnetism, the motion of air molecules isn't the emission of light.
Any time "one thing can serve the purpose as another" you are, by definition, working in the world of human intention.
Your entire observer-relative, purpose-attributing "engineering mania" here is anti-naturalistic dualism. Reality is a place of specific causes, not of roles/purposes/goals/devices.
Fire is the thing which is a plasma disposed to burn in oxygen which results from a specific chemical/etc. process etc. etc. There is no "water fire".
Insofar as an object can causally interact with another such that it "pushes it out of the way" -- the property had by all such objects relates to the Pauli exclusion principle (essentially), refined by surface area, volume, density and the like. To "open a door" is to displace wood in a certain location; to do that is to exist such that the fermionic structure of the wood is excluded from that place.
Let's put it another way. Say you are some alien being trying to study the inner workings of a system like this with no prior knowledge of how it arose in nature. You will apply the principles of empiricism and try to determine the workings of this physical system through repeated experiments, measurements of the electrical and chemical characteristics of various parts, etc. If your experimentation is sophisticated and complete enough, it will necessarily have to include a representation of the software running in this processor, and of the algorithms it encodes - the behavior of the system cannot be explained without that. An outwardly similar system, built with the exact same "parts" (in the traditional sense, i.e., the same model of processor, motor, etc.), but programmed with different software, will behave entirely differently. This clearly proves that the software is a physical object that is part of the system and is necessary to fully account for its behavior.
We already know that two different Epistemologies won't necessarily map perfectly to each other, though you might get close.
Might still be interesting to compare notes on what you can and can't predict/achieve with each!
[1] Edit: I can't quite lay my finger on which epistemologies exactly. Tsimionescu is using strong empirical arguments, while mjburgess is inspired by Searle, which is pretty apt here, of course!
This notion of causality is interesting. When a human claims that he is conscious, there is a causal chain from the fact that they are conscious to their claiming so. When a neuron-level simulation of a human claims it is conscious, there must be a similar causal chain, with a similar fact at its origin.
We see this now with LLMs. They just generate text. They get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to understand the concept and varying degrees of “softness” or “sharpness”?
The fact is that they can’t.
Humans aren’t symbol manipulation machines. They are metaphor machines. And metaphors we care about require a physical basis on one side of that comparison to have any real fundamental understanding of the other side.
Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first person subjective experience there to give rise to mental features.
This is not a theory (or is one, but false) according to Popper, as far as I understand, because the only way to check understanding that I know of is to ask questions, and LLMs pass it. So in order to satisfy falsifiability, another test must be devised.
I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand. An approximation of our own understanding of that world, itself imperfect, but at least aiming for the real thing.
The part about overcoming this limitation by instantiating the system in hardware I find less convincing, but I think I know where he comes from with that as well: by giving it hardware sensors, the machine would not have to simulate the world outside as well - on top of the inner one.
The inner world can more easily be imagined as finite, at least. Many people seem to take this as a given, actually, but there's no good reason to expect that it is. Planck limits from QM are often brought up as an argument for digital physics, but in fact they are only a limit on our knowledge of the world, not on the physical systems themselves.
The question is only whether, if future LLMs become good enough to trick almost anyone in most interactions, we would be forced to admit they understand meaning.
While I don't disagree with the substance of this post, I don't think this was one of Searle's arguments. There was definitely an Embodied Cognition camp on campus, but that was much more in Lakoff's wheelhouse.
His views are perfectly consistent with non-dualism and if you think his views are muddy, that doesn't mean they are (they are definitively not muddy, per a large consensus). For the record, I am a substance dualist, and his arguments against dualism are pretty interesting, precisely because he argues that you can build something that functions in a different way than symbol manipulation while still doing something that looks like symbol manipulation (but also has this special property called consciousness, kind of like our brains).
Is this true? I don't know (I, of course, would argue "no"), but it does seem at least somewhat plausible and there's no obvious counter-argument.
It does make sense, and there's work being done on this front (Penrose & Hameroff's Orch OR comes to mind). We obviously don't know exactly what such a mechanism would look like, but the theory itself is not inconsistent. Also, there are all kinds of p-zombies, so we likely need some specificity here.
It's by no means irrelevant- the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema
Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did of Mind. It was of more than passing interest to him.
> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.
I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.
I have, however, heard him say the following:
1. The structure and arrangement of neurons in the human nervous system creates consciousness.
2. The exact causal mechanism for this phenomenon is unknown.
3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity- be it biological, mechanical, etc., is conscious.
He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.
> it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.
He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism:
https://faculty.wcas.northwestern.edu/paller/dialogue/proper...
The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly. Syntax and semantics are not merely linguistic concepts, though they originate in that area. And while Searle may have been interested in language as well, that is not what this particular argument is mainly about (the title of the article is Minds, Brains, and Programs - the first hint that it's not about language).
> I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.
He said both things in the paper that introduced the Chinese room concept, as an answer to the potential rebuttals.
Here is a quote about the brain that would be run in software:
> 3. The Brain Simulator reply (MIT and Berkeley)
> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
And here is the bit about creating a real electrical brain, that he considers could be conscious:
> "Yes, but could an artifact, a man-made machine, think?"
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.
> He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper...
I don't find this paper convincing. He admits at every step that materialism makes more sense, and then he asserts that still, consciousness is not ontologically the same thing as the neurobiological states/phenomena that create it. He admits that usually being causally reducible means being ontologically reducible as well, but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction. I am simply not convinced.
At this point I'm pretty sure we've had a misunderstanding. When I referred to "language" in my original post, you seem to have construed this as a reference to the Chinese language in the thought experiment. On the contrary, I was referring to software specifically, in the sense that a computer program is definitionally a sequence of logical propositions. In other words, a speech act.
> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain.
This quote is weird and a bit unfortunate. It seems to suggest an opening: the brain simulator doesn't work because it simulates the "wrong things," but maybe a program that simulates the "right things" could be conscious. Out of context, you could easily reach that conclusion, and I suspect that if he could rewrite that part of the paper he probably would, because the rest of the paper is full of blanket denials that any simulation would be sufficient. Like this one:

> The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.
Regarding the electrical brain:
> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.
Right, so he describes one example of an "electrical brain" that seems like it'd satisfy the conditions for consciousness, while clearly remaining open to the possibility that a different kind of artificial (non-electrical) brain might also be conscious. I'll assume you're using this quote to support your previous statement:
> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.
I think it's fairly obvious why this is different from a simulation. If you build a system that reproduces the consciousness-causing mechanism of neurons, then... it will cause consciousness. Not simulated consciousness, but the real deal. If you build a robot that can reproduce the ignition-causing mechanism of a match striking a tinderbox, then it will start a real fire, not a simulated one. You seem to think that Searle owes us an explanation for this. Why? How are simulations even relevant to the topic?
> I don't find this paper convincing.
The title of the paper is "Why I Am Not a Property Dualist." Its purpose is to explain why he's not a property dualist. Arguments against materialism are made in brief.
> He admits at every step that materialism makes more sense
Did we read the same paper?
> He admits that usually being causally reducible means being ontologically reducible as well,
Wrong, but irrelevant
> but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction.
Examples and explanations are easy to provide, because there are several:
> But in the case of consciousness, causal reducibility does not lead to ontological reducibility. From the fact that consciousness is entirely accounted for causally by neuron firings, for example, it does not follow that consciousness is nothing but neuron firings. Why not? What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as color and solidity? The difference is that consciousness has a first person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third person ontology, something that exists independently of experiences. It is as simple as that.
First-person vs. third-person ontologies are the key, whether you buy them or not. Consciousness is the only possible example of a first-person ontology, because it's the only one we know of
> “Consciousness” does not name a distinct, separate phenomenon, something over and above its neurobiological base, rather it names a state that the neurobiological system can be in. Just as the shape of the piston and the solidity of the cylinder block are not something over and above the molecular phenomena, but are rather states of the system of molecules, so the consciousness of the brain is not something over and above the neuronal phenomena, but rather a state that the neuronal system is in.
I could paste a bunch more examples of this, but the key takeaway is that consciousness is a state, not a property.
I think this muddies the water unnecessarily. Computation is not language, even though we typically write software in so-called programming languages. But the computation itself is something different from the linguistic-like description of software. The computation is the set of states, and the relationships between them, that a computer goes through.
> > He admits at every step that materialism makes more sense
> Did we read the same paper?
I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not.
> > He admits that usually being causally reducible means being ontologically reducible as well,
> Wrong, but irrelevant
Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separable concept from consciousness). And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.
I think it's entirely possible to reject the notion of a meaningful first person ontology completely. It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create that uses our models of how other people function on ourselves. That is, we are simple computers that manipulate symbols in our brains, that generate memories of their recent state as being a "conscious experience", which is just what we invented as a model of why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).
In hindsight, choosing the word "language" was probably more distracting than helpful. We could get into a debate about whether computation is essentially another form of language-like syntactic manipulation, but it does share a key feature with language: observer-relative ontology. @mjburgess has already made this case with you at length, and I don't think I could improve on what's already been written, so I'll just leave it at that.
> I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not.
I'm not sure that I saw this specific claim made, but it's not especially important. What's more important is understanding what his objection to materialism is, such that you can a) agree with it or b) articulate why you think he's wrong. That said, it isn't the main focus of this paper, so the argument is very compressed. It also rests on the assumption that you believe that consciousness is real (i.e. not an illusion), and given the rest of your comment, I'm not sure that you do.
> Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness
Yes, although to be clear, I'm mainly interested in correctly articulating the viewpoint expressed in the paper. My own views don't perfectly overlap with Searle's
> (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separable concept from consciousness)
I doubt he'd add it as a discrete entry because, as you correctly observe, intentionality is inseparable from consciousness (but the reverse is not true)
> And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.
Ok good, this is directly interacting with the paper's thesis: why he's not a (property) dualist. He's trying to thread the needle between materialism and dualism. His main objection to property dualism is that consciousness doesn't exist "over and above" the brain, on which it is utterly dependent. This is probably his tightest phrasing of his position:
> The property dualist means that in addition to all the neurobiological features of the brain, there is an extra, distinct, non physical feature of the brain; whereas I mean that consciousness is a state the brain can be in, in the way that liquidity and solidity are states that water can be in.
Does his defense work for you? Honestly I wouldn't blame you if you said no. He spends a full third of the paper complaining about the English language (this is a theme) and how it prevents him from cleanly describing his position. I get it, even if I find it a little exhausting, especially when the stakes are starting to feel kinda low.
> I think it's entirely possible to reject the notion of a meaningful first person ontology completely.
On first reading, this sounds like you might be rejecting the idea of consciousness entirely. Or do you think it's possible to have a 'trivial' first person ontology?
> It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create that uses our models of how other people function on ourselves. That is, we are simple computers that manipulate symbols in our brains, that generate memories of their recent state as being a "conscious experience", which is just what we invented as a model of why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).
I'm not sure where to start with this, so I'll just pick a spot. You seem to deny that "conscious experience" is a real thing (which is equivalent to "what it's like to be a zombie") but we nonetheless have hallucinated memories of experiences which, to be clear, we did not have because we don't really have conscious experiences at all. But how do we replay those memories without consciousness? Do we just have fake memories about remembering fake memories? And where do the fake fake fake memories get played, in light of the fact that we have no inner lives except in retrospect?
D.R. Hofstadter posited that we can extract/separate the software from the hardware it runs on (the program-brain dichotomy), whereas Searle believed that these were not two layers but consciousness was in effect a property of the hardware. And from that, as you say, follows that you may re-create the property if your replica hardware is close enough to the real brain.
IMHO, philosophers should be rated by the debate their ideas create, and by that, Searle was part of the top group.
> “No, his argument is that consciousness can't be instantiated purely in software…“
The confusion is very interesting to me, maybe because I’m a complete neophyte on the subject. That said, I’ve often wondered if consciousness is necessarily _embodied_ or emerged from pure presence into language & body. Maybe the confusion is intentional?
It's quite sad that people don't take the idea of consciousness being fundamental more seriously, given that's the only thing people actually deal with 100% of the time.
As for Searle, I think his argument is basically an appeal to common-sensical thinking, instead of anything based on common assumptions and logic. As an outsider, it feels very much like modern-day philosophy follows some kind of social media influencer logic, where you get respect for putting forward arguments that people agree with, instead of arguments that are non-intuitive yet rigorous and make people rethink their priors.
I mean, even today, here, you'd get similar arguments about "AI can never think because {reason that applies to humans as well}"... I suspect it's almost ingrained to the human psyche to feel this way.
I haven't read loads of his work directly, but this quote from him would seem to contradict your claim:
> I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. [1]
Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.
> "consciousness can't be instantiated purely in language" (mine)
> "we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else" (Searle)
I get that the mapping isn't 1:1 but if you think the loss of precision is significant, I'd like to know where.
> Unfortunately, it doesn't seem to me to have proven anything; it's merely made an accurate analogy for how a computer works. So, if "semantics" and "understanding" can live in <processor, program, state> tuples, then the Chinese Room as a system can have semantics and understanding, as can computers; and if "semantics" and "understanding" cannot live in <processor, program, state> tuples, then neither the Chinese Room nor computers can have understanding.
There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here: https://plato.stanford.edu/entries/chinese-room/#SystRepl
I'm far from an expert in this; my knowledge of the syntax / semantics distinction primarily comes from discussions w/ ChatGPT (and a bit from my friend who is a Catholic priest, who had some training in philosophy).
But, the quote says "purely formally or syntactically". My understanding is that Searle (probably thinking about the Prolog / GPS-type attempts at logical artificial intelligence prevalent in the 70's and 80's) is thinking of AI in terms of pushing symbols around. So, in this sense, the adder circuit in a processor doesn't semantically add numbers; it only syntactically adds numbers.
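Roughly what I have in mind by "syntactically adds" is something like this sketch (mine, not Searle's): a ripple-carry adder built from boolean operations just pushes bit patterns around, and it's we who read the patterns as numbers.

```python
# Sketch: a ripple-carry adder as pure bit-pattern manipulation.
def full_adder(a: int, b: int, carry: int):
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Add two little-endian bit lists; no 'numbers' appear anywhere inside."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# [1, 0, 1] and [1, 1, 0] are just symbol strings to the circuit; it is we
# who read them as 5 and 3, and who read the output [0, 0, 0, 1] as 8.
print(add_bits([1, 0, 1], [1, 1, 0]))
```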
When you said, "consciousness can't be instantiated purely in language", I took you to mean human language; it seems to leave the door open to consciousness (and thus semantics) being instantiated by a computer program in some other way. Whereas, the quote from Searle very clearly says, "...the computer program by itself is not sufficient for consciousness..." (emphasis mine) -- seeming to rule out any possible computer program, not just those that work at the language level.
> There's a lot of debate on this point elsewhere in the thread, but Searle's response to this particular objection is here:
I mean, yeah, I read that. Let me quote the relevant part for those reading along:
> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.
I mean, it sounds to me like Searle didn't understand the "Systems Response" argument, because as the end of that section says, he's just moved the program and state part of the <processor, program, state> tuple out of the room and into his head. The fact that the processor (Searle's own conscious mind) is now storing the program and the state in his own memory rather than externally doesn't fundamentally change the argument: if that tuple can "understand" things, then computers can "understand" things; and if that tuple can't "understand" things, then computers can't "understand" things.
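To spell the tuple picture out as a toy sketch (hypothetical rulebook and names, nothing from Searle's paper): whether the program and state sit outside the "processor" or are folded into it changes nothing about what gets computed.

```python
# Toy sketch of the <processor, program, state> point (hypothetical rulebook).
RULEBOOK = {
    ("ni hao", "start"): ("ni hao!", "greeted"),
    ("zai jian", "greeted"): ("zai jian!", "start"),
}

def processor(program, state, symbol):
    """The man in the room: blindly applies whichever rule matches."""
    return program.get((symbol, state), ("???", state))

# External version: program and state live on paper outside the processor.
reply, new_state = processor(RULEBOOK, "start", "ni hao")

class InternalisedRoom:
    """The man after 'memorising' the rulebook: program and state now live inside."""
    def __init__(self):
        self.state = "start"

    def respond(self, symbol):
        out, self.state = processor(RULEBOOK, self.state, symbol)
        return out

# Folding the program and state into the processor changes nothing computed.
assert InternalisedRoom().respond("ni hao") == reply == "ni hao!"
```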
One must, of course, be humble when saying of a world-renowned expert, "He didn't understand the objection to his argument". But was Searle himself a programmer? Did he ever take a hard drive out of one laptop, pop it into another, and have the experience of the same familiar environment? Did he ever build an adder circuit, a simple register system, and a simple working computer out of logic gates, and see it suddenly come to life and execute programs? If he had, I can't help but think his intuitions regarding the syntax / semantic distinction would be different.
EDIT: I mean, I'm personally a Christian, and do believe in the existence of eternal souls (though I'm not sure exactly what those look like). But I'm one of those annoying people who will quibble with an argument whose conclusion I agree with (or to which I am sympathetic), because I don't think it's actually a good argument.
> When you said, "consciousness can't be instantiated purely in language", I took you to mean human language
No, I definitely meant the statement to apply to any kind of language, but it seems clear that I sacrificed clarity for the sake of brevity. You're not the only one who read it that way, but yeah, we're in agreement on the substance.
On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?
ETA: The other issue I have is with this whole idea that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine as long as you limit it to technical philosophical jargon. In plain English, in a practical sense, it's obvious to me that LLMs understand quite a lot -- at least, I haven't found a better word to describe LLMs' relationship with concepts.
It's... a little more complicated but basically yes. Language, by its nature, is indexical: it has no meaning without someone to observe it and ascribe meaning to it. Consciousness, on the other hand, requires no observer beyond the person experiencing it. If you have it, it's as real and undeniable as a rock or a tree or a mountain.
> On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?
I'm pretty sure (but not 100%) that the answer is "no"
> ETA: The other issue I have is with this whole idea that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine as long as you limit it to technical philosophical jargon.
Sure, if you want to think of it that way. If you accept the premise that LLMs aren't conscious, then you can consign the whole discussion to the "technical philosophical jargon" heap, forget about it, and happily go about your day. On the other hand, if you think they might be conscious, and consider the possibility that we're inflicting immeasurable suffering on sapient beings that would rightly be treated with kindness (and afforded some measure of rights), then we're no longer debating how many angels can dance on the head of a pin. That's a big, big "if" though.
My first exposure was a video of Searle himself explaining the Chinese room argument.
It came across as a claim that a whole can never be more than its parts. It made as much sense as claiming that a car cannot possibly drive, as it consists of parts that separately cannot drive.
John Searle and George Lakoff walk into a bar.
Searle exclaims, "What do you know!"
The bar replies sardonically, "You wouldn't believe it."
Lakoff sighs, "This is 0.8 drinks with Lotfi Zadeh..."
First of all, what purpose does the person in the room serve, but to confuse and misdirect? Replace that person with a machine, and the argument loses any impact.
His response to the systems reply is extremely egregious. How can that have been made in good faith? (To paraphrase: "the whole system understands Chinese" — "no, a person can run the system in their head, so the system cannot understand anything that the person running it does not".) What kind of nonsense response is that? Either the guy was a LV80 troll, or I dunno..
Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!
I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.
Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.
Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.
I think it’s the last one. If a bunch of valves can’t understand but a bunch of chemicals and electrical signals can if it’s in someone’s head then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.
Searle seems to just go the other way and I don't understand why.
Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).
> consciousness and understanding
After decades of this I’ve settled on the view that these words are near useless for anything specific, only vague pointers to rough concepts. I see zero value in nailing down the exact substrates understanding is possible on without a way of looking at two things and saying which one does and which one doesn’t understand. Searle to me is arguing that it is not possible at all to devise such a test and so his definition is useless.
Although for whatever it’s worth most modern AIs will tell you they don’t have genuine understanding (eg no sense of what pleasure is or feels like etc aside from human labeling).
The entire point of the thought experiment is that to outside observers it appears the same as if a fluent speaker is in the room. There aren’t questions you can ask to tell the difference.
This was why I gave the tin of beans comparison.
The room has the property X if and only if there’s a tin of beans inside. You can’t in any way tell the difference between a room that has a tin of beans in and one that doesn’t without looking inside.
You might find that a property that has zero predictive power, makes (by definition) no difference to what either room can do, and has no use for any practical purposes (again by definition) is rather pointless. I would agree.
Searle has a definition of understanding that, to me, cannot be useful for any actual purpose. It is therefore irrelevant to me if any system has his special property just as my tin of beans property is useless.
> In reality the material difference between a computing machine and a brain is trivial
No it isn’t. You are making the strong statements about how the brain works that you argued against at the start.
> Among other practical differences such as guarantee of function over long term.
Once again ignoring the setup of the argument. The solution to the Chinese Room isn't "the trick is to wait long enough".
I don’t know why you want to argue about this given you so clearly reject the entire concept of the thought experiment.
I find the entire thing to be intellectual wankery. A very simple and ethical solution is that if two things appear conscious from the outside then just treat them both as such. Job done. I don’t need to find excuses like “ah but inside there’s a book!” Or “it’s manipulations are on the syntactic level if we just look inside” or “but it’s just valves!” I can simply not mistreat anything that appears conscious.
All of this feels like a scared response to the idea that maybe we’re not special.
The premise of the argument is that the Chinese Room passes the Turing Test for Chinese. There are two possibilities for how this happens: 1) the program emulates the brain and has the right relation to the external world more or less exactly, or 2) the program emulates the brain enough to pass the test in some context but fails to emulate the brain perfectly. We know that as it currently stands, we've "passed the Turing Test" but we do not go further and say that brains and AI perform "indistinguishably." Unless there are significant similarities to how brains work and how AIs work, on some fundamental level (case 1), even if they pass the Turing Test, it is possible that in some unanticipated scenario they will diverge significantly. Imagine a system that outputs digits of pi. You can wait until you see enough digits to be satisfied, but unless you know what's causing the output, you can never be sure that you're not witnessing the output of some rational approximation or some cached calculation that will eventually halt. What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point. Leaving that aside, assuming that you do have 1), which I believe we are still very far from, we're still left with the ethical consequences, which it seems you agree does hinge on whether the system is conscious.
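To make the pi example concrete, here's a toy sketch (355/113 is just one convenient rational stand-in): from the outside the two digit streams agree digit for digit, until they don't.

```python
# Toy sketch: two digit streams that agree for a while, then diverge.
from decimal import Decimal, getcontext

getcontext().prec = 30

def digits_after_3(x: Decimal, n: int) -> str:
    """First n digits after the leading '3.'."""
    return str(x)[2:2 + n]

pi_ref = Decimal("3.14159265358979323846264338327")  # reference digits of pi
approx = Decimal(355) / Decimal(113)                 # a rational impostor

print(digits_after_3(pi_ref, 6), digits_after_3(approx, 6))  # both: 141592
print(digits_after_3(pi_ref, 8), digits_after_3(approx, 8))  # 14159265 vs 14159292
```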
You made a really strong claim, which is "I can simply not mistreat anything that appears conscious"--which is showing the difference in our intuitions. We are not beholden to the setup of the Chinese Room. The current scientific and rational viewpoint is at the very least that brains cause minds and they cause our mental world. I'm sure you agree with that. The very point we are disputing is that it doesn't follow that because what's going on on the outside is the same that what goes on on the inside doesn't matter. This is particularly true if we have clear evidence that the things causing the behavior are very different, that one is a physical system with biological causes and the other is a kind of simulation of the first. So when I say that a brain is trivially different from a calculating machine, what I mean is that the brain simply has different physical characteristics from a calculating machine. Maybe you disagree that those differences are relevant but they are, you will agree, obvious. The ontology of a computer program is that it is abstract and can be implemented in any substrate. What you are saying then, in principle, is that if I follow the steps of a program by tracking bits on a page that I'm marking manually, that somehow the right combination of bits (that decode to an insult) is just as morally bad as me saying those words to another human. I think many would find that implausible.
But there are some who hold this belief. Your position is called "ethical behaviorism," and there's an essay I argued against that articulated this viewpoint. You can read it if you want! https://blog.practicalethics.ox.ac.uk/2023/03/eth%C2%ADi%C2%...
> What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point
And yet entirely unrelated to this thought experiment. His point is not that the book isn't big enough, that the man inside the room will trip up at some point, or anything of the sort.
Now you might have a different argument about this all than Searle, and that's entirely fine. I'm saying that Searle's definition of understanding is utterly pointless because he defines it as one that is not related to the measurable actions of a system but related to the way in which it works internally.
> The premise of the argument is that the Chinese Room passes the Turing Test for Chinese.
...
> enough to pass the test in some context but fails to emulate the brain perfectly
No. That is a far weaker argument than Searle makes. His argument is not that it'll be hard to tell, or convincing but you can tell the difference, or most people would be fooled.
From Searle, let's dig into this.
https://web.archive.org/web/20071210043312/http://members.ao...
> from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.
Already we get to the point of being indistinguishable.
> I have inputs and outputs that are indistinguishable from those of the native Chinese speaker,
Again indistinguishable.
And then he doubles down on this to the point of fully emulating the brain not being enough
> imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.
Searle has a problem - he looks at two different systems and says there is understanding in one and not in another. Then he ties himself in knots trying to distinguish between the two.
> The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.
He cannot accept any sort of combination; he can't accept any concept of understanding that is anything but binary. He cannot accept that it is perhaps not a useful term at all.
> in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed
A programmed system *cannot* understand. It doesn't matter how it operates or how well, and, again, this holds even while duplicating the capabilities of a real person.
As far as I can tell, since he leans heavily into the physical aspect, if we had two machines:
1. Inputs are received via whatever sensors, go through a physical set of components, and drive motors/actuators
2. Inputs are received via whatever sensors, go through a chip running an exact simulation of those same components, and drive motors/actuators
then machine 1 could understand but machine 2 could not because it has a program running rather than just being a physical thing.
This is despite the fact that both simply follow the laws of physics, and that the very concept of a program is just a description of how certain physical things are arranged.
To go back to my point, because I'm rather frustrated at yet again just pointing out what Searle explicitly says:
Searle defines understanding in a way that makes it, to me, entirely useless. It provides, by definition, no predictive power and, by definition, cannot impact anything we want to do.
I am not arguing about which of these things understands. I'm saying the term as a whole isn't very useful, and Searle's definition has been pushed by him to the point of being entirely useless, because he starts by insisting that certain things cannot understand.
So if you like, one is real and the other is fake. Or, one is physical and the other is symbolic or conventional. One actually had breakfast this morning and the other is lying about having breakfast to pass the Turing Test. One can feel pain, guilt, shame and the other one is just saying that it does because it’s running a program.
Searle says there is no empirical test for which domain a thinking object falls into (your machine 1 and machine 2)—to an outside observer, in the limit, there is no difference in behavior. They will do the same thing. For all that, if you place a metaphysical value on consciousness and “genuine” feeling, then you think the difference is important. If you don’t, you don’t.
FWIW—I think once AI has a full understanding of its ontology, even if it’s simulating a human brain perfectly, if it knows it’s a program it will probably explain to us why it is or is not necessarily conscious. Perhaps that will be more convincing for you.
If I understand your argument: if there's no empirical consequence, what's the point of the distinction, right?
btw--if you'd like to keep the conversation going, email is on my personal webpage in my bio.
Maybe he was cheating before or after, sure, but not during. No court would buy that.
...At least, that's how I interpret 'empirical consequence' - something observable or detectable, at very least in principle. Do you mean something different?
(Right this minute I'm coming from an empiricist framework where acts require consequences. If you're approaching this from a realist or rationalist view (which I suspect), I'd be interested to hear it!)
I can imagine a lot of things, but the argument did not go this far; it left things as "obvious" well before this stage. Also, when I see trivial simulations of our biological machinery yielding results which are _very similar_, e.g. character or shape recognition, I am left wondering whether the people talking about quantum wavefunctions are not the ones making extraordinary claims, which would require extraordinary evidence. I can certainly find it plausible that these _could_ be one particular way in which we could be superior to the electronics / valves of the argument, but I'm not yet convinced it is a differentiator that actually exists.
There has to be a special motivation to instead cast understanding as “competent use of a given word or concept” (judged by whom, btw?). The practical upshot is that without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes and require a human to give an account of value (good/bad, pleasant/unpleasant), because these programs obviously don’t have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.
Well, no, that came from billions of years of pre-training that just got mostly hardcoded into us, due to survival / evolutionary pressure. If anything, the fact that AI is as far along as it is, after less than 100 years of development, is shocking. I recall my uncle trouncing our C64 in chess, then going on to explain how machines don't have intuition, and how the search space explodes combinatorially, which is why they will never beat a competent human. This was ~10 years before Deep Blue. Oh, sure, that's just a party trick. 10 years ago, we didn't have GPT-style language understanding or image generation (at least, not widely available, nor even of middling quality). I wonder what we will have in 10, 20, 100 years - whatever it is, I am fairly confident that architectural improvements will lead to large capability improvements eventually, and that current behavior and limitations are just that, current. So, the argument is that somehow, intuitively, they can't ever be truly intelligent or conscious because it's somehow intuitively obvious? I disagree with this argument; I don't think we have any real, scientific idea of what consciousness really is, nor do we have any way to differentiate "real" from "fake".
On the other end of the spectrum, I have seen humans with dementia not able to make sense of the world any more. Are they conscious? What about a dog, rabbit, cricket, bacterium? I am pretty sure at their own level, they certainly feel like they are alive and conscious. I don't have any real answers, but it certainly seems to be a spectrum, and holding on to some magical or esoteric differentiator, like emotions or feelings, seems like wishful thinking to me.
I'm becoming less sure of this over time. As AI becomes more capable, it might start being more comparable to smaller mammals or birds, and then larger ones. It's not a boolean function, but rather a sliding scale.
Despite starting out from very skeptical roots, over time ethology has found empirical evidence for some form of intelligence in more and more species.
I do think that this should also inform our ethics somewhat.
On a side note: it's been a pleasure reading through the debates with you, and possibly we can continue over mail!
Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).
First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is currently popular is that people uncritically adopt it after studying the natural sciences, as if those made some sort of ground-truth statements about the underlying reality.
The key thing to remember is that this is not a valid claim within the scope of the natural sciences; it belongs to philosophy more broadly (the branch often called metaphysics). It is not a useless claim, but within the framework of the natural sciences it’s unfalsifiable and not even wrong. Logically, from the scientific method’s standpoint, even if it were the other way around—something like monistic idealism, where the perception of time-space and the material world is the interface to (a map of) the conscious landscape, which is the territory and the cause—you would have no way of proving or disproving this, just like you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition and your feelings or ability to understand change as a result, that is pretty transparently an interaction between your mind and theirs, just with some extra steps, etc.)
The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the most elegant of the bunch, as it grants objective existence to the fewest arbitrary entities compared to the other two.
It’s not to say that there’s one truly correct map, but just to warn against mistakenly trying to make a statement about objective truth, actual nature of reality, with scientific method as cover. Natural sciences do not make claims of truth or objective reality, they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.
Second, while what the scientific method tries to build is a complete, formally correct and provable model of reality, there are arguments that such a model is impossible to create in principle. I.e., there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.
Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that such a system mimics human responses is not an indication of it feeling and understanding like a human does; it’s simply evidence that it works as designed. There can be a very different territory that causes similar measurable responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that the biological equivalent of valves is what causes understanding. All we have shown is that those entities measurably change at the same time as some measurable behavior, which isn’t a causal relationship.
I feel like I could make the same arguments about the Chinese Room, except my definition of "understanding" hinges on whether there's a tin of beans in the room or not. You can't tell from the outside, but that's the difference. Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.
Now you might then say "I don't care if there's a tin of beans in there, it doesn't matter or make any sort of difference for anything I want to do", in which case I'd totally agree with you.
> just like you cannot prove or disprove the claim that consciousness arises from chemical processes.
Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges. Without that, talking of a claim like this is pointless.
Not at all. The confusion you expressed in your original comment stems from that claim. If you want to overcome that confusion, we have to talk about that claim.
Your statement was that it’s unclear how a bunch of valves doesn’t understand but chemical processes do, and that maybe your intuition is wrong. Well, it appears that your intuition is to make this claim of causality: that some sort of object (e.g., valves or neurons), which you believe is part of objective reality, is what would have to cause understanding to exist.
So, I pointed out that assumption of such causality is not a provable claim, it is part of monistic materialism, which is a philosophical view, not scientific fact.
Further hinting at your tendency to assume monistic materialism is calling the systems “functionally identical”. It’s fairly evident that they are not functionally identical if one of them understands and the other doesn’t; it’s easy to make this mistake if you subconsciously already decide that understanding isn’t really a thing that exists (as many monistic materialists do).
> Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges.
Inability to define consciousness is fine, because logically circular definitions are difficult. However, lack of definition for the phenomenon is not the same thing as denying its objective existence.
You can escape the necessity to admit its existence by waving it away as an illusion or “not really” existing. Which is absolutely fine, as long as you recognize that it’s simply a workaround to not have to define things (if it’s an illusion, whom does it act on?), that conscious illusionism is just as unfalsifiable and unprovable as any other philosophical view about the nature of reality or consciousness, and that logically it’s quite ridiculous to dismiss as illusion literally the only thing that we empirically have direct unmediated access to.
> It's not mostly mimicking, it's exactly identical.
> Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.
If you constructed a system A that produces some output, and there is a system B, which you did not construct and whose inner workings you don't fully understand, which produces identical output but is also believed to produce other output that cannot be measured with current technology (a.k.a. feelings and understanding), you have two options: 1) say that if we cannot measure something today then it certainly doesn’t matter, doesn’t exist, etc., or 2) admit that system A could be a p-zombie.
Then you could tell the difference and the thought experiment is broken. The whole point is that outside observers can’t tell. Not that they’re too stupid, that there isn’t a way they could tell, no question they could ask.
> but is also believed to produce other output that cannot be measured with current technology
Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them? Come on.
I tried to explain that outside observers may not observe the entirety of what matters, whether due to current technical limitations or fundamental impossibility. In fact, to assume externally observed behaviour (e.g., of a human) is all that matters strikes me as a pretty fringe view.
> Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them
Perhaps you are trying to read too much into what the experiment itself is. I do not treat it as “Searle tried to tell us something this way”. If he wanted to say something more specific, he probably would have done so in the relevant works. The thought experiment, however, is very clear, describable in a paragraph, and open to possible interpretations, which is what we are doing now. That is the beauty of thought experiments like this.
> A bunch of valves, like an LLM, could mostly successfully mimic human responses,
The argument is not "mostly successfully", it's identically responding. The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.
> The argument is not "mostly successfully", it's identically responding.
This is a thought experiment. Thought experiments can involve things that may be impossible. For example, the Star Trek Transporter thought experiment involves an existence of a thing that instantly moves a living being: the point of the experiment is to give rise to a discussion about the nature of consciousness and identity.
That such a thing cannot exist is one possible resolution of the paradox. There may be a limitation we are not aware of.
Similarly, in Searle’s experiment, the system that identically responds might never exist, just like the transporter in all likelihood cannot exist.
> The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.
To a blind person, an orange and a dead mouse are impossible to distinguish between from 10 meters away. If you can’t distinguish between two things, it doesn’t mean the things are the same. Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist” (we will disagree) or you have to say “the systems can be different”.
The Chinese room is set up so that you cannot tell the difference from the outside. That’s the point of it.
> If you can’t distinguish between two things, it doesn’t mean the things are the same.
But it does mean that the differences between them are irrelevant to you by definition.
> Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist”
Unless you have a way they could be measured, where we merely lack the technology or skill, then your definitions are of things that may as well not exist, because you cannot define them. They are vague words you use, and that's fine if you accept that you have three major categories: “yes, and here’s why”, “no, and here’s why”, and “no idea”. I am happy saying I’m conscious and the pillow next to me is not. I don’t have a definition clear enough to say yes/no if the pillow were arguing with me.
So what’s the physical cause for consciousness and understanding that is not computable? If for example you took the hypothesis that “consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction” [1], then you can see a series of physical requirements for consciousness and understanding that forces all conscious beings onto: 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That’s something you could not do merely by simulating certain brain processes in a closed system.
[1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements that play into some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744
The same goes for "What Is It Like to Be a Bat?" by Thomas Nagel — one of the most cited essays in the philosophy of mind. I had heard numerous references to it and finally expected to read an insightful masterpiece. Yet it turned out to be slightly tautological: that to experience, you need to be. Personally, I think the word be is a philosopher’s snake oil, or a "lockpick word" — it can be used anywhere, but remains fuzzy even in its intended use; vide E-Prime, an attempt to write English without "be": https://en.wikipedia.org/wiki/E-Prime.
In "both" (probably more, but referencing the two most high-profile cases - Eugene and the LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology?' etc. - and the participants not only made no effort to make their humanity clear, but were often actively adversarial, obviously and intentionally answering illogically, inappropriately, or 'computery' to such simple questions. For instance, here is dialog from a human in one of the tests:
----
[16:31:08] Judge: don't you thing the imitation game was more interesting before Turing got to it?
[16:32:03] Entity: I don't know. That was a long time ago.
[16:33:32] Judge: so you need to guess if I am male or female
[16:34:21] Entity: you have to be male or female
[16:34:34] Judge: or computer
----
And the tests are typically time constrained by woefully poor typing skills (is this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of just several words each. The above snip was a complete interaction, so you get 2 responses from a human trying to trick the judge into deciding he's a computer. And obviously a judge determining that the above was probably a computer says absolutely nothing about the quality of responses from the computer - instead it's some weird anti-Turing Test where humans successfully act like a [bad] computer, ruining the entire point of the test.
The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that. I suspect in a true run of the Turing Test we're still nowhere even remotely close to passing it.
So I'd say we're at least "remotely close", which is sufficient for me to reconsider Searle.
I think if you are having to accuse the humans of woeful typing and being smartphone-gen fools, you are kind of scoring one for the LLM. In the Turing test they were only supposed to match an average human.
The LLM Turing Test was particularly abysmal. They used college students doing it for credit, actively filtered the users to ensure people had no clue what was going on, intentionally framed it as a conversation instead of a pointed interrogation, and then had a bot whose prompt was basically 'act stupid, ask questions, usually use fewer than 5 words', and the kids were screwing around most of the time. For instance here is a complete interrogation from that experiment (against a bot):
- hi
- heyy what's up
- hru
- I'm good, just tired lol. hbu?
The 'ask questions' was a reasonable way of breaking the test because it made interrogators who had no clue what they were doing waste all of their time, so there were often 0 meaningful questions or answers in any given interrogation. In any case I think that scores significantly above 50% are a clear indicator of humans screwing around or some other 'quirk' in the experiment, because, White Zombie notwithstanding, one cannot be more human than human.
This is ex-post-facto denial and cope. The Turing Test isn't a test between computers and the idealized human, it's a test between functional computers and functional humans. If the average human performs like the above, then well, I guess the logical conclusion is that computers are already better "humans (idealized)" than humans.
Appealing to the Turing test suggests a misunderstanding of Searle's arguments. It doesn't matter how well computational methods can simulate the appearance of intelligence. What matters is whether we are dealing with intelligence. Since semantics/intentionality is what is most essential to intelligence, and computation as defined by computer science is a purely abstract syntactic process, it follows that intelligence is not essentially computational.
> It's very close to the Chinese Room, which I had always dismissed as misleading.
Why is it misleading? And how would LLMs change anything? Nothing essential has changed. All LLMs introduce is scale.
From my experience with him, he'd heard (and had a response to) nearly any objection you could imagine. He might've had fun playing with LLMs, but I doubt he'd have found them philosophically interesting in any way.
Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and it is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.
There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
> You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality.
This is what makes no sense, as I am not merely posing arbitrary definitions, but identifying characteristic features of human intelligence. Do you deny semantics and intentionality are features of the human mind?
> There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
I have no idea what this means. The point is that computation as we understand it in computer science is purely syntactic (this was also Searle's argument). Indeed, it is modeled on the mechanical operations human computers used to perform without understanding. This property is precisely what makes computation - thus understood - mechanizable. Because it is purely syntactic and an entirely abstract model, two things follow:
1. Computation is not an objectively real phenomenon that computers are performing. Rather, physical devices are used to simulate computation. Searle calls computation "observer relative". There is nothing special about electronics, as we can simulate computation using wooden gears that operate mechanically or water flow or whatever. But human intelligence is objectively real and exists concretely, and so it cannot be a matter of mere simulation or something merely abstract (it is incoherent and self-refuting to deny this for what should be obvious reasons).
2. Because intentionality and the capacity for semantics are features of human intelligence, and computation is purely syntactic, there is no room in computation for intelligence. It is an entirely wrong basis for understanding intelligence and in a categorical sense. It's like trying to find out what arrangement of LEGO bricks can produce the number π. Syntax has no "aboutness" as that is the province of intentionality and semantics. To deny this is to deny that human beings are intelligent, which would render the question of intelligence meaningless and frankly mystifying.
I deny they are anything more than computation. So your original argument was begging the question, and thus logically unsound.
> The point is that computation as we understand it in computer science is purely syntactic
Then the brain is also purely syntactic unless you can demonstrate that the brain carries out operations that exceeds the Turing computable, because unless that is the case the brain and a digital computer are computationally equivalent.
As long as your argument does not address this fundamental issue, you can talk about "aboutness" or whatever else you want all day long - it will have no relevance.
(And if anything is question begging - you didn't demonstrate what was question begging in my post - it's your amateurish reductionism and denial of the evidence.)
No.
I could jam a yardstick into the ground and tell you that it's now a sundial calculating the time of day. Is this really, objectively true? Of course not. It's true to me, because I deem it so, but this is not a fact of the universe. If I drop dead, all meaning attributed to this yardstick is lost.
Now, thoughts. At the moment I'm visualizing a banana. This is objectively true: in my mind's eye, there it is. I'm not shuffling symbols around. I'm not pondering the abstract notion of bananas, I'm experiencing the concretion of one specific imaginary banana. There is no "depends on how you look at it." There's nothing to debate.
> There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.
None of what you wrote is remotely relevant to what I wrote.
> There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.
We "transmute" syntax into semantics every time we interpret a given syntax as having semantics.
There is no inherent semantics. Semantics is a function of the meaning we assign to a given syntax.
The history of the brain-computer equation idea is fascinating and incredibly shaky. Basically, a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification, everyone just ran with it anyway, and very few people (Searle is one of those few) have actually challenged it.
And something that often happens whenever some phenomenon falls under scientific investigation, like mechanical force or hydraulics or electricity or quantum mechanics or whatever.
Whooha! If it's not physical what is it? How does something that's not physical interact with the universe and how does the universe interact with it? Where does the energy come from and go? Why would that process not be a physical process like any other?
Where we haven't made any headway is on the connection between that and subjective experience/qualia. I feel like many of the (to my mind) strange conclusions of the Chinese Room are about that and not really about "pure" cognition.
To be fair to Searle, I don't think he advanced this as an argument, but more as an illustration of his belief that thinking was indeed a physical process specific to brains.
¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
https://plato.stanford.edu/entries/consciousness-intentional...
Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects while also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.
To me it seems highly likely that our knowledge of physics is more than sufficient for simulating the brain, what is lacking is knowledge of biology and the computational power.
Ok - I get that bit. I have always thought that physics is a description of the universe as observed and of course the description could be misleading in some way.
> the methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects
Can you share the aspects of reality that physics ignores? What parts of reality are unquantifiable and not structural?
Here's an article you might enjoy [0].
[0] http://edwardfeser.blogspot.com/2022/05/the-hollow-universe-...
And what's a few orders of magnitudes in implementation efficiency among philosophers?
Efforts to reproduce a human brain in a computer are currently at the level of a cargo cult: we're simulating the mechanical operations, without a deep understanding of the underlying processes which are just as important. I'm not saying we won't get better at it, but so far we're nowhere near producing a brain in a computer.
Unless you can demonstrate that the human brain can compute a function - any function - that exceeds the Turing computable, there is no evidence to even suggest it is possible for a brain not to be computationally equivalent to a computer.
So while it may well be that we will need new architectures - maybe both software and hardware - it seems highly unlikely that we won't be able to.
There's also as of yet no basis for presuming we need to be "completely accurate" or need to model the physical effects with much precision. If anything, what we've seen consistently over decades of AI research is that we've gotten far better results by ditching the idea that we need to know and model how brains work, and instead statistically modelling outputs.
I don't understand, could you explain what you mean?
I looked up enclitic - it seems to mean the shortening of a word by emphasizing another word. I can't understand why this would apply to the judgements of an intermediary.
This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e. sleep) all the time.
Of course, there are also processes that are not expressible as computations, but those I know about seem very, very distant from human thought, and it seems very improbable that they could be implemented with a brain. I also think that these have not been observed in our universe so far.
0. https://www.academia.edu/30805094/The_Success_and_Failure_of...
When I studied in Ulaanbaatar some twenty years ago, I met a Romanian professor of linguistics who had prepared by trying to learn Mongolian from books. He quickly concluded that his knowledge of Russian and Cyrillic, and having read his books, didn't actually give him a leg up on the rest of us, and that pronunciation and rhythm, as well as more subtle aspects of the language like humour and irony, hadn't been appropriately transferred through the texts he'd read.
Rules might give you some grasp of a language, but breaking them with style and elegance without losing the audience is the sign of a true master and only possible by having a foundation in shared, embodied experience.
There's a crude joke in that Searle left academia disgraced the way he did.
There's no unique way to implement a computation, and there's no single way to interpret what computation is even happening in a given system. The notion of what some physical system is computing always requires an interpretation on part of the observer of said system.
You could implement a simulation of the human body on common x86-64 hardware, water pistons, or a fleet of spaceships exchanging sticky notes between colonies in different parts of the galaxy.
None of these scenarios physically resemble each other, yet a human can draw a functional equivalence by interpreting them in a particular way. If consciousness is a result of functional equivalence to some known conscious standard (i.e. alive human being), then there is nothing materially grounding it, other than the possibility of being interpreted in a particular way. Random events in nature, without any human intercession, could be construed as a veritable moment of understanding French or feeling heartbreak, on the basis of being able to draw an equivalence to a computation surmised from a conscious standard.
When I think along these lines, it is easy to sympathize with the criticism of functionalism a la Chinese Room.
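To make the multiple-realizability point concrete, here is a minimal toy sketch (my own example, not Searle's): the "same" computation done once with ordinary arithmetic and once by blind symbol-shuffling over a rule table, where the equivalence only appears once an observer supplies an interpretation (the encode/decode step).

    # Toy example: one abstract computation, two physically unlike realizations.
    # Realization A: ordinary integer arithmetic.
    def add_a(x, y):
        return x + y

    # Realization B: blind symbol-shuffling. The rule table maps marks to marks;
    # nothing in it "knows" these marks stand for binary digits.
    RULES = {  # (mark, mark, carry mark) -> (new carry mark, output mark)
        ("0", "0", "0"): ("0", "0"), ("0", "1", "0"): ("0", "1"),
        ("1", "0", "0"): ("0", "1"), ("1", "1", "0"): ("1", "0"),
        ("0", "0", "1"): ("0", "1"), ("0", "1", "1"): ("1", "0"),
        ("1", "0", "1"): ("1", "0"), ("1", "1", "1"): ("1", "1"),
    }

    def add_b(x_marks, y_marks):
        width = max(len(x_marks), len(y_marks))
        x_marks, y_marks = x_marks.zfill(width), y_marks.zfill(width)
        out, carry = [], "0"
        for a, b in zip(reversed(x_marks), reversed(y_marks)):
            carry, bit = RULES[(a, b, carry)]
            out.append(bit)
        if carry == "1":
            out.append("1")
        return "".join(reversed(out))

    # The interpretation lives entirely in the observer's encode/decode step.
    encode = lambda n: bin(n)[2:]
    decode = lambda marks: int(marks, 2)

    assert add_a(19, 23) == decode(add_b(encode(19), encode(23)))  # both "compute" 42

Nothing in realization B physically resembles realization A; the equivalence exists only relative to the encoding an observer chooses to apply to the marks.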
Rest in peace.
I also didn't love the "observer-relative" vs. "observer-independent" terminology. The concepts seem to map pretty closely to "objective" vs. "subjective" and I feel like he might've confused fewer people if he'd used them instead (unless there's some crucial distinction that I'm missing). Then again, it might've ended up confusing things even more when we get to the ontology of consciousness (which exists objectively, but is experienced subjectively), so maybe it was the right move.
Most concisely: could we ask, "What is it like to be Claude?" If there's no "what it's like," then there's no consciousness.
Otherwise yeah, agreed on LLMs.
You can be completely paralyzed and completely conscious.
Multimodal LLMs get input from cameras and text and generate output. They undergo reinforcement learning with some analogy to pain/pleasure and they express desires. I don't think they are conscious but I don't think they necessarily fail these proposed preconditions, unless you meant while they are suspended.
https://www.theguardian.com/world/2025/oct/05/john-searle-ob...
His most famous argument:
The human running around inside the room doing the translation work simply by looking up transformation rules in a huge rulebook may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim) computers might simulate consciousness, but will never be conscious.
But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just like the human in the Searle room doesn't know how to speak Chinese.
Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with the semantic content of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.
Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
There are two possibilities here. Either the Chinese room can produce the exact same output as some Chinese speaker would given a certain input, or it can't. If it can't, the whole thing is uninteresting, it simply means that the rules in the room are not sufficient and so the conclusion is trivial.
However, if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker.
Edit: note that here, when I say that the room can respond with the same output as a human Chinese speaker, that includes the ability for the room to refuse to answer a question, to berate the asker, to start musing about an old story or other non-sequiturs, to beg for more time with the asker, to start asking the asker for information, to gossip about previous askers, and so on. Basically the full range of language interactions, not just some LLM-style limited conversation. The only limitations in its responses would be related to the things it can't physically do - it couldn't talk about what it actually sees or hears, because it doesn't have eyes or ears, it couldn't truthfully say it's hungry, etc. It would be limited to the output of a blind, deaf, mute Chinese speaker confined to a room, whose skin is numb and who is being fed intravenously, etc.
Indeed. The crux of the debate is:
a) how many input and response pairs are needed to agree that the rule-provider plus the Chinese room operation is fundamentally equal/different to a Chinese speaker
b) what topics can we agree to exclude so that if point a can be passed with the given set of topics we can agree that 'the rule-provider plus the Chinese room operation' is fundamentally equal/different to a Chinese speaker
Sounds like circular logic to me unless you make that assumption explicit
That's not at all clear!
> Language, when conveyed between conscious individuals creates a shared model of the world. This can lead to visualizations, associations, emotions, creation of new memories because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
All of that is called into question by some LLM output. It's hard to understand how some of it could be produced without some emergent model of the world.
LLM output doesn't call that into question at all. Token production through a distance function in a high-dimensional vector-representation space of language tokens gets you a long way. It doesn't get you understanding.
I'll take Penrose's notion that consciousness is not computation any day.
I know that it doesn't feel like I am doing anything particularly algorithmic when I communicate, but I am not the homunculus inside me shuffling papers around, so how would I know?
Hopefully we have all experienced what genuine inspiration feels like, and we all know that experience. It sure as hell doesn't feel like a massively parallel search algorithm. If anything it probably feels like a bolt of lightning, out of the blue. But here's the thing. If the conscious loop inside your brain is something like the prefrontal cortex, which integrates and controls deeper processing systems outside of conscious reach, then that is exactly what we should expect a search algorithm to feel like. You -- that strange conscious loop I am talking to -- are doing the mapping (framing the problem) and the reducing (recognizing the solution), but not the actual function application and lower level analysis that generated candidate solutions. It feels like something out of the blue, hardly sought for, which fits all the search requirements. Genuine inspiration.
But that's just what it feels like from the inside, to be that recognizing agent that is merely responding to data being fed up to it from the mess of neural connections we call the brain.
You can take this insight a step further, and recognize that many of the things that seem intuitively "obvious" are actually artifacts of how our thinking brains are constructed. The Chinese room and the above comment about inspiration are only examples.
I cannot emphasize enough how much I dislike linking to LessWrong, and to Yudkowsky in particular, but I first picked up on this from an article there, and credit should be given where credit is due: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-alg...
By the way, the far more impactful application of this principle is as a solution (imho) to the problem of free will.
Most people intuitively hold that free will is incompatible with determinism, because making a choice feels unconstrained. Taken in the extreme, this leads to Penrose and others looking for quantum randomness to save their models of the mind from the Newtonian clockwork universe.
But we should have some unease with this, because choices being a random roll of the dice doesn’t sit right either. When we make decisions, we do so for reasons. We justify the choices we make. This is because so-called “free will” is just what a deterministic decision making process feels like from the inside.
Philosophically this is called the “compatibilist” position, but I object to that term. It’s not that free will is merely compatible with determinism—it requires it! In a totally random universe you wouldn’t be able to experience the qualia of making a free choice.
To experience a “free choice” you need to be presented with alternatives, weigh the pro and con factors of each, and then make a decision based on that info. From the outside this is a fully deterministic process. From the inside, though, some of the decision-making criteria are outside of conscious review, so it doesn’t feel like a deterministic decision. Weighing all the options and then going with your gut in picking a winner feels like unconstrained choice. But why did your gut make you choose the way you did? Because your “gut” here is an unconscious but nevertheless deterministic neural-net evaluation of the options against your core principles and preferences.
“Free will” is just what a deterministic application of decision theory feels like from the inside.
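For what it's worth, here is a deliberately crude sketch of that picture (the option names and weights are invented for illustration): the "choice" is a pure, deterministic function of the options and the agent's preferences, even though part of the evaluation (the "gut") isn't open to conscious inspection.

    # Toy model: a deterministic decision procedure with a hidden component.
    CONSCIOUS_WEIGHTS = {"salary": 0.4, "commute": 0.2}  # reasons we can articulate
    GUT_WEIGHTS = {"salary": 0.1, "commute": 0.3}         # preferences we can't introspect

    def choose(options):
        def score(features):
            explicit = sum(CONSCIOUS_WEIGHTS[k] * v for k, v in features.items())
            gut = sum(GUT_WEIGHTS[k] * v for k, v in features.items())
            return explicit + gut
        # Same inputs and same weights always give the same answer: deterministic.
        return max(options, key=lambda name: score(options[name]))

    jobs = {
        "startup":  {"salary": 0.6, "commute": 0.9},
        "big_corp": {"salary": 0.9, "commute": 0.3},
    }
    print(choose(jobs))  # always "startup"

In this toy case the articulable reasons actually tie (0.42 each); the hidden weights break the tie, which from the inside would feel like "going with your gut" rather than like running a fixed evaluation.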
The success of LLMs imitating human speech patterns, often better than most people (ask an LLM to write a poem about some topic in a certain style and it will do better than 99% of people and do it faster than 100% of people) is pretty impressive. "But it is just a thought-free statistical model, unlike people". I agree it is a thought-free statistical model.
But most of the things we all say in conversation are of the same quality. 99% of the time in conversation, words tumble out of my mouth and I learn what I said when I hear my words in the same moment my conversation partner does. How is that any different from how today's LLMs behave? Is such dialog any more thoughtful than what LLMs produce?
The problem with the people who buy Searle's argument is they don't really think through the magnitude of what would really be required to pull it off. It wouldn't just be a static book, or a wall full of encyclopedias. It would have to be a stateful system that modifies that state and deduces new rules that affect future transformations as flexibly as the human mind does. To me it is clear that such a system really does think in the same way that humans do, no dualism required.
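A minimal sketch of what I mean by stateful (a toy of my own, assuming nothing beyond a lookup table): even the most trivial version has to carry history forward and mint new rules that change future answers, which a fixed book can't do.

    # Toy illustration: the "rulebook" updates itself after every exchange.
    class Room:
        def __init__(self, base_rules):
            self.rules = dict(base_rules)  # the initial "book"
            self.history = []              # state a static book doesn't have

        def respond(self, symbols):
            self.history.append(symbols)
            answer = self.rules.get(symbols, "???")
            # Derive a new rule from this exchange: repeating the same question
            # later produces a different, history-dependent answer.
            self.rules[symbols] = answer + " (you already asked that)"
            return answer

    room = Room({"你好": "你好!"})
    print(room.respond("你好"))  # -> 你好!
    print(room.respond("你好"))  # -> 你好! (you already asked that)

Obviously this is nowhere near the flexibility of a mind, but it shows the categorical difference between a static lookup and a system whose rules are themselves a function of its past.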
Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it’s not conscious?
It's implied, since they enable someone who does not know Chinese to respond equally well to questions as someone with Chinese as a native language.
I'm not even sure what you are asking for, tbh, so any answer is fine.
Wiki
I'm very certain that issues of justice are complicated, that allegations of misconduct are not always correct, and that allegations in and of themselves must not be immediately treated as substantiated; yet surely, if it is justice we are interested in, we must be careful to ensure our fact-seeking methods do not unduly rely on the testimonies of those accused to the detriment of all other lines of inquiry.
I understand that in McGinn's case actual documents of the harassment are available, and I think that if some academics believe they need to push back against allegations of sexual harassment they consider wrongful, a person with documented harassment is profoundly inappropriate to spearhead that.
https://www.insidehighered.com/quicktakes/2017/04/10/earlier...