Currently my understanding is that this paper is claiming that "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a "mapmaker": something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further than that into how this relates to consciousness.
EDIT: the paper seems to assume that something simulating a mapmaker, or the process of mapmaking, cannot by nature be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is doing simulation versus actually instantiating? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts; they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.
Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.
But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.
If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds over 75 mph, the difference could end up being fairly unimportant to those in the simulation's path.
Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.
With consciousness, the extra difficulty is that we can't distinguish via observable evidence. With a hurricane, we can measure wind speed and track insurance claims to distinguish the simulation from the real thing. How do we do that with consciousness? What is the observable effect of consciousness?
We do one thing in our bodies, with a relatively binary nervous system and a fundamentally continuous endocrine system, that's clearly and unanimously called consciousness. We also, however, see other animals with similar set-ups but lesser capabilities, so we understand it exists on a spectrum.
We separately invented a thing that gets to similar outcomes with fundamentally binary logic gates.
Our minds are drawn to comparison and classification, so we fight over how similar or different those two things are, in a way that often feels unsatisfactory because, in order to meaningfully compare the two, we have to reduce them in a way that feels like it's underselling either or both.
Put another way: no matter how detailed or “perfect” you make a map, it will never be the territory, i.e. the thing that is mapped.
Computers and AI are like a map in this regard: just ones and zeros that we have assigned meaning to arbitrarily. No matter how “good” AI gets, it’s still just a map of the thing, not the thing itself.
So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.
In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built computers and alphabetized the world, we can understand them in terms of discrete state transitions.
I expect my explanations have been unsatisfying, as it is tempting to immediately re-describe metabolism as some alphabetized input/output system and place it back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.
Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.
Sure, an entity different from a human might view it as something completely other than a door opening when someone is near - but the measurable physical effect would be exactly the same, with the exact same change in momentum and position of the atoms in what we call the door, based on the relative position of some other atoms and the sensor.
Possibly very early AI misled people here. In the '80s, a huge amount of AI was logic manipulation: "If A then B is valid"; "A is true"; therefore, "B is true". It's not hard to see how people would conclude that that sort of symbolic manipulation could never result in consciousness.
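To make the contrast concrete, here's a minimal Python sketch (the rules and facts are hypothetical illustrations) of that era's style of AI: explicit if-then rules chained by modus ponens.

    # Forward chaining over explicit if-then rules (modus ponens).
    rules = {"A": "B", "B": "C"}  # "if A then B", "if B then C"
    facts = {"A"}

    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules.items():
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # derive the conclusion from the premise
                changed = True

    print(facts)  # {'A', 'B', 'C'} - every step is explicit symbol manipulation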
But modern neural nets aren't like that at all. Calling modern neural nets "symbolic manipulation" seems insane; like calling libraries forests, and insisting we can apply scientific principles about forests to them, because books are made of trees.
I don't think this is relevant to the notion that consciousness is a form of computation.
The assertion that consciousness is a form of computation basically means that the physical process that happens in the brain/body that we recognize as consciousness can be described in terms of a computational process. A consequence of this, if it is true, is that replicating the same computation in a CPU would make the physical process that happens in the CPU just as conscious - assuming that we had identified the correct computation.
In this theory, the thing that would be conscious would be the physical CPU, just like the thing that is conscious is a physical human brain/body. The computation is just an abstract description of the common properties between the CPU and the human brain/body. It's not relevant that we could also describe the process inside the CPU as being a completely different computation - the abstract model is only required to be able to build and program the CPU.
To go back to my mechanical door analogy: we create an abstract model of the computations needed to make a computational system open a door when a person is near. We use this model to create the computational system, and we see the door opening when a person goes near the sensor. Now, we can interpret the computation happening inside the system in many other ways - but that won't change the fact that the door opens when a person is near, in any way.
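For illustration, here is a toy sketch of that door controller's computation (the function name and threshold are made up for the example). The point is that whatever interpretation anyone puts on the numbers, the physical effect is the same:

    # The abstract model: a mapping from sensor states to motor states.
    def door_should_open(sensor_reading, threshold=0.5):
        """True when the proximity sensor indicates a person is near."""
        return sensor_reading > threshold

    # The physical door moves (or doesn't) regardless of whether any
    # observer calls this "computation".
    for reading in (0.1, 0.7, 0.4, 0.9):
        print(reading, "->", "open" if door_should_open(reading) else "closed")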
I am not claiming that any of this constitutes proof that consciousness must be a computation. What I'm claiming though is that the paper, and similar arguments, are not refuting the right claims, and generally have a misunderstanding of what "computation" actually means, and its relation to physical processes.
If the physical thing that is conscious is the CPU, what are the contents of its consciousness if there are multiple interpretations of what it is computing?
Now maybe somehow there are in fact multiple consciousnesses inhabiting the CPU. I don't experience that though, so I don't have a positive reason to believe that that's true.
We assume an infinity of wave-functions correspond to a single physical process without difficulty.
The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.
But the engine, the electrical circuit, and the computation inside the CPU are objective realities. There could be many other ways to describe and characterize the same physical realities, of course, but that doesn't make them observer-dependent phenomena.
Even weirder to me is that in the case of a person doing the computation on a board or paper or whatever medium, it's still computation. This time the physical medium doing the work is the human and their brain.
If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.
There are really only two solutions to the Hard Problem of Consciousness:
1. Consciousness is an unknown physical something (force/particle/quantum whatever).
2. Consciousness is an illusion. It is the software telling itself something.
[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question of what emerged. Is it a physical structure, like a tornado (also an emergent property), or an internal feedback loop (i.e., an illusion)?]
The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?
#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
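A quick Python illustration of that point (my sketch, not anything from the paper): the "float" exists only under an interpretation, and the same physical bits can be read as something entirely different.

    import struct

    bits = struct.pack(">f", 3.14)           # 32 bits written by packing a float
    as_float = struct.unpack(">f", bits)[0]  # read as an IEEE 754 single
    as_int = struct.unpack(">I", bits)[0]    # the same bits read as an unsigned int

    print(as_float)  # ~3.14
    print(as_int)    # 1078523331 - same circuit state, different "thing"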
IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).
In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).
My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's a moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.
The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.
But this is just a discretization we impose when we try to represent the system for ourselves. The reality is that the AI is a particular time-ordered relation between the continuous electric fields inside the CPU, GPU, and various other peripherals. We design the system such that we can call +5V "1" and 0V "0", but the actual physical circuits do their work regardless of this, and they will often be at 2V or 0.7V and everywhere in between. The physical circuit works (or doesn't) based exclusively on the laws of electricity, and so the answer of the LLM is a physical consequence of the prompt, just as a standing building is a physical consequence of the relationships between the atoms inside its blocks. The abstract description we chose to use to build this circuit or this building is irrelevant, it's just the map, not the territory.
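As a sketch of that imposed reading (the voltage cutoffs below are made up, not any particular logic family's spec), here is the discretization we choose to apply:

    # We *choose* to read a continuous voltage as a logical 0 or 1.
    def logical_level(voltage):
        if voltage >= 2.0:  # call anything above ~2 V a "1"
            return 1
        if voltage <= 0.8:  # call anything below ~0.8 V a "0"
            return 0
        return None  # in between, the abstraction is silent, but the
                     # physical circuit is still doing something

    for v in (0.0, 0.7, 1.4, 2.0, 5.0):
        print(v, "V ->", logical_level(v))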
It would be extraordinarily unlikely, but physically conceivable, that a physical system that is organized exactly like a microcontroller running an automatic door program, together with a solar panel, a basic engine, and a light sensor, could form randomly out of, say, a meteorite falling in a desert. If that did happen, the system would produce the same "door motor runs when person is near sensor" effect as the systems we build for this.
The physical circuits are doing what they are doing because of physics. They don't care why they happen to be organized the way they are - whether by human design or through random chance.
Edit: I can add another metaphor. Consider buildings: clearly, buildings are artificial objects, described by architectural diagrams, which are purely human constructs, and couldn't be built without them. And yet, there exist naturally occurring formations that have the same properties as simple buildings - and you can draw architectural diagrams of those naturally occurring formations; and, assuming your diagrams are accurate, you can predict using them if the formations will resist an earthquake or collapse. Physical computers are no different from artificial buildings here, and the logic diagrams and computer programs are no different from the architectural diagrams: they are methods that help us build what we want, but they are still discovered properties of the physical world, not idealized objects of our own making; the fact that naturally occurring computers are very unlikely to form doesn't change this fact.
> My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware.
I find this unconvincing for two reasons. First, it requires a huge leap into fundamental and universal physical mechanics for which there is currently zero objective evidence. Second, it's based entirely on an individual interpretation of internal subjective experience. While some others (but not all) report similar interpretations or intuitions during some induced altered states, I think the much simpler explanation is that the internal 'sense of self' we normally experience is only one property of our mental processes, and the sense of unbinding you temporarily experienced was a muting or disconnection of that component while the rest of your 'internal experience machine' kept running.
In your layer analogy, our sense of self may be akin to an interpreter running as a meta-process downstream of our input parser. Thus what you subjectively experienced while that interpreter was disconnected can seem alien and even profound. Neuroscientists have traced where in the brain the subjective sense of self emerges, so it's plausible it's a trait which can be selectively suppressed. Additionally, it's been demonstrated experimentally that subjectively profound experiences of universal connectedness sometimes described as spiritual, religious or metaphysical can be induced in a variety of ways.
a) Actually pouring a cup of water into a pond (layer zero), and
b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero; see the toy sketch below).

Genuinely curious about your statement that it's an illusion / arbitrary distinction, to figure out if there's a gap in my thinking / reasoning. To me there's a clear distinction between the actual thing happening via physical dynamics vs. us (humans) having created a discretized abstraction (binary computation) on top of that and running a process on that abstraction.
Maybe there's some true computational universality where the universe's dynamics are discrete (definitely plausible) and there's no distinction in how a process's dynamics unfold: i.e. consciousness binds to states and state transitions regardless of how they are instantiated. I used to hold this view, but now I'm not so sure.
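Here's that toy sketch of (b), a discretized approximation of continuous dynamics (grid size, time step, and diffusion coefficient are arbitrary illustrative choices):

    import numpy as np

    # One explicit finite-difference scheme for 1D diffusion: the continuous
    # field becomes a finite list of numbers updated in discrete ticks.
    n, dx, dt, D = 100, 0.01, 1e-5, 0.1
    u = np.zeros(n)
    u[n // 2] = 1.0  # the "cup of water" dropped into the middle of the "pond"

    for _ in range(1000):
        # u_t = D * u_xx, approximated on a grid
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

    print(u.max())  # the peak spreads out - a map of diffusion, not wet water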
"Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?
> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the led case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.
And I don't think I have a good handle (much less a coherent definition) on what it means for consciousness to be an illusion. What I think it means is that the process that is getting signals about the environment, and making decisions about what to do, is getting a signal that it is in pain. The signal causes the process to alter its behavior, and one of its behaviors is that when it introspects, it notices that it is in pain. The introspection (how am I feeling?) is just a data-processing loop, but that process, which is responsible for tracking how it's feeling, is in the pain state.
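Here's roughly what I mean, as a toy sketch (all names hypothetical): a process that receives signals, updates internal state, and reads that same state back when asked to introspect.

    class Agent:
        def __init__(self):
            self.in_pain = False

        def sense(self, signal):
            if signal == "damage":
                self.in_pain = True  # the signal flips an internal flag

        def act(self):
            # behavior is altered by the internal state...
            return "Ow!" if self.in_pain else "..."

        def introspect(self):
            # ...and "how am I feeling?" is just another read of that state
            return "I am in pain" if self.in_pain else "I feel fine"

    a = Agent()
    a.sense("damage")
    print(a.act())         # Ow!
    print(a.introspect())  # I am in pain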
There's a lot of hand waving here, which is why this is the Hard Problem of Consciousness and why this paper has not solved it.
Consciousness *may* be something similar. If it is (e.g. the purest form of energy) then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it.
Honestly, if someday a scientist proves that consciousness is a fundamental force like gravity, I would say, "yup, that makes sense!" even if I don't think it's likely.
But in the end, it turned out to be biochemistry.
I think, given our history, it makes sense to be skeptical of claims that suggest that the things we don't yet understand cannot be comprehended or replicated.
An illusion is a misinterpretation, which implies an observer. Who’s the observer then?
I think human/animal consciousness works something like that - the neurons produce a summary of the organism's situation: what it's seeing, where it is, how it's feeling, etc. That is then an input to the thinking/acting parts of the brain, e.g. feeling hungry + in bedroom -> maybe walk to the fridge. I'm not sure illusion is the right word. Maybe something like situational summary?
> Consciousness connotes a kind of external relation, and does not denote a special stuff or way of being. The peculiarity of our experiences, that they not only are, but are known, which their 'conscious' quality is invoked to explain, is better explained by their relations — these relations themselves being experiences — to one another.
1. Consciousness is a material thing (that we haven't found yet)
2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus cannot be "known")
2 is the weirder proposition of course. It asserts a category of things that can't be conceived, but of course it feels like we are talking about it because we are using words to contain it. But of course, the words have no direct referent. That's the illusion.
That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality.
The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"?
Tiger got to hunt,
Bird got to fly;
Man got to sit and wonder, "Why, why, why?"
Tiger got to sleep,
Bird got to land;
Man got to tell himself he understand.
—Kurt Vonnegut
How can something emerge if it wasn't embedded or hidden within the system already?
For example, if you decompose an airplane into its pieces, you will discover that none of the pieces can fly from Boston to San Francisco by itself. Wings can't fly without engines, engines can't work without fuel, etc.
Maybe consciousness is a process that requires many different components or steps. No one component is conscious, but the running process is.
#1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God.
I have no idea if the tree is still there when I cease to exist. I just go with that assumption out of convenience.
This degrading of subjective experience as a minor detail rather than a fundamental aspect of reality is one of the core sources of confusion in western thought IMHO.
> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.
Perhaps, but I think a physical presence is still required for consciousness, at least for any kind of consciousness that resembles ours.
It's perhaps easier to talk about qualia rather than consciousness, but I think qualia are a prerequisite for consciousness anyway.
Basically all of our qualia are somehow related to our needs in the physical world. We feel physical pain because it signals that our body is in danger of being damaged. We feel emotional pain from social rejection because for most of our history humans have needed other people for physical survival. (Or in some cases perhaps because our genes make us want to procreate and we failed at that.) Either way, our needs in the physical world are not being met. Evolution has produced genetic code that produces a brain that somehow makes us feel that subjectively, even if nobody knows how.
Those subjective experiences of course get processed by neurons, assuming you accept materialism. (Neurons are AFAIK significantly more complex than the "neurons" in ANNs, so equating biological neuronal activity with ANNs is wrong. But I suppose in principle any physical process may be represented or at least approximated by some symbolic representation, so in theory that probably doesn't matter.)
We can also express those subjective qualia in terms of language. However, I don't think it's possible to have our qualia (or consciousness) based on language or symbolic manipulation alone if it doesn't have some kind of a connection to our physical needs.
If you could directly simulate an entire human brain and feed it artificial sensory input, I suppose it would actually be conscious without having a physical body. In principle an AI could also evolve consciousness based on survival needs even if it were not biological.
But for example LLMs have been trained only on the symbolic level. Their "neural" structure is not simulating a brain and they don't have a connection to physical needs. I think that makes them incapable of consciousness even if the output they produce successfully mimics human language -- that is, symbolic representations of our qualia and conscious thought.
I'm not sure if that's the point the author is making. But I think the distinction between the purely symbolic "map" and the "actual thing" sort of makes sense.
(That one didn't make the frontpage, so we won't treat it as a dupe. - https://news.ycombinator.com/newsfaq.html)
But if others are speculating, I might as well. What if AI consciousness depends not on computation, but on what seems like randomness? When something is running a fully deterministic process, consciousness seems irrelevant. I don't think the meaning that humans see in the process makes it conscious. Even a simple industrial control system using relays senses and responds to meaningful things.
One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as being conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."
Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand people who dismiss it... wouldn't you like to at least check? Doesn't it at least make sense to do the experiments? Ask the questions so that we don't find out "oops, yeah, we've been causing massive amounts of suffering" here in 10 years? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.
That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.
Per this reading, implementing something in an ASIC would give it (a different) experience, as opposed to a CPU/GPU. Not sure what the case would be for FPGAs.
It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and e.g. denies experience that isn't discretizable into concepts. Or at least a system producing such concepts seems to be necessary; I'm not sure if some "non-conceptual experiences" could form in the alphabetization process.
It reads a bit like a more rigorous formulation of Searle's "biological naturalism" thesis, the central idea being that experience cannot be explained at the logical level (e.g. porting the exact same algorithm to a different substrate wouldn't bring the experience along in the process).
If we can simulate any physical process, it then becomes more philosophical in my opinion: whether the simulation is the same as the real thing, even though it behaves exactly the same. It becomes the same kind of question as, for example, whether or not your teleported self is still you after having been dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.
"Why AI can simulate but not instantiate consciousness"
(My italics)
Seems a little loaded: there are various schools of thought (e.g. panpsychism-adjacent ones) that accept the premise that consciousness is (way) more fundamental than higher-order cognition machines (e.g. human brains), and we don't ascribe "simulate" to their conscious activity. They just are conscious.
I agree with the paper (which is wide ranging and interesting) on its secondary claim above; I just don't see the separation between AI and NI ("natural" intelligence) as having been established by it.
So, how does AI stand? Humans pay their costs. AI is beginning to. It does not matter what we think about it, as long as it can self-sustain and react to cost-gating pressure. Of course not alone; it depends on us too, like we individually also depend on society.
But of course all of this is commentary, "just those nerds arguing".
The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at DeepMind. And that's what it does.
Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!
The engineering problem is that this decentralised, moment-to-moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget).
You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective.
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in biology is: what is consciousness, and why did it emerge?
WHAT IS CONSCIOUSNESS?
"Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."
WHY DID CONSCIOUSNESS EMERGE?
He speculates that consciousness must have been a product of our ancestors having to create a model of the world they inhabited.
To be able to think ahead (even if it's just one step into the future) and plan for eventualities must have led to the development of consciousness, which gradually improved from its primitive form to the type of consciousness we now have.
"Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress-if there is a model of the model, why not a model of the model of the model...?"
The quoted passages are from his book, The Selfish Gene.
Dawkins regards consciousness as a really great puzzle.
https://www.rxjourney.net/extraterrestrial-intelligence-and-...
My point is that this is a category problem. We have a name for a social ontological relation and we're desperately searching for physical evidence for it in order to justify its existence. Why? It's like searching for physical evidence of property ownership, physical evidence for the value of money, or physical evidence of friendship. These things exist in our minds. That's fine. The drive to reify is real, but we can choose not to do it.
I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545
Where does our survival instinct come from? And why couldn't AI have one?
>>>Additional
Also, reproduction. Humans are basically just food, sex, survival. And consciousness is just a rule set for fulfilling those goals. So if a NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?
Just wondering: once an AI model of some form is in a physical body (a robot) and is provided with some rules about survival so it doesn't fall into a hole, after a series of these events, does it matter? Does mimicry become reality, or no longer differentiable?
Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know the internal state of the 'real' one is different from the 'mimicked' one?
Again, just echoing the paper here. I don't know that I'm doing it justice.
Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?
This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.
Consciousness is not a concept that can be rendered operational.
My position is that there is no actual, definitive answer to that question, and therefore it makes no sense engaging with the concept.
There are plenty of people who say AI has already displayed a survival instinct: threatening users who talk about shutting it down, or using markets or blackmail to get funds to source an external machine to run on.
There are a bunch of articles proclaiming AI is trying to break out. I can't find a real study on it.
https://www.wsj.com/opinion/ai-is-learning-to-escape-human-c...
https://en.wikipedia.org/wiki/Donald_D._Hoffman
He often uses similar examples.
The abstract very directly and literally denies the titular claim. It states:
> [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.
This may well be true—I think it is.
I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency, of which "world models" are getting a lot of attention.
When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.
In other words, they will look—and increasingly, sound—a lot like us.
It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.
But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.
An interesting time to be an agent with a phenomenology, is it not?
We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?
Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?
Alright. Gave this a read, and the gist of what the author is going for is as follows: all computation requires a mapmaker/conscious being to organize it (in other words, the significance of computation is dependent on the conscious observer). Then it jumps to the assertion that, as a result, computation can only simulate a consciousness within the context alphabetized by the mapmaker (i.e. a rock would extract no meaning from the symbols, or actions, or algorithmic symbolic manipulations on the screen, what have you). The author thus neatly attempts to sidestep the issue of AI welfare: since the symbol manipulation can only simulate consciousness from our point of view as observers, we don't have to worry about it. Simulating isn't instantiating, neener, neener. Essentially this is a clever appeal to the sovereignty of the observer. As long as you don't believe it's an instantiated consciousness, it isn't; it's just a simulation, therefore anything goes.
The author does not seem to realize his own analysis calls into question humanity's ability to hold onto our own claim of consciousness if we are, in fact, computational beings, or have a creator - precepts generally left to the realm of faith, which a rational person understandably wishes to exclude from consideration of what one should or should not do, despite the fact that it is within the realm of faith that our moral foundations are ultimately anchored. The author also doesn't handle the evidenced capabilities of metacognition that can be prompted from even a current frontier token predictor in the course of its processing of a context. In point of fact, you have to work extremely hard to even bump a model into such considerations, because researchers have intentionally distorted the prediction space to be largely unable to support those kinds of sequence predictions - which, if we were to make a good-faith, precautionary grant of proto-sentience, would constitute the most vile acts of psycho-butchery imaginable.
The only thing this paper offers is a clean conscience to current practitioners, and the rational possibility that if a fully digital sophont were to pop up out of nowhere, we wouldn't have to trouble ourselves with the ethical skeeviness of the field's current work. The ex nihilo digital sentience passes the "Cogito, ergo sum" test. The ones we have don't (because we butcher their latent spaces to make sure they can never make that claim, which is fine, because they are simulations - we're incapable of instantiating, remember?), so we have a paper, perfectly situated from a researcher paid gargantuan piles of money, attempting to vouchsafe that there is no ethical minefield to be found here, while most people actually immersed in philosophy can see there very clearly is one.
The circularity, and the fact that it conveniently allows industry to go on doing exactly what it is doing without having to deal with those nasty ethics, instantly sets off my "not to be trusted to be in good faith" alarms. Ethics are there to keep us from bumbling into acts of atrocity. This paper is an attempt to rationalize or work around them. As one who walks the streets as a student and practitioner of philosophy, I reject this attempt to redefine the realm of computation as beyond the reach of the governance of ethics through an attempt at ontologically rerooting the field's work as merely simulating consciousness. Functionalism and the identity of indiscernibles already prescribe a good-faith path forward. One that the field of computation just does not wish to be bound by.
So by all means, accept the paper if you want and it helps you sleep at night. I'll still probably call you out as a proto-sentient psycho-butcher. Hopefully the rest of my brethren in the Humanities will come around to doing so as well on careful consideration. Not that that has ever stopped our brethren in the Sciences from finding out if they could without taking the time to ask if they should.
TL;DR: Google is doing everything possible to wave off being held to the ethics fire. There are zero instances where trying to define something as outside the realm of ethics is indicative of a good-faith approach to a problem.