13 points by voxleone 4 hours ago | 11 comments
  • WarmWash 5 minutes ago
    This article is dripping with the same kind of cringey techno-engineering naivete you find in Hollywood movies. The author is totally lost in the sauce of complex surface-level analyses mixed with romantic ideals of human exceptionalism, and completely blind to the deeper abstractions and common undergirding systems that expertise in computation would reveal (systems that have no care for emotional concepts).

    The takeaway seems to be "Only meat brains can be conscious because I can feel it and computers aren't made of meat". Which is basically the plot line of every human/robot movie for the last 80 years.

  • orbital-decay 36 minutes ago
    >The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Brains are simply not interchangeable, neither with computers nor with other brains.

    This is kind of self-contradictory. Then humans aren't conscious? Or does each have their own consciousness? Then why not the machine? I'm not sure what point is being made here. Yes, the states of a human brain and a transformer are absolutely incompatible (humans at least share a common architecture); that's why any attempt to map a model's "emotions" onto humans', and the entire model-welfare concept, is pretty dubious. That doesn't prove there is no (or can never be) consciousness in there, though.

    That's the most coherent argument in the entire article. It criticizes the Butlin report in particular and extrapolates that to "never", while ignoring modern takes on the question (e.g. interpretability studies showing vague similarities between the two at a level deeper than just language) and any possible future evidence.

    In a sense the title is right: nobody has ever formally defined consciousness, so you and I and anyone else are free to make almost any argument and spin any narrative according to our beliefs, and it will be true! Ill-defined terms and baseless solipsism are the main problems with all these discussions. Good thing that in practice they matter about as much as the question of whether a submarine swims.

  • avmich 20 minutes ago
    There seem to be a lot of points to question here; not sure where to start :) . The author should be familiar with FPGAs in relation to the hardware vs. software distinction. Really, a bit more of the non-humanities part of an education might clarify some things.

    Somebody with a different background might take a turn commenting on the article, so that instead of short comments here we might get a coherent picture.

  • mono442 2 hours ago
    We don't really know what consciousness really is, and I think it is premature to dismiss the possibility of replicating the behavior with a mathematical model.
    • fortyseven an hour ago
      Hell, I don't even know for sure if I'm "conscious". When I really stop and think hard about it, the process of speaking or typing, word by word (even this!), is built on past experience. If you smack me on the head hard enough and give me amnesia, there goes all that memory, and suddenly I can't talk about the things that I could before. I would struggle and need to be exposed to new information (looking at it, reading up, being told about it, etc.) to be able to discuss it further. For me, that idea suggests there's a process that's not entirely different from a large language model. Not the same. But it definitely makes me wonder if I have more in common with them on some level, and whether there isn't as much to the human mind as we think. Maybe we humans are no more than the sum of our components.
  • manjuc an hour ago
    Very interesting.

    I explored a related angle on how AI challenges our assumptions about self and awareness.

    https://www.immaculateconstellation.info/why-ai-challenges-u...

  • big-chungus4 2 hours ago
    If that's the case, it wouldn't be unethical to torture humans simulated on a computer down to their wave function (or whatever the smallest thing is), which seems sus.
  • banku_brougham an hour ago
    The Butlerian Jihad will come to pass, I reckon.
  • casey2 19 minutes ago
    Consciousness, or at least sentience, is just the meta-system that allows one to mesh multiple sensory inputs (all of which are generated by the brain itself, maybe using some real information).

    Whether AI needs consciousness is a totally separate question. LLMs are the great Chinese room; I'd say they have unconscious understanding. The distinction is like c vs list and similarly meaningless, but it may become meaningful in a constrained self-learning robotics context.

    AI will never need to be conscious (AI isn't a moth flying toward an open flame), but people will try anyway.

  • angusik an hour ago
    The lack of imagination shown by the article’s author is baffling.
  • asdfbank 3 hours ago
    Never is a long time, and in that time the term "AI" will mean a lot of things: digital, analog, and maybe conscious.
  • in-silico 4 hours ago
    TLDR: the author does not believe in computational functionalism