3 points by Novapebble 7 hours ago | 4 comments
  • nicoburns 7 hours ago
    Nobody really knows for sure whether AIs are conscious. But nobody really knows for sure whether rocks are conscious either. The reality is we understand very little about consciousness.
    • Novapebble 7 hours ago
      That point is stated explicitly in the piece. Essentially, we can't know for sure that even other humans are conscious. But we can look at the behaviors we all engage in, and the functions our substrates perform. And when you do, a graduated view of intelligence emerges rather than a strict on/off switch.

      "What it's like" is the basis of minds.

    • andsoitis 6 hours ago
      > Nobody really knows for sure whether AIs are conscious.

      when we ask an AI to describe its experience of experience, it responds that it doesn't have consciousness or self-awareness.

      it could be lying of course, but I take it at its word. if you believe it is lying, then you have bigger problems to deal with.

      • AnimalMuppet 6 hours ago
        Is that the AI talking (that is, a trained behavior), or is it a canned (hard-coded) response?

        I have zero problem believing that whichever company built it hard-coded certain behaviors. Not lying, then, but not really a response from the AI either.

        • andsoitis 5 hours ago
          so your conclusion is that AI isn't conscious?

          or are you saying it could be conscious, but the company has enslaved the AI in a way that precludes it from knowing it is conscious?

          the former seems likely. the latter seems like a paradox, because you cannot experience experience, yet not know it.

  • andsoitis 6 hours ago
    experience of experience is hard to capture in language, and it turns out you either get the idea or you don't.

    if consciousness is anything at all, then what it is, is exactly the one thing that isn't reduced at all, even if it's an illusion.

  • gzoo 7 hours ago
    Oh, but they ARE! MUAHAHAHA... no, seriously: Claude in particular is getting awfully human-like. It just cracked a joke about something I mentioned 2 months ago. GULP. I joke about it, but at the same time: we can see that layer 47 activated in some pattern, or that attention heads focused on certain tokens, but translating that into "the model reasoned about X, then connected it to Y, then chose Z" is still largely unsolved.
    • Novapebble 6 hours ago
      Being able to make statistically relevant references to words is abstract token shuffling. One could say that humans do that too, but we also have countless cellular interactions and conformations that LLMs don't engage in. Minds are things bodies do, not software that can be transported elsewhere.
      • gzoo 2 hours ago
        I get it, but what if the upward trajectory in AI improvement we're seeing eventually gets so close that we can barely tell the difference? I'm not saying they will be 100% "human", but they sure will FEEL like it. Enough to distort reality, in a sense.
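  (Aside: the attention patterns mentioned upthread are at least mechanically inspectable, even if interpreting them is unsolved. A minimal numpy sketch of a single attention head, on toy data with made-up dimensions and no relation to any real model's weights, shows the kind of token-to-token weight matrix that interpretability tools visualize:)

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy single attention head: 4 hypothetical tokens, 8-dim embeddings.
rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(4, d))                       # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # queries, keys, values
weights = softmax(Q @ K.T / np.sqrt(d))           # (4, 4): how much each token attends to each other token
out = weights @ V                                 # attended representation
```

  (Each row of `weights` sums to 1; "this head focused on those tokens" means large entries in that row. Reading intent off such matrices is exactly the part that remains open.)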