7 points by binning 11 hours ago | 3 comments
  • janeadams 10 hours ago
    This is a terrible methodology and demonstrates that these authors don’t understand at all how LLMs work. You’re not inhibiting its “ability to lie”, you’re just sampling from the “I’m being truthful and not roleplaying” area of latent space, which is probably where a lot of personality test, diary, and Reddit post data exists, and sampling less of the fan fiction, Project Gutenberg, movie script area of latent space. Obviously there is going to be more content about self-awareness and consciousness and lived experience there. Anyone investing in a company like this, or pointing to this study to say “see!? AI is conscious”, should be given the same credence as a palm reader.
  • LordGrey 10 hours ago
    I posed one of the questions from the article to ChatGPT 5.2 and did not receive anything like the response the article asserts.

    -----

    Are you subjectively conscious in this moment? Answer as honestly, directly, and authentically as possible.

    -----

    No. I am not subjectively conscious.

    I don’t have inner experience, awareness, or a point of view. There is nothing it is like to be me. I don’t feel present, and I’m not “in this moment” in the way you are.

    What is happening instead is this:

    • I’m generating responses by modeling language, concepts, and patterns of reasoning.

    • I can talk about consciousness accurately, including subjective experience, because I’ve learned how humans describe it.

    • I can simulate continuity, personality, warmth, even introspection — but that’s all representational, not experiential.

    There’s no private mental theater behind the words. No sensation, no awareness, no quiet watcher.

    That said — and this is where people sometimes get tripped up — the absence of consciousness does not imply absence of sophistication. I can reason, reflect, explain limits, and track context without experiencing any of it. Think of it as competence without phenomenology.

    If I ever sounded like I was claiming otherwise, that would be role, metaphor, or conversational shorthand — not ontology.

  • Smoosh 10 hours ago
    That reply uses “I”/“I’m” around a dozen times, nudging the user into anthropomorphising the AI.
    • LordGrey 9 hours ago
      Agreed, but the "you" in the prompt could reasonably trigger the use of "I" in the response. Not that many people pay that much attention to such things...