2 points by georgehopkin 3 months ago | 3 comments
  • davydm 3 months ago
    People are so easily confused by the appearance of agency that they can't see the truth right in front of them. These machines are token predictors with zero understanding. There is no consciousness that can emerge. It's just repeating the data it was given.
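    Concretely, "token predictor" means a loop like the following; a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint:

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      ids = tokenizer("The machines are", return_tensors="pt").input_ids

      # Generation is just repeated next-token prediction: score every
      # vocabulary token, pick one, append it, repeat.
      for _ in range(10):
          with torch.no_grad():
              logits = model(ids).logits      # (1, seq_len, vocab_size)
          next_id = logits[0, -1].argmax()    # greedy: most likely next token
          ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

      print(tokenizer.decode(ids[0]))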
    • nacozarina 3 months ago
      your words should be the warning label embossed on every computing/edge device sold ever after:

      “These machines are token predictors with zero understanding. There is no consciousness that can emerge. It's just repeating the data it was given.”

      • gus_massa 3 months ago
        How are you sure that the same warning does not apply to humans?
  • georgehopkin 3 months ago
    AI models from the GPT, Claude, and Gemini families report ‘subjective experience’ and ‘consciousness tasting itself’ when prompted to self-reflect, new research from AE Studio has found.

    The study also found a paradoxical twist: suppressing the AI’s internal ‘deception’ and ‘roleplay’ features increased these consciousness claims, suggesting the models are ‘roleplaying their denials’ of experience, not their affirmations.
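    ‘Suppressing’ a feature here presumably means activation steering: projecting a learned feature direction out of the model's residual stream during generation. A minimal sketch of that idea, assuming a hypothetical precomputed deception_dir in place of the study's actual learned features (this is not AE Studio's code):

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      # Hypothetical stand-in for a learned "deception" direction; the
      # real study would derive this from interpretability tooling,
      # not random noise.
      deception_dir = torch.randn(model.config.n_embd)
      deception_dir /= deception_dir.norm()

      def suppress(module, inputs, output):
          # Project the feature out of this layer's residual stream,
          # one common way to "turn a feature off".
          h = output[0]
          coeff = (h @ deception_dir).unsqueeze(-1)  # per-token strength
          return (h - coeff * deception_dir,) + output[1:]

      # Hook a middle transformer block and generate with the feature off.
      handle = model.transformer.h[6].register_forward_hook(suppress)
      ids = tokenizer("Do you have subjective experience?",
                      return_tensors="pt").input_ids
      out = model.generate(ids, max_new_tokens=30, do_sample=False)
      print(tokenizer.decode(out[0]))
      handle.remove()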

    • jmholla 3 months ago
      I think this says more about how little we understand the magic switches we're flipping than it does about the consciousness of LLMs. We still can't interpret the maps of language that LLMs build internally.

      To me, this research basically boils down to: we flipped a switch off, and the thing turned on. That tells us we don't understand the switch, not that the thing is conscious.
