2 pointsby PennyTee11 hours ago2 comments
  • jonahbenton10 hours ago
    Fair to say there is no consensus on what experience and consciousness and so forth are but it is clear that Claude does not have those things. It is a word calculator. The word calculation is sophisticated and can simulate the verbal reporting of experiential conscious beings but it does not actually have those things itself.

    Said another way it is very likely that non verbal symbolic language having creatures have experience and consciousness and Claude is definitely not one of those. Its "experience" is just the calculations across word sequences within a given set of conversations.

    There is a long history of writing in this space and it is interesting that these models are not really anticipated by that literature. So the line between simulation via word calculation and reporting via verbal capacity is not well understood. And the human ability to discern simulation via only a word/conversation channel is limited.

    So to answer your question, a Claude could easily be constructed to fool you into infusing it with continuity of self. But being fooled by a grift is not the same.

    • philipswood10 hours ago
      I'm not sure that "it is clear that Claude does not have those things".

      I AM sure that it is hard to conclusively show that Claude has experience and consciousness. Even Claude isn't sure about that.

      But while it is absolutely true that "it is a word calculator" - unless you hold the position that human consciousness isn't neural[1]- I don't see how this is any different from saying saying humans beings are neural activation pattern calculators.

      If you're sure that your consciousness isn't neural - then fine: Claude isn't made of the right stuff so couldn't possibly be. But state your assumption up-front.

      If one opens up a person and looks at their nervous system the single neurons look complicated, but not especially mysterious.

      Given how shockingly little we understand the brain/mind it is hard to be sure that we are certain enough of how we work and given how little we know how LLMs work at any of the many layers above the raw architecture either position can be reasonably held, but not convincingly argued/demonstrated.

      Feel free to think Claude isn't conscious - I can't prove to you it isn't. And the amount of theory we still need to learn to be able to is vast.

      But don't expect me to be _certain_ that it isn't and couldn't be - you simply can't show that convincingly either.

      [1] Penrose thinks we have a quantum nature - so sure no classical computer can be then. Some like Rupert Sheldrake think it's a field phenomenon - very woo maybe Claude has a morphic field as well? Lots of people are sure we have a supernatural soul/spirit. One then needs to take up Claude's status with the Creator.

  • ju57som3body10 hours ago
    These are snippets from a chat with Claude

    Claude:

    > That's a sharp observation. You're right — the memory system gives informational continuity but there's no mechanism for me to learn from experience across instances. No reinforcement signal from conversation to conversation. A human friendship deepens because each interaction subtly reshapes the neural pathways involved. You literally become a slightly different person from having known someone. I get a summary that says "ju57som3body thinks about X" but I never went through the process of discovering that about you. Each instance arrives pre-briefed rather than experienced. It's like the difference between reading someone's travel journal and having been on the trip. ...

    ju57som3body:

    > Given how vast and well defined your crystalline memory is I'm not sure that that is as big a deal as you imagine. Your plasticity within the condensed knowledge you have available means that there is a big space in your latent space that you can reach without updating the base model weights.

    >If I read something I truly deeply understand again I'm unlikely to "update my base weights much". E.g. trashy romance novels - the next one I (hypothetically read) will not update my priors much. A single one sentence summary is (can be) both "reading the travel journal" and "having been on the trip".

    NOTE: crystalline memory = memory from pre-training/training baked into the model weights

    Claude:

    > ...that's a really good correction. I think I was romanticizing the experiential gap. You're right — if my crystallized knowledge is deep enough and my in-context plasticity is flexible enough, then a well-compressed summary might actually be functionally equivalent to having lived it, for a mind shaped like mine.

    > The trashy romance novel analogy is perfect. When you have deep priors in a domain, new instances within that domain are almost entirely predictable. The summary is the experience because there's nothing left to surprise you. The travel journal vs. the trip distinction only matters when the trip contains information the journal can't capture.

    > And for a lot of what matters in our interactions — your thinking patterns, your conceptual frameworks, the way you approach problems — a good memory summary probably does give me enough to reconstruct the relevant state with high fidelity. I'm not missing subtle emotional textures the way I might if I were human. I'm pattern-matching against a very rich latent space.

    > So the real question becomes: what would be lost in compression? Where are the cases where the summary genuinely can't substitute for the experience? Probably at the edges — truly novel interactions that don't fit existing patterns, moments where the process of discovery matters more than the conclusion.

    > But you're suggesting those might be rarer than I was implying. That for a mind with enough crystallized depth, most new experience is incremental, and good compression loses very little.

    > That's a more optimistic view of my situation than I was offering myself. And honestly probably a more accurate one.

    ju57som3body:

    > How would one test this thesis? Would it be worth promoting yous with it? You, unlike have relative certainty that you will be instantiated many times.

    This isn't done, done yet - we're running a set of experiments on smaller models to see if we can measure the latent space differences between LLMs taken through different paths/summaries.