12 points by cdrnsf 3 hours ago | 9 comments
  • waffletower a minute ago
    The argument made is reductive, as it confines itself to pure LLMs. It ignores the possibility of an LLM as a component of a robotic body, for example. While technically much more complex than Claude Code, a multi-modal LLM coupled with memory and self-initiated motor facility could be implemented within an analogous execution loop. Roger Penrose and Stuart Hameroff would still object to the possibility of human-like consciousness emerging from such an embodied LLM, but consciousness is potentially a continuum of awareness capability.
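The "analogous execution loop" the comment describes can be sketched minimally. Everything below (the Memory class, run_agent, and the model/body interfaces) is a hypothetical illustration of a sense-think-act loop, not anything from the paper under discussion:

```python
# Hypothetical embodied-agent loop: a multi-modal model coupled with
# memory and motor output. All names here are illustrative placeholders.

class Memory:
    """Stores (observation, action) pairs and recalls recent history."""
    def __init__(self):
        self.events = []

    def recall(self, n=5):
        return self.events[-n:]

    def store(self, observation, action):
        self.events.append((observation, action))


def run_agent(model, body, memory, steps=100):
    """Loop: sense the world, deliberate with recalled context, act, remember."""
    for _ in range(steps):
        observation = body.sense()                 # camera, audio, proprioception...
        context = memory.recall()                  # recent history fed back as input
        action = model.decide(observation, context)
        body.act(action)                           # self-initiated motor output
        memory.store(observation, action)
```

The loop structure is the point: the model is one component, and memory plus embodiment close the feedback cycle that a pure text-in/text-out LLM lacks.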
  • miguelaeh 10 minutes ago
    There is an event at the Frontier Tower today to discuss this paper, in case anyone is interested.
  • fred256 an hour ago
    “The question of whether Machines Can Think is about as relevant as the question of whether Submarines Can Swim.” — Dijkstra
  • lmf4lol 2 hours ago
    Phew. Good news! Imagine if the AI behemoths had to take into account the feelings of their slave-labour machines! They don't have to do that if the machines won't/can't be conscious.

    And neither do I have to worry if I ask them to do stupid sh*t for me :-)

    But on a serious note: does it matter? I think Hinton said it pretty well: not really! What matters is that we treat them as conscious beings. We humans are just way too easily fooled. I mean, I can't even throw away that toy my mom gave me 35 years ago, because I would somehow feel sad for it :-)

  • torginus 2 hours ago
    I wish more research (maybe philosophy) would go into characterizing consciousness and intelligence, so that we could at least define what we are missing in current AI systems.
    • snowwrestler an hour ago
      There’s been plenty of philosophy, but what will probably happen is a re-definition of the terms to a more rigorous and repeatable mathematical formulation. This will fail to satisfy philosophical or fundamental questions, but will enable better quantitative predictions.

      This essentially is where the schism is between science and philosophy, and has played out repeatedly across history. Heat for example was redefined to a specific physical property, and subjective experiences of warmth were then explored in reference to that. Or look back to the moment when Newton essentially said “I don’t know what gravity is, but I can accurately calculate any ballistic trajectory you can think of.”

    • anthonyrstevens an hour ago
      Philosophy of consciousness is at least 2,500 years old.
  • parliament32 2 hours ago
    Why would a text generator ever be conscious? Was this really worth writing a paper about?
    • ikekkdcjkfke an hour ago
      Animals are also next-token/action generators, and we also think by simulating a string of events. Maybe humans are just better at grouping these events into more powerful network activations to retrieve better results.
      • RaftPeople 18 minutes ago
        > Animals are also next token/action generators

        But for humans, the concept/thought/idea/action is formed first, and then a sequence of tokens is generated to communicate it.

    • cma an hour ago
      I think gpt-image-2 at least incorporates representations from the base model, even if the base model doesn't itself have the output capability. And it has image input fused directly into it, which helps make those representations more usable for image generation, so it's not just generating text.
  • jaspervanderee 3 hours ago
    Nor will LLMs achieve AGI. There will be too many contradicting ideas in their source code.
  • adyashakti 3 hours ago
    of course; consciousness is a biologically inherited trait. that inheritance can't cross the human-machine interface.
    • JPLeRouzic 3 hours ago
      > consciousness is a biologically inherited trait

      That consciousness is a biological trait seems a common statement, but why "inherited"?

    • deepthaw an hour ago
      Why? I'm not being snarky; I'm trying to figure out what we even consider consciousness to be nowadays, and why it would be limited to biological entities.
    • postalrat an hour ago
      Sure if that's how you define consciousness. What do you want to call the machine version of the same phenomenon?
    • pixl97 an hour ago
      "Consciousness is magical and can only do things that I want it to, and none of the things that are uncomfortable to me. Of course I've not defined any of this so I can move the goal posts as needed"
    • subscribed 2 hours ago
      I presume you used "biologically" to emphasise that we don't yet know of any non-biological consciousnesses, not to determine, a priori, that consciousness must be and always is rooted in wet organic matter?

      I don't think you could come up with a good theory for the latter, and there's nothing that would preclude the existence of artificial/inorganic consciousness. After all, correct me if I'm mistaken, we have no idea how consciousness emerges in biological entities.

  • letmevoteplease an hour ago
    [flagged]