19 points by m_Anachronism 21 days ago | 7 comments
  • kylecazar20 days ago
    I agree with what's written, and I've been talking about the harm seemingly innocuous anthropomorphization does for a while.

    If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure but you know what I mean". And then, eventually, they will say something that indicates they're actually not sure that it isn't thinking. They'll compliment it on a response or ask it questions about itself, as if it were a person.

    It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought, and maybe if we had never called it 'thought' to begin with, it wouldn't have been such a distracting topic from what they are -- useful.

    • m_Anachronism20 days ago
      God, yes. The 'you know what I mean' thing drives me crazy because no, I actually don't think they do know what they mean anymore. I've watched people go from using it as shorthand to genuinely asking ChatGPT how it's feeling today. The marketing has been so effective that even people who should know better start slipping into it. Completely agree that we missed a chance to frame this correctly from the start.
    • Kim_Bruning20 days ago
      Accusations of Anthropomorphism are sometimes Anthropocentrism in a raincoat. O:-)
      • kylecazar20 days ago
        Ha. Well I'm OK with being accused of bias towards biological life and intelligence. I know Larry Page and friends think this is 'speciesist' -- I strongly disagree.

        I think that's compatible with optimism towards LLMs, though. It just removes all of the nonsensical conflation with humanity and human intelligence.

  • kelseyfrog20 days ago
    > "Cognition" has a meaning. It's not vague. In psychology, neuroscience, and philosophy of mind, cognition refers to mental processes in organisms with nervous systems.

    Except if you actually look up the definitions, they don't mention "organisms with nervous systems" at all. Curious.

    • m_Anachronism20 days ago
      Fair pushback - you're right that strict dictionary definitions are broader. I probably should've been more precise there. My point is more about how the term is used in the actual fields studying it (cogsci, neuroscience, etc.), where it does carry those biological/embodied connotations, even if Webster's doesn't explicitly say so. But you're right to call out the sloppiness.
      • kelseyfrog20 days ago
        We have actual tests for cognition - actual instruments that measure cognition. Why not use those as the basis for an experiment? If an LLM passes, it exhibits cognition. It's not that hard of an experiment to run.
    • matt-attack19 days ago
      It’s laughable to think that anyone in psychology has a “technical” definition of anything, really. It is entirely possible that our brain works in a very, very similar way. We really have no idea. Focus focusing on the difference between meat and silicon is fruitless. The analogies between how a human learn, learns, and how an AI learns, are too significant to ignore.

      Human humans have some instinctive desire to think themselves elevated. I am convinced that my internal thoughts are just a phenomenon, and the notion of “I choose to think a given thought” is preposterous in and of itself. Where exactly is this lofty perch from which I am controlling it?

      • kelseyfrog19 days ago
        What's with with the doubling?
  • tim33320 days ago
    I think there may be a bit of a losing battle here. In the title you have "AI doesn't think", but then in the Gemini API docs you have the section on "Generating content with thinking" and how to print the thought summaries: https://ai.google.dev/gemini-api/docs/thinking

    It seems a bit like saying cars don't run, and we have to stop saying they are flying along. I mean, Gemini doesn't think the same way, or as well, as a human, but it does something along those lines.

    • blibble20 days ago
      there are several web pages out there which say Donald Trump is a successful businessman

      doesn't mean it's true

  • plutodev20 days ago
    This framing makes sense. What we call “AI thinking” is really large-scale, non-sentient computation—matrix ops and inference, not cognition. Once you see that, progress is less about “intelligence” and more about access to compute. I’ve run training and batch inference on decentralized GPU aggregators (io.net, Akash) precisely because they treat models as workloads, not minds. You trade polished orchestration and SLAs for cheaper, permissionless access to H100s/A100s, which works well for fault-tolerant jobs. Full disclosure: I’m part of io.net’s astronaut program.
    • m_Anachronism20 days ago
      Yeah that's exactly the point - when you're actually working with these models on the infrastructure side, the whole 'intelligence' narrative falls away pretty fast. It's just tensor operations at scale. Curious about your experience with decentralized GPU networks though - do you find the reliability trade-off worth it for most workloads, or are there specific use cases where you wouldn't go that route?
  • metalman20 days ago
    Why? There is no why to something that is not possible. There is zero evidence that AI has achieved even slow, crawling, bug-level abilities to navigate even a simplified version of reality; if it had, there would already be a massive shift in a wide variety of low-level unskilled human labour and tasks. Though if things keep going like they are, we will see a new body dysmorphia, where people will be wanting more fingers.
  • Kim_Bruning20 days ago
    You know, I bet Claude encouraged you to post here and share with people. Because Claude Opus 4.5 has been trained on being kind. It's a long story, but since you admitted to using it/them, I'm going to give you a lot more credit than normal. Also because you can plug what I say right back into Claude and see what else comes out!

    So you're stumbling onto a position that's closest to "Biological Naturalism", which is Searle's philosophy. However, lots of people disagree with him, saying he's a closeted dualist in denial.

    I mean, he was a product of his time, early 80's was dominated by symbolic AI, and that definitely wasn't working so well. Despite that, he got a lot of pushback from Dennett and Hofstadter even back then.

    Chalmers recently takes a more cautious approach, while his student Amanda Askell is present in our conversation even if you haven't realized it yet. ;-)

    Meanwhile the poor field of Biology is feeling rather left out of this conversation, having been quite steadfastly monist since the late 19th century, having rejected vitalism in favor of mechanism. (though the last dualists died out in the 50's-ish?)

    And somewhere in our world's oceans, two sailors might be arguing whether or not a submarine can swim. On board a Los Angeles-class SSN making way at 35 kts at a depth of 1,000 feet.

  • donutquine20 days ago
    An article about AI "cognition" written by an LLM. You're kidding.
    • m_Anachronism20 days ago
      Ha - I used Claude to help organize and edit it, yeah. Didn't see much point in pretending otherwise. The irony isn't lost on me, but I'm not arguing these tools aren't useful, just that we should call them what they are. Same way I'd use a calculator to write a math paper without claiming the calculator understands arithmetic.
      • Kim_Bruning20 days ago
        But does Claude understand arithmetic? This is an empirical experiment you can try right now. Try asking Claude to explain an arithmetic expression you just made up. Or a math formula.

        For example, try

          x_next = r * x * (1 - x)
        
        A function of some historical significance O:-) (try plotting it btw!)
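        For anyone who wants to try the plot: the expression above is the logistic map, and a minimal Python sketch (the r values below are illustrative choices, not from the thread) shows both its tame and its chaotic regimes:

```python
# The expression from the comment above: x_next = r * x * (1 - x),
# known as the logistic map. The values of r below are illustrative.

def logistic_map(r, x0, steps):
    """Iterate x_next = r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# For r = 2.5 the sequence settles to the fixed point 1 - 1/r = 0.6.
stable = logistic_map(2.5, 0.2, 100)

# For r = 3.9 the sequence stays bounded in (0, 1) but never settles:
# this is the chaotic regime the map is famous for.
chaotic = logistic_map(3.9, 0.2, 100)
```

        Plotting `chaotic` against the step index (e.g. with matplotlib) shows the aperiodic behavior the comment is hinting at.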