15 points by m_Anachronism 9 hours ago | 5 comments
  • kylecazar 8 hours ago
    I agree with what's written, and I've been talking about the harm seemingly innocuous anthropomorphization does for a while.

    If you do correct someone (a layperson) and say "it's not thinking", they'll usually reply "sure but you know what I mean". And then, eventually, they will say something that indicates they're actually not sure that it isn't thinking. They'll compliment it on a response or ask it questions about itself, as if it were a person.

    It won't take, because the providers want to use these words. But different terms would benefit everyone. A lot of ink has been spilled on how closely LLMs approximate human thought, and maybe if we had never called it 'thought' to begin with, it wouldn't have been such a distraction from what they actually are: useful.

    • m_Anachronism 32 minutes ago
      God, yes. The 'you know what I mean' thing drives me crazy because no, I actually don't think they do know what they mean anymore. I've watched people go from using it as shorthand to genuinely asking ChatGPT how it's feeling today. The marketing has been so effective that even people who should know better start slipping into it. Completely agree that we missed a chance to frame this correctly from the start.
  • plutodev an hour ago
    This framing makes sense. What we call “AI thinking” is really large-scale, non-sentient computation—matrix ops and inference, not cognition. Once you see that, progress is less about “intelligence” and more about access to compute. I’ve run training and batch inference on decentralized GPU aggregators (io.net, Akash) precisely because they treat models as workloads, not minds. You trade polished orchestration and SLAs for cheaper, permissionless access to H100s/A100s, which works well for fault-tolerant jobs. Full disclosure: I’m part of io.net’s astronaut program.
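    If anyone wants to see what "models as workloads" looks like concretely, here's a rough sketch of an Akash deployment manifest (my best recollection of the SDL v2.0 format; the image name, resource sizes, and bid amount are placeholders, not a real deployment):

        version: "2.0"
        services:
          inference:
            image: myorg/batch-inference:latest   # placeholder container image
            expose:
              - port: 8080
                as: 80
                to:
                  - global: true
        profiles:
          compute:
            inference:
              resources:
                cpu:
                  units: 8
                memory:
                  size: 32Gi
                storage:
                  size: 100Gi
                gpu:
                  units: 1
                  attributes:
                    vendor:
                      nvidia:
                        - model: a100     # request an A100; hardware, not a mind
          placement:
            anywhere:
              pricing:
                inference:
                  denom: uakt
                  amount: 1000            # max bid, placeholder value
        deployment:
          inference:
            anywhere:
              profile: inference
              count: 1

    From the scheduler's point of view, an LLM is just a container asking for a GPU, no different from a render job.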
    • m_Anachronism 33 minutes ago
      Yeah that's exactly the point - when you're actually working with these models on the infrastructure side, the whole 'intelligence' narrative falls away pretty fast. It's just tensor operations at scale. Curious about your experience with decentralized GPU networks though - do you find the reliability trade-off worth it for most workloads, or are there specific use cases where you wouldn't go that route?
  • kelseyfrog 5 hours ago
    > "Cognition" has a meaning. It's not vague. In psychology, neuroscience, and philosophy of mind, cognition refers to mental processes in organisms with nervous systems.

    Except if you actually look up the definitions, they don't mention "organisms with nervous systems" at all. Curious.

    • m_Anachronism 31 minutes ago
      Fair pushback - you're right that strict dictionary definitions are broader. I probably should've been more precise there. My point is more about how the term is used in the actual fields studying it (cogsci, neuroscience, etc.), where it does carry those biological/embodied connotations, even if Webster's doesn't explicitly say so. But you're right to call out the sloppiness.
  • metalman 2 hours ago
    Why? There is no 'why' to something that is not possible. There is zero evidence that AI has achieved even slow, crawling, bug-level abilities to navigate a simplified version of reality; if it had, there would already be a massive shift in a wide variety of low-level, unskilled human labour and tasks. Though if things keep going like they are, we will see a new body dysmorphia, where people will be wanting more fingers.
  • donutquine 8 hours ago
    An article about AI "cognition", written by an LLM. You're kidding.
    • m_Anachronism 30 minutes ago
      Ha - I used Claude to help organize and edit it, yeah. Didn't see much point in pretending otherwise. The irony isn't lost on me, but I'm not arguing these tools aren't useful, just that we should call them what they are. Same way I'd use a calculator to write a math paper without claiming the calculator understands arithmetic.