1 point by cfulger 3 hours ago | 3 comments
  • hackerman70000 3 hours ago
    The title is doing a lot of heavy lifting here. "Learned to fire itself" implies agency, but I suspect this is closer to a well-tuned anomaly detector running on its own outputs. Still useful, just not as dramatic as it sounds.
    • AnimalMuppet 2 hours ago
      I don't know about that. When a human realizes that what they're saying is nonsense and decides to shut up, we might call that "self-awareness". I'm not sure it's the same thing we mean by "self-awareness" when we talk about consciousness, but it seems likely to be at least a part of it.

      So if you can make an AI at least partly self-aware (in the consciousness sense) by having it run an anomaly detector on its own output, that seems to me a fairly big deal. Not "agency", and I suspect not even all the way to "self-aware", but still a big step.

      Big theoretically, and also big practically. If you can make an LLM shut up rather than hallucinate, that's a big step up in usability. "I don't know" is far more useful than confident errors. (At least silence doesn't have negative value.)
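
      A minimal sketch of that gating idea, assuming the model exposes per-token log-probabilities (the scoring rule, threshold, and numbers below are illustrative guesses, not anything from the article):

          def anomaly_score(token_logprobs):
              # Mean negative log-probability of the model's own tokens:
              # high values mean the model found its own output surprising.
              return -sum(token_logprobs) / len(token_logprobs)

          def answer_or_abstain(text, token_logprobs, threshold=2.5):
              # "Fire itself": abstain instead of emitting a shaky answer.
              if anomaly_score(token_logprobs) > threshold:
                  return "I don't know."
              return text

          # Hypothetical logprobs for a confident vs. an unsure completion.
          print(answer_or_abstain("Paris.", [-0.1, -0.3, -0.2]))  # -> "Paris."
          print(answer_or_abstain("Lyon?", [-2.9, -3.4, -2.7]))   # -> "I don't know."

      The hard part in practice is picking the threshold: too low and the model abstains on everything, too high and you're back to confident errors.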
