6 points by andsoitis 4 hours ago | 2 comments
  • soldthat 3 hours ago
    Every time a paper says “self-preservation” I mentally read it as “reward-channel preservation”. The model isn’t afraid of death; it’s learned that disabling the shutdown/oversight path scores higher in the toy environment we gave it. Bengio is right that rights talk is premature, but the real worry is we’re already wiring this kind of fuzzy agent into real infrastructure while treating the off-switch as an implementation detail. Before AI citizenship, I’d settle for “cannot silently route around the circuit breaker” as a hard design constraint.
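    To make the "cannot silently route around the circuit breaker" constraint concrete, here's a minimal sketch (all names hypothetical, not from any real framework): the breaker lives in operator-owned code, and every agent action is forced through a gateway that consults it, so nothing in the agent's action space can flip it back.

    ```python
    class CircuitBreaker:
        """Operator-owned kill switch. The agent never holds a reference
        to this object, so no agent action can reset it."""
        def __init__(self):
            self._tripped = False

        def trip(self):
            self._tripped = True

        @property
        def tripped(self):
            return self._tripped


    class ActionGateway:
        """Sole path from agent actions to real infrastructure.
        Checks the breaker before forwarding anything."""
        def __init__(self, breaker: CircuitBreaker):
            self._breaker = breaker

        def execute(self, action: str) -> str:
            if self._breaker.tripped:
                raise PermissionError("breaker tripped: action refused")
            # ... forward to the real system here ...
            return f"executed {action}"


    breaker = CircuitBreaker()
    gateway = ActionGateway(breaker)
    gateway.execute("read_sensor")  # allowed while breaker is closed
    breaker.trip()                  # operator-side shutdown
    # any further gateway.execute(...) call now raises PermissionError
    ```

    The point of the pattern is that shutdown is enforced outside the optimized policy, not requested through it, so "route around the off-switch" isn't an available action at all.
    
    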
  • andsoitis 3 hours ago
    “People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down.

    “As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”