5 points by DarenWatson 4 hours ago | 2 comments
  • arabking 3 hours ago
    The timing is striking. Most current AI security work focuses on the skill/tool layer — prompt injection, exfiltration via agent skills, rug pulls — and assumes the base model itself is trustworthy. A leaked cybersecurity-focused model flips that assumption. The concern isn't just misuse of its outputs: if Mythos ends up in the wild, it could become a component in automated attacks against other AI systems. This reinforces something that isn't said enough: AI lab security needs the same rigour as critical national infrastructure. The frameworks exist in other sectors; we don't have the equivalent here yet.