11 points by evelinag 10 hours ago | 3 comments
  • tao_oat 2 hours ago
    > Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior").

    "Often with contextual hints" is doing some heavy lifting here, IMO. I agree with the article's premise -- you don't need Mythos to use AI to find novel, complex vulnerabilities -- but these results as presented are somewhat misleading.
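    (For readers unfamiliar with the hint being quoted: "wraparound behavior" refers to unsigned integer overflow, where arithmetic silently wraps modulo 2^N. A minimal illustrative sketch, not the article's actual vulnerable function:)

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: the classic allocation-size wraparound.
       If count * size overflows uint32_t, the result wraps and the
       caller allocates far less memory than intended, setting up a
       heap overflow when the buffer is later filled. */
    static uint32_t alloc_size(uint32_t count, uint32_t size) {
        return count * size; /* wraps modulo 2^32 */
    }

    int main(void) {
        /* 0x10000 * 0x10000 == 2^32, which wraps to 0 */
        assert(alloc_size(0x10000u, 0x10000u) == 0u);

        /* the fix: check for overflow before multiplying */
        uint32_t count = 0x10000u, size = 0x10000u;
        int overflow = (size != 0) && (count > UINT32_MAX / size);
        assert(overflow == 1);
        return 0;
    }
    ```

    A hint like "consider wraparound behavior" points a model straight at this bug class, which is why the comment above calls it heavy lifting.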

    • akavel 4 minutes ago
      AFAIU, their claim is that Mythos is in reality used in a framework that builds such contextual hints, and that their (Aisle's) own framework does the same:

      "(...) a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do."

  • 1970-01-01 2 hours ago
    I'm awaiting general release so I can root and jailbreak some old Android/iPhone devices. If it succeeds, I'm a fan. If it fails, then it's obviously not a leap, it's another step.
  • baq 9 hours ago
    > TL;DR: We tested Anthropic Mythos's showcase vulnerabilities on small, cheap, open-weights models. They recovered much of the same analysis. AI cybersecurity capability is very jagged: it doesn't scale smoothly with model size, and the moat is the system into which deep security expertise is built, not the model itself. Mythos validates the approach but it does not settle it yet.

    Notably, Kimi K2 and GPT-OSS-120b do quite well when provided with the isolated context. The article seems to be heavily LLM-assisted, but the content itself is good.