36 points by rs_rs_rs_rs_rs 6 hours ago | 5 comments
  • JellyYelly 5 minutes ago
    They say it's Mythos-like without actually comparing it to Mythos (fair enough, it's not public), but the bar for a model to be Mythos-like has to be that it can produce as many novel, high-severity security vulns as the ones outlined in the Mythos red-team blog post. I haven't seen any other lab produce a report like that yet. The proof is in the pudding.
  • WhiteDawn an hour ago
    First you need to get through the safety net. I’ve had many productive GPT-5.4 sessions hit an “ethicality” roadblock and pollute the context with multiple rounds of trying to convince it to continue.
  • mertcikla 2 hours ago
    why does this read like an openai ad?
  • nsingh2 3 hours ago
    These plots are terrible. Why is categorical data connected across categories with lines? Why not just use bar plots?

    Like in the "Web Vulns in OSS" plot: white-box data for Opus 4.7 is not available, but the absurd linear interpolation across categories implies it should be near 60.
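
    A plain grouped bar chart would show the gaps honestly and leave the missing value missing. A quick matplotlib sketch (model names and numbers are made up, just to illustrate):

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical scores; np.nan marks the missing white-box value,
      # so no bar is drawn and nothing is implied about it.
      models = ["GPT-5.2", "Opus 4.7", "GPT-5.4"]
      black_box = [38, 41, 52]
      white_box = [55, np.nan, 63]

      x = np.arange(len(models))
      w = 0.35
      plt.bar(x - w / 2, black_box, w, label="black box")
      plt.bar(x + w / 2, white_box, w, label="white box")  # NaN bar is skipped
      plt.xticks(x, models)
      plt.ylabel("web vulns found in OSS")
      plt.legend()
      plt.show()

    A reader sees an empty slot where the data is missing, instead of a line inventing a value for it.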

    • scottyah 2 hours ago
      It's just an ad thinly disguised as useful data.
    • wmf 2 hours ago
      I think the x-axis is meant to be time, but they screwed it up.
  • strange_quark 2 hours ago
    Wasn't it already confirmed that small open-weight models were able to detect most of the same headline vulns as Mythos? How is this any different?
    • stanfordkid 2 hours ago
      No, they can detect errors when pointed at them, but they produce a lot of false positives, making them functionally useless on a large unknown codebase. They also can't build and run an exploit after identifying one. Mythos can (purportedly) find vulnerabilities and actually validate them by building and running exploits, which makes it functional and usable for hacking.
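
      Roughly the difference between these two loops, as a toy sketch (every function and number here is a made-up stub, not anyone's real API):

        import random

        random.seed(0)

        def scan_for_candidates(codebase):
            # Stub "detector": flags 50 spots, ~10% of them real bugs.
            return [(f"{codebase}:{i}", random.random() < 0.1)
                    for i in range(50)]

        def build_and_run_exploit(candidate):
            # Stub for "synthesize a PoC, run it, see if it triggers".
            _, is_real = candidate
            return is_real

        def flag_only(codebase):
            # Detect-only: every candidate gets reported, noise included.
            return scan_for_candidates(codebase)

        def find_and_validate(codebase):
            # Find-then-validate: keep only candidates whose exploit fires.
            return [c for c in scan_for_candidates(codebase)
                    if build_and_run_exploit(c)]

        print(len(flag_only("repo")), "flagged,",
              len(find_and_validate("repo")), "validated")

      On a large unknown codebase, only the second list is worth a human's time.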
    • nardons 2 hours ago
      Do you have a source for this? Not doubting it, but I would like to have something concrete the next time the Mythos horse manure is cited.