7 points by Imustaskforhelp 6 hours ago | 3 comments
  • acheong08 2 hours ago
    This reminds me of when GPT-4 first launched and its image capabilities were in preview and limited. A couple of companies, including Perplexity, were leaking their API keys on Replit and had early access by a couple of weeks.

    Dumb me at the time used it for biology homework involving diagrams instead of anything interesting...

    I think the API endpoint was codenamed "rainbow" if I remember correctly. How time flies

  • gnabgib 6 hours ago
    Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (70 points, 15 days ago, 7 comments) https://news.ycombinator.com/item?id=47855093
    • Imustaskforhelp 6 hours ago
      Oh alright, thanks for linking this! I didn't know a discussion had already happened. That said, since the discussion is more than 14 days old, the comments are now locked, so it might be worth another discussion.

      Also, I disliked how most coverage said "unauthorized users"/"rogue users". That is correct, but I find it more informative to also say that this happened within a Discord group, so that readers don't take these to be institutional hackers.

      Edit: and also that the way they gained access was by guessing, rather than whatever "being accessed" might mean to many people and the image it might conjure in general.

      • gnabgib 6 hours ago
        It's the whole story (that your article references/links to via other reposts).

        1. Rogue discord users got access by guessing a URL

        > The group of users made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models

        2. Oh wait, they had valid credentials from a third party (presumably a former third party now)

        > Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic’s AI models.

        https://archive.is/9Oxlr

        • Imustaskforhelp 5 hours ago
          > https://archive.is/9Oxlr

          Thanks, this article is really interesting too. I have uploaded it to archive.org as well, in case someone else wants to read it (https://web.archive.org/web/20260507015350/http://serjaimela...)

          > 2. Oh wait, they had valid credentials from a third party (presumably a former third party now)

          I find it interesting, but how much other software has a similar Achilles' heel? Recently Vercel had something similar happen: their envs, which sometimes included really sensitive information like database passwords, got leaked because an AI company they used was compromised, because an employee at that company was compromised through Roblox cheat software, IIRC.

          Are there any reports on who the third party is, exactly? It doesn't inspire confidence that a third party used by Anthropic got compromised enough to leak access to Mythos. And wouldn't more companies also rely on said third party, which could also have been compromised (or maybe already has been)?

          Do you know how the story got to Bloomberg as well? I wonder how anyone outside of that group (and perhaps Anthropic's logs) came to know about all of this / the Discord group.

  • Imustaskforhelp 6 hours ago
    For context, the original title is "Discord group guessed the URL to Anthropic’s most dangerous AI and used it before CISA did", but it was a bit longer than usual, so I changed it to reference Anthropic's Mythos model instead. That is also true, and I think saying "Mythos" rather than "most dangerous AI" gives more information and is more accurate, but that's just my personal opinion.

    Edit: instead of "before CISA did", it's now "before CISA used it". I wanted to make it "before even CISA used it", but that's one character too long :-(