3 points by 0xkato 6 hours ago | 2 comments
  • palata 6 hours ago
    What I don't like is that they tend to make this one example sound like Mythos developed an exploit so hard that no group of expert humans could ever hope to produce it.

    It sounds like "millions of experts have been looking for this bug for 27 years, and a machine found it". And not only looking: running millions of tests to track down this specific bug.

    But that's not what it is, is it? It's a new tool that can detect flaws, much as fuzz testing, when it first appeared, helped people find whole classes of new flaws.

    The difference here is that one private company owns this new tool, and it's not easily accessible: you can't rewrite it in your garage, unlike a fuzz testing program.
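    To the "garage-rewritable" point: a bare-bones fuzzer really is a few dozen lines. A minimal sketch (the `parse` target and its crash condition are invented purely for illustration):

    ```python
    import random

    def parse(data: bytes) -> int:
        # Hypothetical target: a toy parser that crashes on one byte pattern.
        if data[0] == 0xFF:
            raise ValueError("crash: malformed header")
        return len(data)

    def fuzz(target, trials=10_000, seed=0):
        # Throw random byte strings at the target; collect inputs that crash it.
        rng = random.Random(seed)
        crashes = []
        for _ in range(trials):
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
            try:
                target(data)
            except Exception:
                crashes.append(data)
        return crashes

    crashes = fuzz(parse)
    print(f"found {len(crashes)} crashing inputs")
    ```

    Real fuzzers add coverage feedback and input mutation, but the core loop is exactly this.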

  • palata 6 hours ago
    > What happens when someone else who’s not only giving this to folks who want to patch up holes gets access to this technology?

    Are they implying that there is no way that some entity in the US is currently using Mythos to find offensive opportunities? Really?

    • ben_w 5 hours ago
      Given the nature of large organisations, and that they are obviously important vectors for anyone who wants to attack whoever they please, I assume that all the Big Tech companies have not only US government agents embedded inside them but also non-US government agents and criminal agents, all of whom will be attempting to exploit whatever they can, including Mythos (and everything else), for gain.

      However, the rate of change we're currently seeing suggests that before 2030 everyone will be able to access this kind of power on a local model hosted on their own phone. That takes it from "the NSA are using it to hack BYD, while China is using it to hack Premier Election Solutions' voting machines" to "the local drug dealer hacks every surveillance camera[0] they walk past and replaces the footage with a deepfake of someone else so they can't be recognised".

      Ideally, things like Mythos close the security vulnerabilities. I am deeply sceptical that this will succeed, given all the times a boss has demanded special privileges to the detriment of security (famously both Hillary Clinton and Donald Trump, the latter in multiple different ways).

      [0] Before anyone says "surely the CCTV cameras would be secure": (1) before LLMs happened, people were saying "surely we'd never let the AI out of the box and onto the internet"[1]; (2) see all recent news about Flock.

      [1] 2002, and most of the pre-LLM discussions about this involved people who just did not believe him: https://www.yudkowsky.net/singularity/aibox