26 pointsby chillax14 hours ago4 comments
  • cedws11 hours ago
    Until prompt injection is fixed, if it is ever, I am not plugging LLMs into anything. MCPs, IDEs, agents, forget it. I will stick with a simple prompt box when I have a question and do whatever with its output by hand after reading it.
    • hu310 hours ago
      I would have the same caution, if my code was any special.

      But the reality is I'm very well compensated to summon CRUD slop out of thin air. It's well tested though.

      I wish good luck to those who steal my code.

      • mdaniel6 hours ago
        You say code as if the intellectual property is the thing an attacker is after, but my experience has been that folks often put all kinds of secrets in code thinking that the "private repo" is a strong enough security boundary

        I absolutely am not implying you are one of them, merely that the risk is not the same for all slop crud apps universally

  • wunderwuzzi234 hours ago
    Great work!

    Data leakage via untrusted third party servers (especially via image rendering) is one of the most common AI Appsec issues and it's concerning that big vendors do not catch these before shipping.

    I built the ASCII Smuggler mentioned in the post and documented the image exfiltration vector on my blog as well in past with 10+ findings across vendors.

    GitHub Copilot Chat had a very similar bug last year.

  • mdaniel6 hours ago
    Running Duo as a system user was crazypants and I'm sad that GitLab fell into that trap. They already have personal access tokens so even if they had to silently create one just for use with Duo that would be a marked improvement over giving an LLM read access to every repo in the platform
  • nusl11 hours ago
    GitLab's remediation seems a bit sketchy at best.
    • reddalo11 hours ago
      The whole "let's put LLMs everywhere" thing is sketchy at best.