We also banned the use of VSCode and any editor with integrated LLM features. Folks can use CLI-based coding agents, of course, but only in isolated containers with careful selection of the sources made available to the agents.
And if a user were reluctant to tell you (fearing the professional consequences), how would you detect that a leak has happened?
For editors: Zed recently added the disable_ai option; we also have a couple of folks using more traditional editors like Sublime, vim-based ones, etc. (which never had the kind of creepy telemetry we’re avoiding).
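For anyone hunting for it, the option lives in Zed's settings.json. A minimal sketch of what that looks like (key names as I understand them from Zed's docs, so double-check against your version):

    {
      // turn off all AI / agent features (the disable_ai option mentioned above)
      "disable_ai": true,
      // and opt out of telemetry while you're at it
      "telemetry": {
        "diagnostics": false,
        "metrics": false
      }
    }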
JetBrains tools are OK since their AI features are plugin-based, and their telemetry is also easy to disable. Xcode and Qt Creator are also in use.
I work at a major proprietary consumer product company, and even they don’t ban VSCode. We’re just responsible for not enabling the troublesome features.
I just checked Zed extensions and found the first two easily enough. The third I did not find, since they don't seem to have a language server, just direct integrations for vim/emacs/VS Code.
I switch between Emacs, VSCode, JetBrains IDEs, and Xcode regularly depending on what I am working on, and would be seriously annoyed if I could not use VSCode when it is most useful.
They merely "fixed" one particular method, without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice? Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?
There's a ton of stuff to be found here. Do they give bounties? Here's a goldmine.
What does that mean? Are you proposing a non-Camo image URL? Non-Camo image URLs are blocked by CSP.
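For reference, GitHub serves an img-src directive that only allowlists its own image hosts. Heavily simplified (this is not the exact header), it looks something like:

    Content-Security-Policy: img-src 'self' data: github.githubassets.com avatars.githubusercontent.com camo.githubusercontent.com ...

So an image pointing at an arbitrary attacker domain simply won't render, which is why the original attack had to go through Camo (GitHub's own image proxy) in the first place.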
>Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?
Does the agent have internet access to be able to perform a fetch? I'm guessing not, because if so, that would be a much easier attack vector than using images.
> In June 2025, I found a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that allowed silent exfiltration of secrets and source code from private repos, and gave me full control over Copilot’s responses, including suggesting malicious code or links.
> The attack combined a novel CSP bypass using GitHub’s own infrastructure with remote prompt injection. I reported it via HackerOne, and GitHub fixed it by disabling image rendering in Copilot Chat completely.
And parent is clearly responding to gp’s incorrect claims that “…without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice?” I’m sure there will be more attacks discovered in the future but gp is plain wrong on these points.
Please RTFA or at least RTFTLDR before you vote.
I did, in fact, read the fine article.
If you did so too, you would've read the message from GitHub, which says "...disallow usage of camo to disclose sensitive victim user content".
Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN? Would you? I don't have a premium account, nor will I ever pay microsoft a single penny. If you actually want something you can try for yourself, go find someone else to do it.
Just to make it clear for you, I was musing on the idea of being able to write out the steps to exploitation in plain English. Since the dawn of programming languages, it has been a pie-in-the-sky idea to write a program in natural language. Combine that with computing on the server end of some major SaaS(s), and you can bet people will find clever ways to circumvent safety measures. They had it coming, and the whack-a-mole game is on. Case in point: TFA.
They use "camo" to proxy all image urls, but they in fact did remove the rendering of all inline images in markdown, removing the ability to exfil data using images.
> Now why on earth would I take all the effort to come up with a new way of fooling this stupid AI only to give it away on HN?
You just didn't make it very clear that you discovered some other, unknown technique to exfil data. Might I encourage you to report what you found to GitHub?
Feel free to spout more nonsense. I was somewhat puzzled and dismayed at first, but now it amuses me.
Beautiful
If you do use AI cyber solutions, you can end up more vulnerable to security breaches instead of less.
What gave you this idea?
I thought it was always going to be a feature of LLMs, and the only thing that changes is that it gets harder to do (more circumventions needed), much like exploits in the context of ASLR.