AI coding agents inherit your shell environment, which means every secret in your env vars and dotfiles is one prompt injection away from exfiltration. I wrote up a quick post on a low-cost method to reduce the attack surface. It doesn't solve the problem completely, but combined with other classic mitigations like sandboxing it can meaningfully reduce the chances of pwnage.
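For a flavour of the kind of thing I mean (a rough sketch of the general direction, not necessarily the exact setup from the post): run the agent through a wrapper that only forwards an allowlisted environment, so API keys and tokens sitting in your shell env never reach it. The agent command name and the allowlist below are placeholders.

    #!/usr/bin/env python3
    # Hypothetical wrapper: start the agent with an allowlisted environment so
    # secrets in the parent shell (API keys, tokens, cloud creds) are not inherited.
    import os
    import subprocess
    import sys

    # Variables the agent actually needs; everything else is dropped.
    ALLOWLIST = {"HOME", "PATH", "TERM", "LANG", "USER", "SHELL"}

    clean_env = {k: v for k, v in os.environ.items() if k in ALLOWLIST}

    # "agent" is a placeholder for whatever agent CLI you run; args pass through.
    sys.exit(subprocess.call(["agent", *sys.argv[1:]], env=clean_env))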
I'd be curious to learn what else you all are doing in this domain.