Yep. When the credentials were used earlier in the session, they were scrubbed from the logs - so there is some checking, just not on the code that actually gets committed.
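For what it's worth, the missing check is cheap to add yourself. Here's a minimal sketch of a git pre-commit hook that scans staged changes for credential-shaped strings before anything lands in history - the regexes are illustrative examples, not an exhaustive list, so treat them as assumptions to tune per project:

```python
#!/usr/bin/env python3
# Sketch of a pre-commit secret scan: the kind of check that apparently
# exists for the session logs but not for the committed code.
import re
import subprocess
import sys

# Illustrative patterns for common credential shapes (not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key block
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def staged_diff() -> str:
    # Only scan what is about to be committed, not the whole tree.
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    hits = []
    for line in staged_diff().splitlines():
        if not line.startswith("+"):  # added lines only
            continue
        if any(pat.search(line) for pat in SECRET_PATTERNS):
            hits.append(line)
    if hits:
        print("Refusing to commit; possible credentials in staged changes:")
        for h in hits:
            print("  " + h[:80])
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Drop it in `.git/hooks/pre-commit` (or wire it through a hook manager) and the commit fails before the secret ever leaves the machine.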
LLMs are not intelligent machines; they are lying engines that predict the next most likely thing to do or say. If publishing your credit card details, home address, and blood type meshes with the last thing the model ingested, it'll do it.