Shallow security, however, has gotten easier. You can now run an automated AI security audit every day for (basically) free, without hiring specialists to run pen tests.
Which makes the whole thing even more challenging: safe on the surface while vulnerable in the details creates a false sense of safety.
Yet all of this becomes a concern only once a product sees any success. Once it does, the company behind it should, hypothetically, have the money to fix the vulnerabilities (I know, "hypothetically"). The maintenance cost hits much earlier than that, though; it kicks in even for a personal pet project that is isolated from the broader internet. So I treat it as an early filter that will dampen the enthusiasm of wannabe founders.
I actually built Pipelock for this. It's a network proxy that sits between the agent and everything it talks to. Still early, but the gap is real. https://github.com/luckyPipewrench/pipelock
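The core idea of sitting between the agent and the network is egress filtering. As a minimal sketch (not Pipelock's actual implementation; the hosts and policy format here are made up for illustration), the proxy's decision boils down to checking each outbound request against an allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real policy would be configurable.
ALLOWED_HOSTS = {"api.github.com", "pypi.org"}

def is_allowed(url: str) -> bool:
    """Return True only if the request targets an allowlisted host."""
    host = urlparse(url).hostname or ""
    # Exact match only: no suffix matching, so a host like
    # "pypi.org.attacker.com" never slips through.
    return host in ALLOWED_HOSTS

print(is_allowed("https://api.github.com/repos"))    # True
print(is_allowed("https://exfil.example.com/data"))  # False
```

Everything not explicitly allowed is denied by default, which is the only stance that makes sense when the "user" making requests is a non-deterministic agent.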
It's like a Petri dish for inventing new angles of attack.
Oh, and let's not forget that coding agents are non-deterministic: the same prompt yields a different result each time, especially for more complex tasks. So it's probably enough to wait until a vibe-coded product "slips." Ultimately, as a black-hat hacker, I don't need every product to be vulnerable; I can work with the few that are.