2 points by flail 7 hours ago | 1 comment
  • pipejosh 7 hours ago
    The maintenance burden is real but I think security is the bigger gap. People vibing out code with AI aren't thinking about input validation or dependency vulnerabilities. They build it, it works, they ship it. Then they're running unpatched code with no security review. That's where things get ugly.
    • flail 7 hours ago
      Security is an even bigger issue than it looks at first glance. Security risk by omission has always been a thing (AI or not), but now we face a whole new level of risk, from prompt injection to malicious libraries crafted specifically to be pulled in by coding agents: https://garymarcus.substack.com/p/llms-coding-agents-securit...

      The most shallow layer of security, however, seems to get easier. You can now run an automated AI security audit every day for (basically) free, without hiring specialists to run pen tests.

      Which makes the whole thing even more challenging. Safe on the surface while vulnerable in the details creates a false sense of safety.

      Yet all of this becomes a concern only once a product sees any success. And once it does, hypothetically, the company behind it should have the money to fix the vulnerabilities (I know, "hypothetically"). The maintenance cost hits way earlier than that. It kicks in even for a personal pet project that is isolated from the broader internet. So I treat it as an early filter that will reduce the enthusiasm of wannabe founders.

      • pipejosh 6 hours ago
        The automated audit only covers static analysis. When the agent actually runs, hitting MCP servers, making HTTP calls, getting responses back, that's where the real problems show up. Prompt injection through tool responses, malicious libraries that exfiltrate env vars, SSRF from agents that blindly follow redirects. Code audits miss all of it because this is a runtime and network problem, not a code quality problem.
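        Concretely, the SSRF-via-redirects case is the kind of thing a code audit won't flag but a network-layer check catches. A minimal sketch of the sort of check a proxy could apply to every hop (hypothetical helper names, not any real proxy's API):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_private_target(url: str) -> bool:
    """Resolve the URL's host and check whether it lands in a
    loopback/private/link-local range (classic SSRF targets)."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL -> refuse
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable host -> refuse
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_loopback or ip.is_private or ip.is_link_local:
            return True
    return False

# A proxy would run this on the original URL *and* on each redirect
# target, instead of letting the agent's HTTP client follow blindly.
assert is_private_target("http://127.0.0.1:8080/admin")
assert is_private_target("http://10.0.0.5/internal")
```

        Resolving first and checking the resulting IPs matters: a redirect to an innocent-looking hostname that resolves to 169.254.169.254 is the standard cloud-metadata trick.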

        Built Pipelock for this actually. It's a network proxy that sits between the agent and everything it talks to. Still early but the gap is real. https://github.com/luckyPipewrench/pipelock

        • flail 6 hours ago
          Yes. And the more autonomously we generate code, the more of these (and not only these) vulnerabilities we'll be adding. Combine that with AI automation on the attack side, and you have an all-out security mess.

          It's like a Petri dish for inventing new angles of attack.

          Oh, and let's not forget that coding agents are non-deterministic. The same prompt will yield a different result each time, especially for more complex tasks. So it's probably enough to wait until a vibe-coded product "slips." Ultimately, as a black hat hacker, I don't need all products to be vulnerable. I can work with the few that are.

          • pipejosh 6 hours ago
            Agreed. The non-determinism makes traditional testing basically useless here. You can't write a test suite for "the agent decided to do something unexpected this time." Logging and runtime checks are the only way to catch the weird edge cases.
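            Even something dead simple helps. A sketch of the kind of runtime guard I mean, a wrapper that logs every tool call and refuses anything off an allowlist (hypothetical names, not a real library's API):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-runtime")

# Hypothetical allowlist: everything else is refused at runtime.
ALLOWED_TOOLS = {"read_file", "http_get"}

def guarded_call(tool: str, args: dict, dispatch):
    """Log the invocation, then either refuse or forward it.
    `dispatch` is whatever actually executes the tool."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "args": args,
    }
    log.info(json.dumps(record))  # audit trail, even for refused calls
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} not on the allowlist")
    return dispatch(tool, args)
```

            You can't predict what the agent will try, but you can make every attempt observable and default-deny the surprising ones.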