4 points by qbacode 5 hours ago | 6 comments
  • mrothroc 4 hours ago
    I do this across several different codebases, all event-driven microservices, some greenfield and some brownfield.

    My strategy is to have a central spec, typically protobuf or openapi, and every service has a make target to generate code from that. The dependencies stop being in someone's head and start being in the spec. I had this pattern long before coding agents because it helps human devs, too.

    The benefit is that my CI process can deterministically check if a) someone changed the spec or b) the service code doesn't compile or lint when built against the freshly generated code. This is a hard, enforced gate. If it fails, either it needs manual review or it gets sent back to the coding agent to fix automatically, so humans don't waste time looking at trivial issues.

    The agents can move as fast as they want within a single service. The spec gate catches cross-service breakage before it deploys.
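    The spec gate described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual tooling: the file names (`service.proto`, a `.spec_sha256` stamp written by the regeneration make target) are hypothetical.

```python
import hashlib
from pathlib import Path

def spec_fingerprint(spec_path: Path) -> str:
    """Deterministic fingerprint of the central spec (protobuf/OpenAPI)."""
    return hashlib.sha256(spec_path.read_bytes()).hexdigest()

def spec_gate(spec_path: Path, stamp_path: Path) -> bool:
    """Pass only if generated code was rebuilt for the current spec.

    The make target that regenerates code writes the spec's hash to
    stamp_path; CI recomputes the hash and compares. A mismatch means
    someone changed the spec without regenerating, so the gate fails and
    the change goes to manual review or back to the coding agent.
    """
    if not stamp_path.exists():
        return False
    return stamp_path.read_text().strip() == spec_fingerprint(spec_path)
```

    In CI this check runs before the compile/lint step, so a spec drift is reported as a hard failure rather than a subtle cross-service break at deploy time.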

  • JoshTriplett 5 hours ago
    You don't. This is the kind of problem created by vibe coding.

    Escalate upwards, challenge the policy, cite this as an example. Also cite things like https://arxiv.org/abs/2511.04427 : "transient increase in project-level development velocity, along with a substantial and persistent increase in static analysis warnings and code complexity".

    If the policy doesn't change, find a new company.

    Now, all that said: it would also be a good idea to have better testing infrastructure that actually tests the services in concert and not just individually. That testing infrastructure will be useful for the humans who take over from the vibe coding and start cleaning up the mess.

    • qbacode 4 hours ago
      Do you know of any infrastructure that helps with this, or do we have to build something like this ourselves?
  • aavci 5 hours ago
    "Nobody caught it in review" and "they're in someone's head" sound like issues to work on.

    Some todos that come to my mind:

    - Review more thoroughly.

    - 'Vibe code' some unit tests

    - Document and communicate the things that are in team members' heads that should be openly shared.

    • qbacode 4 hours ago
      - Review more thoroughly -> easier said than done if you want to keep velocity

      - 'Vibe code' some unit tests -> yep, we do that

      - Document and communicate the things that are in people's heads -> agreed, that's the hard part in practice :/
  • tstrimple an hour ago
    This is no different from making changes in any micro-services environment. You have to be careful: you've got existing contracts with other services, and typically you don't just change things like that. If your environment is large enough to actually need micro-services, the lines of communication between the owner of a service and its consumers have exploded to the point that coordinating any breaking change is basically impossible. So when you "rename" a field, you add the new field and maintain the old one so consumers of your service can transition gracefully (prompted by deprecation warnings). You monitor until the old field is no longer referenced, and then you remove it.
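    The rename-by-expansion pattern above can be sketched as a serializer; the field names and the `serialize_user` helper are hypothetical, chosen only to illustrate the transition.

```python
def serialize_user(user: dict) -> dict:
    """Serialize a user for the wire during a field rename.

    Expand phase: emit the new `display_name` field alongside the old
    `name` alias so existing consumers keep working. Once monitoring
    shows no consumer still reads `name`, the contract phase deletes
    the alias and the rename is complete.
    """
    return {
        "display_name": user["name"],  # new field consumers migrate to
        "name": user["name"],          # deprecated alias, kept for now
    }
```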

    It's also telling that none of your tests caught the issue. Why don't you have consumer-like tests of your services? If, after every change, you test your micro-service against what your consumers are actually sending, failures like this show up quite easily. This isn't a failure of vibe coding; it's a failure of properly architecting and testing your micro-services. It happens all the time when companies blindly follow what Netflix engineers are doing without understanding the nuance and tradeoffs.
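    A minimal consumer-like check along those lines, with a recorded request and an in-process handler standing in for the real service (all names here are hypothetical):

```python
# A recorded sample of what a real consumer actually sends, plus the
# response fields that consumer is known to read.
RECORDED_REQUEST = {"user_id": 42}
REQUIRED_RESPONSE_FIELDS = {"user_id", "display_name"}

def handle_get_user(request: dict) -> dict:
    """Stand-in for the service handler under test."""
    return {"user_id": request["user_id"], "display_name": "Ada"}

def check_consumer_contract() -> bool:
    """Replay the recorded request and verify the consumer's fields survive.

    Any field missing from the response would have broken this consumer
    in production; running this on every change surfaces the breakage
    before deploy instead of after.
    """
    response = handle_get_user(RECORDED_REQUEST)
    missing = REQUIRED_RESPONSE_FIELDS - response.keys()
    return not missing
```

    Renaming a response field without keeping the old alias would make this check fail immediately, which is exactly the kind of cross-service breakage the thread is about.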
