1 point by seekerXtruth 2 hours ago | 3 comments
  • seekerXtruth 2 hours ago
    I’ve been experimenting quite a bit with AI-assisted development recently (Copilot, Cursor, Claude, etc.), both in larger systems and in smaller side projects.

    What keeps surprising me is not hallucinations or model output quality as such, but how easy it is to lose shared architectural context over time.

    At first everything feels great. Things move fast. Demos work. Features pile up.

    But after a while I notice that parts of the system were built differently than I would have designed them myself: not necessarily wrong, just inconsistent in ways that are hard to see early.

    Nothing breaks right away. But at some point I realize I wouldn’t feel comfortable taking full responsibility for the system anymore, especially under long-term production constraints.

    What I’m struggling with is this: How do you keep architectural intent explicit when AI is writing a lot of the code?

    For me it seems less like a prompt or a context window problem and more like a governance issue: Which decisions must stay stable? What must not change? Where is AI allowed to explore, and where should it be constrained?
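
    One way I’ve been trying to make the "what must not change" part concrete: write the rule down as an executable check rather than prose. A minimal sketch, assuming hypothetical domain/ and adapters/ packages, that fails CI whenever core code imports from the outer layer:

      # Minimal sketch: walk the protected package and flag any import of the
      # forbidden layer. Package names (domain, adapters) are hypothetical.
      import ast
      import pathlib
      import sys

      PROTECTED_DIR = pathlib.Path("domain")   # core layer that must stay clean
      FORBIDDEN_PREFIX = "adapters"            # outer layer it must never import


      def violations(root: pathlib.Path):
          """Yield (file, line, module) for each import of the forbidden layer."""
          for path in root.rglob("*.py"):
              tree = ast.parse(path.read_text(), filename=str(path))
              for node in ast.walk(tree):
                  if isinstance(node, ast.Import):
                      names = [alias.name for alias in node.names]
                  elif isinstance(node, ast.ImportFrom):
                      names = [node.module or ""]
                  else:
                      continue
                  for name in names:
                      if name == FORBIDDEN_PREFIX or name.startswith(FORBIDDEN_PREFIX + "."):
                          yield path, node.lineno, name


      if __name__ == "__main__":
          found = list(violations(PROTECTED_DIR))
          for path, lineno, name in found:
              print(f"{path}:{lineno}: domain code imports {name}")
          sys.exit(1 if found else 0)

    The point is less this particular rule than that it is executable: it survives context-window resets and doesn’t care which tool (or person) wrote the code.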

    I’ve started experimenting with more explicit roles, workflows, and step-by-step development phases, but honestly I’m not convinced yet this is the right balance.
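
    One way to make that workflow concrete (rough sketch, assuming a hypothetical docs/adr/ folder): keep the stable decisions as short ADR files in the repo and mechanically prepend them to every task handed to the assistant, so the architectural intent travels with each request instead of living only in my head:

      # Rough sketch: prepend the recorded decisions to every task prompt.
      # The docs/adr/ location and file layout are assumptions, not any
      # particular tool's API.
      import pathlib

      ADR_DIR = pathlib.Path("docs/adr")


      def build_prompt(task: str) -> str:
          decisions = "\n\n".join(
              p.read_text().strip() for p in sorted(ADR_DIR.glob("*.md"))
          )
          return (
              "Architectural decisions that must not be violated:\n\n"
              f"{decisions}\n\n"
              f"Task:\n{task}"
          )


      if __name__ == "__main__":
          print(build_prompt("Add a retry policy to the payment client."))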

    I’m curious how others handle this in practice: How do you get real speed from AI tools without slowly drifting into a system you no longer fully understand or trust?

  • beardyw an hour ago
    It's exactly the same problem as with humans. The only difference is you can control (willing) people more easily.
    • seekerXtruth an hour ago
      I agree. The difference for me is speed and feedback loops though: with AI, architectural drift can accumulate much faster and stay invisible longer. That’s what makes governance feel harder here.