https://codeaholicguy.com/2026/01/31/ai-coding-agents-explai...
Right now, models are good at solving small, local problems but much weaker at keeping large systems aligned over time. Having humans own the overall design, break work into small tasks, and integrate the results is therefore a pragmatic division of labor.
I see this less as a permanent limitation and more as a workflow gap. When AI is used purely as a conversational tool, humans end up doing all the convergence manually: re-explaining the design, re-checking constraints, and steering each output back toward the larger system.
Concepts like rules, skills, scoped agents, and verification feel like early attempts to move some of that convergence into the system itself, not to replace human judgment, but to reduce how much needs to be constantly reapplied.
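As a rough illustration of what "moving convergence into the system" might look like, here is a hypothetical sketch (not any specific framework's API; all names are invented): a scoped agent's output is run through a set of machine-checkable rules before integration, so a constraint like "stay inside your assigned files" is enforced automatically instead of being reapplied by a human on every task.

```python
# Hypothetical sketch: pairing a scoped task with automatic
# verification rules, so convergence checks run on every result
# instead of being reapplied manually by a reviewer.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Task:
    description: str   # one small, local problem
    scope: List[str]   # files the agent is allowed to touch

@dataclass
class Result:
    task: Task
    patch: str         # the agent's proposed change, diff-style

# A rule is a predicate over a result: it returns an error
# message on failure, or None when the rule passes.
Rule = Callable[[Result], Optional[str]]

def scope_rule(result: Result) -> Optional[str]:
    """Reject patches that touch files outside the task's scope."""
    for line in result.patch.splitlines():
        if line.startswith("+++ ") and line[4:] not in result.task.scope:
            return f"patch touches out-of-scope file: {line[4:]}"
    return None

def verify(result: Result, rules: List[Rule]) -> List[str]:
    """Apply every rule; an empty list means the result converges."""
    return [msg for rule in rules if (msg := rule(result)) is not None]

# Usage: a toy agent output checked before integration.
task = Task("rename helper", scope=["utils.py"])
bad = Result(task, "+++ server.py\n-old\n+new")
print(verify(bad, [scope_rule]))  # flags the out-of-scope file
```

The point is not the specific check but the shape: human judgment defines the rules once, and the system replays them on every task, which is exactly the kind of convergence work that otherwise has to be redone in conversation.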