If AI is going to write a large percentage of the code, the highest-leverage thing a developer can do might actually be slowing down and deeply understanding the system (not generating more code faster).
I noticed I was spending more time reconstructing context than actually building:
– figuring out what changed
– tracing data flow
– rebuilding mental models before I could even prompt properly (without breaking other features)
– debugging slop with more slop
Better understanding → better prompts, fewer breaking changes, and more real debugging.
Over the weekend I hacked on a small prototype exploring this idea. It visualizes execution flow and system structure to make it easier to reason about unfamiliar or AI-modified codebases.
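The prototype itself isn’t shown here, but to make “visualizing execution flow” concrete, here’s a minimal sketch of one way a tool like this could capture it in Python: a sys.settrace hook that records an indented call tree as code runs. Everything below is illustrative, not the prototype’s actual implementation; trace_calls and the load/transform/pipeline functions are hypothetical names I made up for the example.

```python
import sys

def trace_calls(root_func, *args, **kwargs):
    """Run root_func and return an indented call-tree log of what it invoked."""
    log, depth = [], 0

    def tracer(frame, event, arg):
        nonlocal depth
        if event == "call":
            # Record the function being entered, indented by call depth.
            log.append("  " * depth + frame.f_code.co_name)
            depth += 1
        elif event == "return":
            depth -= 1
        return tracer  # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        root_func(*args, **kwargs)
    finally:
        sys.settrace(None)  # always detach the tracer
    return "\n".join(log)

# Hypothetical functions to trace:
def load(): return [1, 2, 3]
def transform(xs): return [x * 2 for x in xs]
def pipeline(): transform(load())

print(trace_calls(pipeline))
# pipeline
#   load
#   transform
```

A real tool would presumably render this as an interactive graph rather than text, but the core idea is the same: capture what actually executed, then present it so a human can rebuild their mental model quickly.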
Not really a polished “product” – more of a thinking tool / experiment.
I’m curious whether others are running into the same bottleneck, or if this is just a local maximum I’ve gotten stuck in.