So far AI has shown it cannot understand this layer of software. Studies of how LLMs derive their answers to technical questions suggest they are not based on first principles or logical reasoning, but on sparse representations derived from training data. As a result, a model can answer extremely difficult questions that are well represented in its training data yet fail miserably on the simplest ones, e.g. a simple ten-digit addition.
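To make that concrete, here is a minimal sketch of how one might probe a model with random ten-digit additions and count how often the exact sum comes back. It assumes the openai Python SDK and a placeholder model name, neither of which comes from the original comment:

    # Sketch only: assumes the openai Python SDK (v1.x) and OPENAI_API_KEY in the
    # environment; the model name is a placeholder, not something from the comment.
    import random
    from openai import OpenAI

    client = OpenAI()

    def probe_addition(trials=20, model="gpt-4o-mini"):
        correct = 0
        for _ in range(trials):
            a = random.randint(10**9, 10**10 - 1)  # random ten-digit operands
            b = random.randint(10**9, 10**10 - 1)
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user",
                           "content": f"What is {a} + {b}? Reply with only the number."}],
            )
            answer = reply.choices[0].message.content.strip().replace(",", "")
            correct += answer == str(a + b)  # count exact matches
        return correct / trials

    print(f"exact-answer rate: {probe_addition():.0%}")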
This is what the article is getting at when it says small teams on new projects are more productive. Chances are these teams are working on small enough problems, and they have far more flexibility to produce software that is experimental and doesn't work all that well.
I am also not surprised the hype exists. The software industry does not value software design; instead it optimizes codebases so they can scale by adding an army of coders who produce a ton of duplicate logic and unnecessary complexity. That goes hand in hand with how LLMs work, so the transition is seamless.
Where I’d push back slightly is on framing this primarily as an LLM limitation. I don’t expect models to reason from first principles about entire systems, and I don’t think that’s what’s missing right now. The bigger gap I see is that we haven’t externalised design knowledge in a way that’s actionable.
We still rely on humans to reconstruct intent, boundaries, and "how work flows" every time they touch a part of the system. That reconstruction cost dominates, regardless of whether a human or an AI is writing the code.
I also don’t think small teams move faster because they’re shipping lower-quality or more experimental software (though that can be true). They move faster because the design surface is smaller and the work routing is clear. In large systems, the problem isn’t that AI can’t design; it’s that neither humans nor AI are given the right abstractions to work with.
Until we fix that, AI will mostly amplify what already exists: good flow in small systems, and friction in large ones.