The old argument was humans read code, so write for humans. The new reality is LLMs also read code, and they read it differently. They don't get lost in deep nesting the way a junior dev does, but they are surprisingly sensitive to naming ambiguity, inconsistent abstraction levels, and context fragmentation across files. A well-structured codebase gives the model a coherent "world model" to reason within. A spaghetti codebase gives it conflicting signals, and it hallucinates confidently in the gaps.
So the argument for style hasn't disappeared; it's been redirected. You're now writing for two audiences: the human reviewer and the model that will modify it next.
The part of your framing I'd push back on: "the object design is terrible" is not a style issue. That's the part that actually breaks down under AI-assisted development. Models are very good at generating locally coherent code and very bad at maintaining global architectural integrity across iterations. Bad object design doesn't become invisible to LLMs; it compounds. Every subsequent generation inherits and amplifies the structural confusion.
Is there any published work on this? Or is it just anecdotal observation?
Go build a big monolith with an LLM versus a highly compartmentalized, decomposed/modular codebase that does the same thing.
The LLM is going to optimize the shit out of the decomposed version and maintain it well, and it's going to struggle to find even trivial bugs in the monolithic version.
Why? Context and token cost.
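The asymmetry is easy to put rough numbers on. A toy back-of-envelope sketch (every figure here is an assumption picked for illustration, not a measurement):

```python
# Toy illustration of the context/token asymmetry described above.
# All numbers are assumptions chosen for illustration, not measurements.

AVG_TOKENS_PER_LINE = 10  # rough heuristic for typical source code

def tokens_to_load(file_lines: int) -> int:
    # An LLM pays for every token it must read before it can edit anything.
    return file_lines * AVG_TOKENS_PER_LINE

# Monolith: the bug lives somewhere in one 10,000-line file, so the model
# has to ingest (or repeatedly search) most of it to localize the fault.
monolith = tokens_to_load(10_000)

# Modular: the same behavior is isolated in a 300-line module, plus a
# 100-line interface file that tells the model where to look.
modular = tokens_to_load(300) + tokens_to_load(100)

print(f"monolith: {monolith:,} tokens, modular: {modular:,} tokens")
# monolith: 100,000 tokens, modular: 4,000 tokens
```

Under these (made-up) sizes the monolith costs ~25x more context per bug fix, and that's before counting the retries when the relevant code doesn't fit in the window at all. Decomposition is effectively an index: it lets the model load only the module that owns the behavior.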