Reason being: memetic monoculture.
Everyone has blind spots, and the same goes for LLMs: no matter how much you use a particular model (or ask a particular human), when you hit one of its blind spots it can't reflect on it and actually resolve it. Even if you ask it to, it will produce a new "solution" with the same flaw, whatever that flaw happens to be.
Code review adds an extra pair of eyes to catch such things (or at least it can; I have had coworkers who were write-only-read-never when it came to criticism of their code or architectural choices).
In chess, this is called "being a centaur" (a human + AI hybrid working better than either alone); people have argued that this stopped being true around 2013, but given that was 16 years after Deep Blue finally beat Kasparov, even a delay like that is economically useful for those of us who want to keep getting paid.
(Was the Ask HN itself AI-generated? The trouble is, humans copying each other is also a way to get a memetic monoculture, and we also copy LLMs…)
But assuming we find a way to make gold grow on trees, the value of gold will also drop considerably.
Now if we look at the current situation, AI does not replace developers. It allows non-developers to generate code that looks like it works, which may be enough in some situations and not in others. It most definitely isn't capable of building a big software project that must work. There is a big difference between a small script that sorts my photos and a big program used by professionals to accomplish real work. If Excel starts making calculation errors, it suddenly stops being a viable tool for many fields.
AI is just a tool, like syntax highlighting or linting. Those did not replace developers.