That said, one thing review can't fully cover is runtime behavior under real traffic. Not saying that's a review problem – it's just a separate layer that still needs attention after the merge.
In my opinion, you have to review it the way you always review code: does the code do what it's supposed to do? Does it do it in a clean, modular way? Does it have a lot of boilerplate that should be reduced with a few helper functions?
It doesn't matter how it was produced. A code review is supposed to be: "Here's this feature {description}," and then you look at the code and see whether it does the thing and does it well.
Even without LLMs, there was a thought process that led the engineer to a specific outcome for the code: maybe some conversations with other team members, discussions about trade-offs, alternatives considered, and so on. All of that existed before, too.
Was all of that included in the PR in the past? If so, the engineer had to add it then, so they should still do so now. If not, why do you suddenly need it just because an AI was involved?
I don't see why things would fundamentally change.
What does help is requiring a short design note in the PR explaining the intent, constraints, and alternatives considered. That gives the context reviewers actually need without turning the review into reading a chat transcript.