What always gets me with this kind of thing is that people (rightly) don't trust the LLM not to make things up, yet try to solve that problem with more LLM. What makes the LLM-based quality checks any less prone to derailment?
It's not so surprising, then, to read this further down:
> One person can now do what a newsroom did and I don’t think people fully understand what that means yet.
This kind of naïveté is a major contributor to the ongoing spread of LLM-powered disinformation.