So if you can make an AI at least partly self-aware (in the consciousness sense) by having it run an anomaly detector on its own output, that seems to me like a fairly big deal. Not "agency", and I suspect not even all the way to "self-aware", but still a big step.
Big theoretically, and also big practically. If you can make an LLM shut up rather than hallucinate, that's a big step up in usability. "I don't know" is far more useful than confident errors. (At least silence doesn't have negative value.)
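To make the idea concrete, here's a toy sketch of that kind of self-monitoring gate. Everything in it is illustrative, not from any real system: the anomaly score is just the mean per-token log-probability of the model's own output, and the threshold is an arbitrary made-up number. The point is only the shape of the mechanism: score your own generation, and say "I don't know" instead of emitting it when the score looks anomalous.

```python
def answer_or_abstain(tokens, logprobs, threshold=-1.5):
    """Return the generated answer only if it clears a confidence gate.

    tokens:    the model's generated tokens (strings)
    logprobs:  per-token log-probabilities the model assigned to them
    threshold: hypothetical cutoff; a real system would tune this

    The "anomaly detector" here is deliberately crude (mean log-prob);
    anything that flags unusual-looking generations could slot in.
    """
    score = sum(logprobs) / len(logprobs)
    if score < threshold:
        return "I don't know."  # abstain rather than hallucinate
    return "".join(tokens)

# A confident generation passes the gate:
print(answer_or_abstain(["Paris"], [-0.1]))      # → Paris
# A low-confidence one gets replaced with an abstention:
print(answer_or_abstain(["Atlantis"], [-4.2]))   # → I don't know.
```

A real version would be harder than this, of course; the whole open question is whether any scoring function separates "unusual but right" from "fluent but made up".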