I can't think of a harder job than changing public expectations on how to use AI. Landing a man on the surface of Mars and returning him to Earth would be easier.
In the end, hallucination really comes down to the incentives built into the AI: giving no answer is scored just as negatively as giving a wrong answer. The model is therefore incentivized to produce some answer, any answer, even one with no basis in reality.
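To make that incentive concrete, here is a toy sketch (the scoring numbers are my own assumption, not any particular benchmark's rubric): if a wrong answer and "I don't know" both score zero, then guessing is never worse than abstaining, so a model tuned to maximize its score will always guess.

```python
# Toy illustration of the incentive problem, not any specific model's
# training setup: correct answers score 1, while wrong answers and
# abstentions both score 0.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under binary grading: correct = 1, wrong = 0, abstain = 0."""
    return 0.0 if abstain else p_correct

# Even a wild guess with a 1% chance of being right beats abstaining.
for p in (0.01, 0.3, 0.9):
    print(f"p(correct)={p:.2f}  guess={expected_score(p, False):.2f}  "
          f"abstain={expected_score(p, True):.2f}")
```

Under this kind of grading, "I don't know" can never win, which is exactly the pressure that pushes a model toward confident-sounding fabrication.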