It doesn't actually KNOW anything.
AI has no actual comprehension, and it never will. It can mimic, obviously, with varying degrees of success. But it doesn't actually KNOW what it is 'saying', and without comprehension there will always be wrong answers or hallucinations.
I see current AI as a tool: a sophisticated lathe, not a thinking partner. The question isn't whether it "knows" anything.
The interesting question is: why does a model that has the correct information in its weights still give wrong answers? That's an engineering problem, not a metaphysics problem.
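To make that engineering-vs-metaphysics point concrete, here's a toy sketch (the numbers and the "capital of France" setup are made up for illustration, nothing like a real model): the correct answer is the single most likely one in the distribution, so in some sense the "knowledge" is there, yet sampled decoding still returns a wrong answer a meaningful fraction of the time, while greedy decoding never does. Which answer you get is a decoding choice, not a question of comprehension.

```python
import random

# Toy stand-in for "knowledge in the weights": a probability distribution
# where the correct answer is clearly the most likely one.
answer_probs = {
    "Paris": 0.80,      # correct, and the model "prefers" it
    "Lyon": 0.12,
    "Marseille": 0.08,
}

def sample(probs, temperature=1.0):
    """Sample an answer after temperature-scaling the probabilities."""
    scaled = {a: p ** (1.0 / temperature) for a, p in probs.items()}
    total = sum(scaled.values())
    r = random.random() * total
    cumulative = 0.0
    for answer, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return answer
    return answer  # floating-point fallback

random.seed(0)
trials = 10_000
wrong = sum(sample(answer_probs) != "Paris" for _ in range(trials))
print(f"wrong answers despite 'knowing': {wrong / trials:.1%}")   # roughly 20%
print(f"greedy decoding: {max(answer_probs, key=answer_probs.get)}")  # always Paris
```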
But here's what bothers me about the "AI doesn't truly know" argument: do we? When a senior dev answers "use Kubernetes" without asking about team size or user count, are they "comprehending" or pattern-matching on what sounds authoritative? The AI failure I described is identical to what I see in human experts daily.
Maybe the flaw isn't unique to AI. Maybe it's a mirror.