2 points by andreyandrade 2 days ago | 1 comment
  • lax4ever 2 days ago
    This exposes, yet again, what should be the most commonly acknowledged flaw in AI:

    It doesn't actually KNOW anything.

    AI has no actual comprehension, and it never will. It can mimic, obviously, to varying degrees of success. But it doesn't actually KNOW what it is 'saying', and without comprehension there will forever be wrong answers or hallucinations.

    • andreyandrade 2 days ago

        I see current AIs as tools—a sophisticated lathe, not a thinking partner. The question isn't whether it "knows" anything.
      
        The interesting question is: why does AI with correct information in its weights still give wrong answers? That's an engineering problem, not a metaphysics problem.
      
        But here's what bothers me about the "AI doesn't truly know" argument: do we? When a senior dev answers "use Kubernetes" without asking about team size or user count, are they "comprehending" or pattern-matching on what sounds authoritative? The AI failure I described is identical to what I see in human experts daily.
      
        Maybe the flaw isn't unique to AI. Maybe it's a mirror.
      • allears 2 days ago
        Why not both? It's certainly true that human 'experts' often rely on pattern-matching without fully understanding a problem. But AI has no understanding at all, so pattern matching is its only skill, whereas the human capacity for understanding isn't only greater than AI's, it's fundamentally different. In what ways? That seems to be the multi-trillion dollar question.