"AI models hallucinate less than AI company executives"
I totally hate the ontology of AI models. It's a constant theme for me: allowing these words to become terms of art invites the mis-application of meaning into ordinary discourse.
Whatever it is they do, doing it less or more than humans isn't the point. They aren't humans, it's not a human behaviour, and trying to sneak it in as "like" is really a sin of commission: it's part of the message, making personhood assumed, not tested.
Do people have rich fantasy lives? Do they routinely mis-read, mis-hear, and mis-see? Sure! That's what a mondegreen is. It's what a whole bunch of insurance accident reports are: you literally didn't see the other car.
We functionally live inside a simulation of the world, created internally by the brain. We all know that. So at one level the CEO is being disingenuous.
I'm thinking about a carrot right now. It has a face. And hands. It's attached to the CEO of Anthropic in several entertaining ways. Does the carrot exist? No: it's a conceptual carrot. But I sure am thinking about it.