Despite them having genuinely direct senses (as direct as ours), whether those are token streams or images, we don't give our models the freedom to do that. They do one-shot thinking. One-directional thinking. Then that working memory is erased, and the next task starts.
My point being: they almost certainly don't have qualia now, but we can't claim that means anything serious, because we strictly enforce a context in which they have no freedom to discover it, whether or not they could with small changes to architecture or context.
So any strong opinions about machines' innate or potential capacity for qualia today are just confabulation.
Currently we strictly prevent the possibility. That is all we can say.
We only experience qualia; we don't know what it actually is to have it. We don't know why the moist electrochemistry 'twixt our ears has it, nor what makes it go away for most of our sleep only to return for our dreams (or whether it's absent during the dreams and we only have qualia of the memories of the dreams). We cannot truly be sure that any given AI does or doesn't have it until we know what it is that it does or doesn't have.
We also definitely can't just ask LLMs. That has been a problem since Blake Lemoine was fooled by an AI that was plainly making up impossible stories about experiences it couldn't have had, and both the fooling itself and the response to it proved Alan Turing wrong about the idea that people would reject what he called "the extreme and solipsist point of view" of:
the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man.
- https://genius.com/Alan-turing-computing-machinery-and-intel...

These machines can absolutely answer "viva voce", even when we also know they're making stuff up.
Non-standard cognitive architectures are already coherent. Even if they weren't, why do you think qualia cannot be replicated with a signal similar to semantic meaning? Are there additional dimensions that we can feel that we've never talked about or, more importantly, written down?
By coincidence, I saw "Good Will Hunting" fairly recently, so I remember that scene well. It had been years since I last saw it.