Assembling a plausible-sounding sentence doesn't mean you know what you're talking about.
The number of people who fail to grasp this is mind-boggling.
According to a 2025 Stanford HAI report, large language models fail basic multi-step arithmetic up to 40% of the time without external tools.
https://medium.com/@dojolabs.main/why-does-ai-get-math-wrong...
You may know this somehow; I don't. Without a fundamental redesign, the basic problem will remain.
I don't believe statistics can be applied to predict answers without significant error.
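This is the contrast the "external tools" caveat points at: token prediction is probabilistic, while a conventional evaluator is deterministic and exact. A toy Python sketch (not from the source; `exact_sum_of_products` is a hypothetical helper for illustration):

```python
from fractions import Fraction

def exact_sum_of_products(pairs):
    """Sum a*b over (a, b) string pairs using exact rational arithmetic.

    A deterministic evaluator returns the same exact answer every time,
    which is why routing arithmetic to a tool removes this failure mode.
    """
    return sum(Fraction(a) * Fraction(b) for a, b in pairs)

result = exact_sum_of_products([("0.1", "3"), ("0.2", "7")])
print(result)         # 17/10 -- exact, no rounding, no sampling
print(float(result))  # 1.7
```

A model that predicts the next token can only approximate this; a calculator cannot get it wrong.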
Humans adopted computers because they provided accurate answers at low cost.
At least, until recently: now LLMs provide questionable answers at high cost.
Isn't a P-zombie about consciousness, not intelligence?