This assumption depends on determined ignorance of a foundational structural distinction between any computer, no matter how powerful or complex, and human beings:
Human capacities are endogenous: our intelligence manifests from life, the sustained arrangement of ancestral molecules that have been alive since the beginning of time and will not die unless life dies.
Computing systems are the inverse of life: no matter how sophisticated, computers are exogenous, and must be arranged by humans.
Human intelligence is emergent, without cause in any scientific sense of causality.
Artificial intelligence is imparted by formally structuring a machine and exposing it to stimuli created by life (humans).
There's never been a clear theoretical distinction between the human organism and its environment.
For AI there is a clear distinction: a human being is required to construct the machine.
Regardless of what human beings intend to do, first we live, because we can. This is how we got here.
Machines are constructed, and as such exist wholly as a consequence of human intention.
Human intelligence is self-governing and self-regulating.
Computer construction is an output of life, not a form of life.
Machine "intelligence" is merely the re-appearance of human intelligence from a plane of stimulus to a plane of responses that provide humans the impression of agency through a sort of behavioral mirror.
When computers manifest, organize, and sustain themselves across human generations without direct human intention or involvement, it might then be possible to discuss "intelligence".
But as there is no analogy whatsoever between the endogenous manifestation of life and life's exogenous manifestation of computers, there is no basis for comparison: one leads to the other, but not vice versa. And there is no reason to expect this dynamic will ever reverse, so artificial intelligence as a theory is moot.