It isn't returning code, because it doesn't know what "code" is. It's returning language, essentially "code-shaped text." It works as well as it does, when it does, only because the model is trained on examples of working code supplied by humans, so whatever it returns is likely to be mostly correct, at least in common cases where a high-probability match exists.
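To make that concrete, here's a toy sketch of the mechanism (a bigram model, nowhere near a real transformer, with made-up names like corpus and follows): it learns which token tends to follow which from a tiny pile of human-written code, then emits a statistically likely next token. What comes out is code-shaped text: often valid, occasionally nonsense, and the model can't tell the difference.

    import random
    from collections import defaultdict

    # A few lines of "working code supplied by humans".
    corpus = [
        "for i in range ( 10 ) : print ( i )",
        "for x in range ( 5 ) : print ( x )",
        "for i in range ( n ) : total += i",
    ]

    # Count which token follows which; this table is the entire "understanding".
    follows = defaultdict(list)
    for line in corpus:
        tokens = line.split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a].append(b)

    # "Generate code": start from a token, keep sampling a likely successor.
    token, out = "for", ["for"]
    for _ in range(12):
        if token not in follows:
            break
        token = random.choice(follows[token])  # a high-probability match, nothing more
        out.append(token)

    print(" ".join(out))  # e.g. "for i in range ( 5 ) : print ( x )", plausible but maybe wrong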
If by "artificial intelligence" you mean something like the computer in Star Trek - essentially a sentient and self-aware being - then yes, that is a myth. That isn't what LLMs are. Although plenty of people believe otherwise, for whatever reason.
The problem is that because LLMs can use and respond to natural language, we humans are hardwired to see them as exactly that kind of sentient being and to anthropomorphize them. We imagine that if we give them a problem, there's basically a little man inside the machine smart enough to understand it, searching through its data and trying to solve it the way a human would. But no, the only thing they're doing is constructing semantically plausible output to match an input.
And it's wild that it works as well as it does, but most of that appearance of intelligence comes down to training on human effort, filtered through human assumptions and bias.
If people on HN can believe LLMs are sentient, sapient, intelligent beings (even more so than other humans, I suspect), then average people, caught at the intersection of LLM marketing, a hundred years of pop sci-fi cultural conditioning, and a million years of primate evolution, don't stand much of a chance.
Every day the AI boosters have a slot machine to sell you, and you've fallen for it.
The thing is that LLMs are not moral subjects; they don't feel bad the way you do, or the way a dog or a horse does, when they let somebody down. I worked for a company developing prototypical foundation models circa 2018, and one of the reasons I didn't invent ChatGPT is that I wouldn't have given a system credit for making lucky guesses.
I have no idea if I'd do better with Chinese sources. If there were a big FAQ or wiki in Chinese, I could probably load it into IntelliJ IDEA and ask Junie questions about it... Maybe I should! I guess it would be six months ahead in terms of events and might recommend using operators I can't get, but I could live with that. And for that matter, I don't like the quality of the translations I have available for things like Investiture of the Gods.
"Garbage In, Garbage Out"
"You get out of it what you put into it."
You can prove this by asking any of the super-smart LLMs with access to 'current' data (search and so on) which US President bombed the most countries in their two terms. They will claim it was Obama, or that they cannot determine the answer because it's "complicated." The truth is, the USG and its technocrats instruct and train these bots to lie in support of the state agenda.
These bots will even claim the true answer is misinformation, even after you force them to the factually correct answer. Just like the Wizard of Oz, it's just a sad little man pulling the strings of a terrifying facade.
'Lying', 'hallucinating' and other efforts to anthropomorphize a computer program only serve to reinforce the snake oil being sold worldwide.