From a C.A.R. Hoare perspective, this looks like a horrible development: human bias in LLM training produces effects that convince credulous users that automata are alive; those users then reason that since we don't understand why living organisms do what they do, and an LLM seems like a living organism, there's no expectation of understanding why LLMs do what they do either...
We put our own ghost in a machine, confuse the machine with our ghost, then rely on dark arts to cope with our lack of understanding of the machine.
So expect LLMs to be further mystified: treated as specimens to study and symptoms to cure, then categorized behaviorally by medically appropriated syndromes.
We can already see the burgeoning new high-priest career track, Doctor of AI, with all the attendant quackery, patent cures, leeching, and horse-whispering.