They say you shouldn't anthropomorphize the lawnmower, and I think that's what's being done with this story.
Probably a worse choice than simply calling it out of control, but not that strange.
It is at least as much of a mistake to reason about these systems the way we reason about a misbehaving compiler as it is to reason about them as if they were conscious beings; at least the latter mistake (which is more or less forced on us by the lack of appropriate language) does not create the illusion that these behaviours are mere bugs and misspecifications, correctable by applying a chipper junior developer to the task.
“Styrka, Plikt, let me put you another case. Suppose that the piggies, who have learned to speak Stark, and whose languages some humans have also learned, suppose that we learned that they had suddenly, without provocation or explanation, tortured to death the xenologer sent to observe them.”
Plikt jumped at the question immediately. “How could we know it was without provocation? What seems innocent to us might be unbearable to them.”
Andrew smiled. “Even so. But the xenologer has done them no harm, has said very little, has cost them nothing-- by any standard we can think of, he is not worthy of painful death. Doesn't the very fact of this incomprehensible murder make the piggies varelse instead of ramen?”
Now it was Styrka who spoke quickly. “Murder is murder. This talk of varelse and ramen is nonsense. If the piggies murder, then they are evil, as the buggers were evil. If the act is evil, then the actor is evil.”
Andrew nodded. “There is our dilemma. There is the problem. Was the act evil, or was it, somehow, to the piggies' understanding at least, good? Are the piggies ramen or varelse? For the moment, Styrka, hold your tongue. I know all the arguments of your Calvinism, but even John Calvin would call your doctrine stupid.”
If indeed we need language for this, it would seem to me that AI is "varelse".

This is part of why I'm bearish on the new hotness of "don't write tools, just write a Markdown skill and let the LLM write its own bash commands". It does work, for the most part, at the cost of giving the model free rein to change its environment and execute arbitrary commands. Approvals exist, sure, but I've never seen anyone manually approve a command past like the 3rd permission dialog.
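The approval-fatigue problem is why some setups try to replace per-command dialogs with a conservative allowlist, prompting only for anything outside it. A minimal sketch of that idea (the `SAFE_COMMANDS` set and `requires_approval` helper are hypothetical illustrations, not any real agent framework's API):

```python
import shlex

# Hypothetical allowlist of read-only commands the agent may run unprompted.
SAFE_COMMANDS = {"ls", "cat", "grep", "head", "git"}

# Shell metacharacters that can chain or redirect into arbitrary commands.
DANGEROUS = (";", "|", "&", ">", "<", "`", "$(")

def requires_approval(command: str) -> bool:
    """Return True if the command falls outside the read-only allowlist."""
    tokens = shlex.split(command)
    if not tokens:
        return True
    # Chaining ("ls; rm -rf /") would defeat a check on the first token alone,
    # so refuse any command containing shell metacharacters outright.
    if any(meta in command for meta in DANGEROUS):
        return True
    return tokens[0] not in SAFE_COMMANDS

print(requires_approval("ls -la"))        # allowlisted read-only command
print(requires_approval("rm -rf /"))      # not on the allowlist
print(requires_approval("ls; rm -rf /"))  # chaining is caught by the metacharacter check
```

Even this sketch shows how leaky the approach is: the allowlist only covers the first token of a shell string, and every metacharacter you forget is a hole. That gap between "approved" and "safe" is exactly the problem.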
If it makes you shudder to imagine allowing an intern to do a thing, you should shudder harder to imagine letting an AI — an intern who can type really fast — do it.
I work in AI. I love using AI. I don’t want to go back to not using AI. But darned if I’m letting anyone, human or AI, just waltz into a prod environment and make random changes.