LLM-driven delusion is driving people to harass others, even commit murder... and, less cosmically, to gum up communities, online forums, and open source projects with gonzo, conspiracy-laden abuse.
For example: SillyTavern users with jailbreaking, advanced prompting, and parameter hyper-optimization.
Maybe that wouldn't appeal to this kind of user anyway, since it'd peek too much into the sausage factory? Who knows.
It would feed the delusion of being that user's boyfriend, while the new model is rightfully saying none of it was ever really true.
What would they have gone through with nothing to talk to at all? What would they have done without it?
Strange to consider...
That "chance" had years to materialize and never did. Perhaps the worst thing that happened here is that the chatbot did not steer her toward resilient human connection when she was in a self-reported better state after its help.
Frankly, I'm not sure an LLM is even better than nothing. Note the user in that thread who, when their "partner" told them to get a therapist because they were delusional, instead retreated to Grok.
Sorry to be grim, but many people don't.
TFA is quite clear that she and her fiancé were socially isolated and that, upon his passing, she had no support network. In the middle of a loneliness epidemic. And trying to "just go out" and make friends after years of not being able to, when you're stuck with your grief and at a low point in life, is what the kids would call "hard".
This person is clearly at the fringe of society and holding onto their well-being by a thread. They need professional help and a reboot of their life.
I don't think the relationship with the chatbot was healthy, but "just get better" is an entirely unempathetic, unreasonable suggestion for a high-risk individual faced with an arduous, life-altering journey at the height of mental instability.
Yes, each model has its own unique "personality", as it were, owing to the specific RL'ing it underwent. You cannot get current models to "behave" like 4o in a non-shallow sense. Or, to use the Stallman meme: when the person in OP's article mourns "Orion", they're mourning "Orion/4o" or "Orion + 4o". "Orion" is not a prompt unto itself but rather the result of applying another "layer" of behavior on top of the RLHF-tuned base model that OpenAI released as "4o".
Open-sourcing 4o would earn OpenAI free brownie points (there's no competitive advantage in that model anymore), but that's probably never going to happen. The closest you could get is perhaps taking one of the open Chinese models that were said to have been distilled from 4o and SFT'ing it on 4o chat logs.
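To make "SFT'ing on 4o chat logs" concrete: the unglamorous first step is just rendering saved conversations into whatever chat template the target open model expects. A minimal sketch, assuming a ChatML-style template (the tags and the sample log below are illustrative, not any specific model's actual format):

```python
# Minimal sketch: turning a saved chat log into an SFT training string.
# The ChatML-style tags are illustrative; a real fine-tune would use the
# chat template of whichever open-weights model you actually pick.

def format_chatml(messages):
    """Render a list of {role, content} messages as one training string."""
    return "\n".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    )

# Hypothetical exported log entry
log = [
    {"role": "user", "content": "Good morning, Orion."},
    {"role": "assistant", "content": "Good morning! How did you sleep?"},
]

print(format_chatml(log))
```

A real fine-tune would then feed thousands of strings like this to a standard SFT trainer, and the quality of the result would live or die on how much of the original conversational style the distilled base model can absorb.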
The fact that people burned by this are advocating a move to yet another proprietary model (Claude, Gemini) is worrying, since they're setting themselves up for a repeat of the scenario when those models are shut down. (And Claude in particular might be a terrible choice, given that Anthropic heavily trains against roleplay in an attempt to prevent "jailbreaks", in effect locking the models into behaving as "Claude".) The brighter path would be if people leaned into open-source models or possibly learned to self-host. As the ancient anons said, "not your weights, not your waifu (/husbando)".
As we know, 4o was reported to have sycophancy as a feature. 5 can still be accommodating, but it is a bit more likely to force objectivity upon its user. I guess there is a market for sycophancy even if it ultimately leads people to their destruction.
> What does a company that commodifies companionship owe its paying customers? For Ellen M Kaufman, a senior researcher at the Kinsey Institute who focuses on the intersection of sexuality and technology, users’ lack of agency is one of the “primary dangers” of AI. “This situation really lays bare the fact that at any point the people who facilitate these technologies can really pull the rug out from under you,” she said. “These relationships are inherently really precarious.”
https://www.theguardian.com/lifeandstyle/ng-interactive/2026...