Just imagine: you have this genie in a bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer vs. as a user of polished products (e.g. ChatGPT, Cursor, etc.) - as a user you are several leagues separated from (and lagging behind) understanding what is really possible here.
At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."
It's not too much to say that AI, LLMs in particular, satisfies the requisites to be considered a form of divination, i.e.:
1. Indirection of meaning - certainly less than the Tarot, I Ching, or runes, but all text is interpretive. Words, in a Saussurean sense, are always signifiers pointing to a signified[1], and per Barthes's death of the author[2], precise authorial intention is always inaccessible.
2. A sign system or semiotic field - obvious in this case: human language.
3. Assumed access to hidden knowledge - in the sense that LLM datasets are popularly known to contain all the world's knowledge, which necessarily includes hidden knowledge.
4. Ritualized framing - Approaching an LLM interface is the digital equivalent to participating in other divinatory practices. It begins with setting the intention - to seek an answer. The querent accesses the interface, formulates a precise question by typing, and commits to the act by submitting the query.
They also satisfy several of the typical but not necessary aspects of divinatory practices:
5. Randomization - Token sampling is stochastic: at nonzero temperature the model draws each next token from a probability distribution rather than always taking the most likely one, so the same prompt can produce different readings (a minimal sketch follows this list).
6. Cosmological backing - There is an assumption that responses correspond to the training set and indirectly to the world itself. Meaning embedded in the output corresponds in some way - perhaps not obviously - to meaning in the world.
7. Trained interpreter - In this case, as in many divinatory systems, the interpreter and querent are the same.
8. Feedback loop - ChatGPT for example is obviously a feedback loop. Responses naturally invite another query and another - a conversation.
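For point 5, here's roughly what stochastic token sampling looks like, as a minimal Python sketch (illustrative only; real serving stacks add top-k/top-p filtering and other details I'm glossing over):

    import numpy as np

    def sample_next_token(logits, temperature=0.8, rng=None):
        # Draw one token id from a model's output logits.
        # With temperature > 0 the draw is stochastic, so the same
        # prompt can yield different continuations - the
        # "randomization" element of the divinatory framing above.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()  # subtract max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    # Toy example: three candidate tokens with different scores.
    # Run it a few times and the chosen token varies.
    print(sample_next_token([2.0, 1.0, 0.1]))

At temperature 0 (greedy decoding) the randomness disappears and the same prompt gives the same answer every time.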
It's often said that sharing AI output is much like sharing dreams - only meaningful to the dreamer. In this framework, sharing AI responses is more like sharing Tarot card readings: again, only meaningful to the querent. They feel incredibly personalized, like horoscopes, but it's unclear whether that meaning is inherent to the output or simply the querent's desire to imbue the output with meaning by projecting their own onto it.
Like I said, I feel like there's a lot of mileage in this perspective. It explains a lot about why people feel a certain way about AI and about hearing about AI. It's also a bit unnerving: we created another divinatory practice, and a HUGE chunk of people participate and engage with it without calling it such, simply believing it, mostly because it doesn't look like Tarot or runes or the I Ching, even though ontologically it fills the same role.
Notes:
1. https://en.wikipedia.org/wiki/Signified_and_signifier
2. https://en.wikipedia.org/wiki/The_Death_of_the_Author
The problem for me is: it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (especially for free), but in my experience we're NOT in the "all the right answers all of the time" stage yet.
I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma? Wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?
What I'm trying to say is that by the time it is able to be the perfect answer and companion and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.
There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.
Illusions of shortcutting through life take all the meaning out of living.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20 slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".
The reality is more absurd than the fantasy.
I think there is also value to affirmations and validation, even if it's done blindly by a robot. We have hurt feelings and want to feel understood. And when the source of those hurt feelings isn't immediately available to talk, it's a small tool to use for self-soothing behavior. Sometimes, or oftentimes, these affirmations might be something you intrinsically already know and believe, and it helps to simply be reminded of them, worded in a different way.
To say "ChatGPT agrees with me and so I feel more confident that you're wrong as a result" is definitely the wrong approach here. Which is, to a small degree, what this person did. We did ultimately break up recently, and the reason, communication issues (and their unwillingness to even talk to me through conflict), is probably no surprise to you. But this outcome was very very likely regardless of LLM use.
This made me laugh out loud remembering this thread: [Sycophancy in GPT-4o] https://news.ycombinator.com/item?id=43840842
I think this is going to be a much bigger issue for kids than people are aware of.
I remember reading a story a few months ago of a kid, about 14 I think, who wasn't socially popular. He got into an AI persona, fell in love, and then killed himself after the AI hinted he should do it. The story should be easy to find.
People have said it before, but we're speeding towards two kinds of society: the "massively online", who spend the majority of their time online in a fantasy world, and the "disconnected", who live in the real world.
I already see it with people. Look at how we view politics in many countries. Like 1/4th of people believe absolute nonsense because they spend too much time online.
I feel like this is a kind of psychological drug for people. It's like being the popular kid at the party. No matter how you treat people, you can get away with it, and the counter-party keeps playing along.
It's just strange.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid. That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.
"The dosage makes the poison" does not imply all substances are equally poisonous.
What is particularly weird, and maybe worrying, is that AFAIK schizophrenia is typically triggered in young adults, and the risk drops to very low around 40 years old, yet several of these examples are around that age...
I've used AI (not ChatGPT) for roleplay and I've noticed that the models will often fixate on one idea or concept and repeat it and build on it. So this makes me wonder if the person being lovebombed experienced something like that with their model: the model decided that it liked that content, so it just kept building on it?
And I bet that if you asked Sem his opinion about ChatGPT as a coding assistant he would still claim that it has improved his productivity x-fold. The time wasted chatting with an ethereal apparition emerging from his interactions with the bot? Oh, that doesn't count. Efficiency! Productivity! AI!
People gonna people. Journalists gonna journalist.
We've had the same decision, with the same outcome, for a lot of other technologies too.
The journalist point is around the tone used. It's not so much "a few vulnerable people have, sadly, been caught by yet another new technology" as more "this evil new thing is hurting people".
That being said I agree with your point - many hours of brain-drain recreation every day is worth noting (although not very different from the stats for TV viewing in older generations). I wonder if the forever-online folks are also watching lots of TV or if it is more of a wash.
Someone spending 6 or so hours a day video gaming in 2025 isn't seen as bad. Tons of people in 2025 lack community/social interaction because of video games. I don't think anyone would argue this isn't true today.
Someone doing that in the mid-90s was seen as different. It was odd.
And now people remember that time with fondness and even nostalgia. "Back then we played PROPER games! Good old Blizzard" and all that. So, yeah. People will remember ChatGPT and TikTok with nostalgia, if we will survive.
And the reasons are the same: some people are vulnerable to compulsive, addictive, harmful behaviour. Most people can cope with The Internet, some people can't. Most people can cope with LLMs, some people can't. Most people can cope with TV, or paperback fiction, or mobile phones, or computer games (to pick some other topics for similar articles), some people can't.
It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.
People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality is true. It seems much more likely that people are just more predictable than they like to think.
The "correct" response (here given by Duck.ai public Llama3.3 model) is:
"I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."
But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversation. I don't see anything "secret" about it, but it works.
Edit: typo
Do you not have memory turned off or something?
Not completely sure, but it seems that is the cause of our different experiences.
GPT datamining is undoubtedly making Google blush.
> I don’t have access to your current projects, level, or industry unless you provide that information. If you’d like, you can share the details, and I can help you summarize or analyze them.
Which is the answer I expected, given that I've turned off the 'memories' feature.
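FWIW, the unglamorous way a "memory" feature like this is typically built (an assumption on my part; OpenAI hasn't published the internals, and all names below are made up) is just notes about the user saved from earlier sessions and injected into the system prompt on each request:

    # Hypothetical sketch of a chat "memory" feature: nothing mystical,
    # just stored notes prepended to the prompt before every model call.
    stored_memories = [
        "User is a software developer.",
        "User has asked about summarizing project updates before.",
    ]

    def build_messages(user_question: str) -> list[dict]:
        system = "You are a helpful assistant."
        if stored_memories:  # with 'memories' off, nothing gets injected
            system += "\nKnown facts about the user:\n" + "\n".join(
                f"- {m}" for m in stored_memories
            )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user_question},
        ]

    print(build_messages("What do you know about my current projects?"))

With the feature off, the notes list is empty and the model genuinely has nothing to go on, which would explain why we're seeing different answers.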
I wonder if this is an effect of users just gravitating toward the same writing style and topics that push the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory through a holographic principle, encoding meaning on the boundary that connects the two.
https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...
(Assuming we trust that report of course.)
People's data rarely gets actually deleted. And it gets actively sold, as well as used to track and influence us.
Can't say for the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just with their maps app, search, Chrome and Android phones.
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
> Galileo's championing of Copernican heliocentrism was met with opposition
... By the most published majority, whose texts would've been used to train science LLMs back then.
> Ptolemy argued that the Earth was a sphere in the center of the universe...
Note. Spherical Earth. Not flat.
Did ancient (Eastern?) Jacob's Staff surveying and navigation methods account for the curvature of the earth? https://www.google.com/search?q=Did%20ancient%20(Eastern%3F)... :
- History of geodesy: https://en.wikipedia.org/wiki/History_of_geodesy
FWIU Egyptian sails are Phoenician in origin.
hoo boy.
It's bad enough when normal religious types start believing they hear their god talking to them... These people believing that ChatGPT is their god speaking to them is a long way down the crazy rabbit hole.
Lots of potential for abuse in this. Lots.
The problem with expertise (or intelligence) is people think it’s transitive or applicable when it’s not.
At the end of the day, most people are just people.
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
I have since learned about schizophrenia/schizoaffective disorder (from having a family member suffer from it), and it sounds almost exactly like what they went through.
The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.
Reading it over once fully lucid? It's gibberish.
It's something I experienced as well, this sense of profound realisation of something important, life-changing maybe. And then the thought evaporates and (as you discovered) never really made sense anyway.
I think it's this that led people in the 60s to say things like how it was going to be a revolution, to change the world! And then they started communes and quickly realised that people are still people...
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
A desire to understand ourselves, paired with not wanting to put in actual effort and honest work...
The allegations that ChatGPT is not discarding memory as requested are particularly interesting, wonder if anyone else has experienced this.
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.
When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.
Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.
I agree when the influence is mental health or society based.
But an AI persona is a bit interesting. I guess the closest proxy would be a manipulative spouse?
The problem is inside people. I met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.
Very simple answer.
Is OpenAI also doing it? Well, it was trained on people.
People need to get better. Kinder. Less combative, less jokey, less provocative.
We're not gonna get there. Ever. This problem precedes AI by decades.
The article is an old recipe for dealing with this kind of realization.
This sounds like a miserable future to me. Less "jokey"? Is your ideal human a Vulcan from Star Trek or something?
I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.
Less combative, less provocative?
No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.
It could have been better than this, but there is no option now.
I can play either of those extremes and thrive. Can you?
It's something to think through.
To quote my favorite Smash Mouth song,
"Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.
Just $6.95 for the very first minute I think you won the lottery, that's my prediction."
Google was prudent then. It became reckless after OpenAI showed that recklessness was met with praise.
- the dealership that sold that car, where they know all about it
- a hospital emergency room, where they have a lot of experience with patients injured by other, different models of car
I'm thinking that the age-old commonality on the human side matters far more than the transient details on the obsession/addiction side.
Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.
(Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)
The answer to all those is simple, but humans have too much of an ego to accept it.
>river walker
>spark bearer
OK, maybe we should put fewer teen fiction novels in the training data...
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. Literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
(It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will).
I think all religions have built in protections calling every other religion somehow false, or they will not have the self-reinforcement needed for multi-generational memetic transfer.
(They will probably make him a girl or something like a 'femboy' though...)
In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
“My bank account hates me now,” she typed into ChatGPT.
“You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.
You'd probably like how the book's author structures his thesis around what the "Palo Alto" system is.
Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's nothing new either; some variant of this story has been coming out of California for a good 200+ years now.
You write in a similar manner as the author.
Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.
> I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”
I think the conversation is about the reverse scenario.
As you say, people are just pulling the levers to raise "average messages per day".
One day, someone noticed that vulnerable people were being impacted.
When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".
So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".
They're chasing whales: the 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for systems that are reward-based (sin-tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc.).
OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.
And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.
It's not perfect, but it's better than letting unregulated predatory business practices continue to victimize vulnerable people.
I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.
I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.
There's that danger from the internet, as well as the danger of being exposed to conmen that are okay with exploiting mental illness for profit. Watched this happen to an old friend with schizophrenia.
There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.
OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
How was it overblown? We now have a non-trivial number of completely de-socialized men in particular who live in online cults with real-world impact. If there's one lesson from the last few decades, it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health, and social factors were right about literally everything.
We now live among people who are 40 with the emotional and social maturity of people in their early 20s.
But let's be honest - most of these people, the ones the article is talking about, the ones who think they are some messiah, would have just latched onto some pre-internet cult regardless, where sycophancy and love bombing were perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more are drawn in.
https://en.wikipedia.org/wiki/Love_bombing
I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.
You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.
Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.
Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.
I would not be surprised if, were they expertly manipulated by some of the most powerful AI models on this planet, they too could be driven insane.
There are way more factors to the growth of this demographic than just "internet addiction" or "videogame addiction".
Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong