- Documented record of a months-long set of conversations between the man and the chatbot
- Seemingly, no previous history of mental illness
- The absolutely crazy things the AI encouraged him to do, including trying to kidnap a robot body for the AI
- Eventually encouraging (or at the very least going along with) his plans to kill himself.
Any lawsuit you read, written by the plaintiff's attorneys, will be written with a tabloid level of sensationalism, cherry-picking, and "telling you how to feel". The plaintiffs have requested a jury trial, so on some level the game is "would you rather settle out of court (where hundreds of thousands of dollars are grains of sand at Google's scale), or have a jury read our tabloid and decide while you, the faceless megacorporation, try to swim upstream against it."
The kind of vulnerability that shows when someone is susceptible to influences like this isn't exactly the kind of thing you'd want to widely publicize about someone you loved, you know?
Remember that the idea of the court is to be public and transparent, with judgment coming from the jury, but also for the proceedings to be judged by society as a whole. So if you're gonna sue your kink provider, be prepared for everyone to know how you get off, because after all, the court is owned by, and serves, the public.
That will probably save some jobs, but it's a problem in most other contexts.
I don't have a WSJ subscription, but other coverage of this story (https://www.theguardian.com/technology/2026/mar/04/gemini-ch...) makes it clear that Gemini's intelligence was precisely the problem in this case; a less intelligent chatbot would not have been able to create the detailed, immersive narrative the victim got trapped in.
Dijkstra said, "The question of whether Machines Can Think is about as relevant as the question of whether Submarines Can Swim." Well, we have some very fish-y submarines these days. But the point still holds. Rather than worry about whether these things qualify as "intelligent," look at their actual capabilities. That's what matters.
Funny you mention God and this statement, because believing in any particular God that says they are omnipotent is believing that humans are, since you know humans made this crap up.
Given the opportunity a very large part of the population will quickly absolve themselves of any responsibility and put it on another human/system/made up entity.
What a silly comparison.
People who are vulnerable to this type of thing need caretakers, or to be institutionalized. These aren't just average, everyday people getting taken out by AIs; they have existing, extreme mental illness. They need to have their entire routine curated and managed, preventing them from interacting with things that can result in dangerous outcomes: anything that can trigger obsessive behaviors, paranoid delusions, etc.
They're not just fragile; they're unable to effectively engage with reality on their own. Sometimes the right medication and behavioral training gets them to a point where they can have limited independence, but often they need a lifetime of supervision.
Telenovelas, brand names, celebrities, specific food items, a word - AI is just the latest thing in a world full of phenomena that can utterly consume their reality.
Gavalas seems to have had a psychotic break, was likely susceptible to schizophrenia, or had other conditions, and spiraled out. AI is just a convenient target for lawyers taking advantage of the grieving parents, who want an explanation for what happened that doesn't involve them not recognizing their son's mental breakdown and intervening, or to confront being powerless despite everything they did to intervene.
Sometimes bad things happen. To good people, too.
If he'd used Bic pens to write his plans for mass shootings, should Bic be held responsible? What if he used Microsoft Word to write his suicide note? If he googled things that, in context, painted a picture of planning mass murder and suicide, should Google be held accountable for not notifying the authorities? Why should the use of AI tools be any different?
Google should not be surveilling users and making judgments about legality or ethicality or morality. They shouldn't be intervening without specific warrants and legal oversight by proper authorities within the constraints of due process.
Google isn't responsible for this guy's death because he spiraled out while using Gemini. We don't want Google, or any other AI platform, to take that responsibility or to engage in the necessary invasive surveillance in order to accomplish that. That's absurd and far more evil than the tragedy of one man dying by suicide and using AI through the process.
You don't want Google or OpenAI making mental health diagnoses, judgments about your state of mind, character, or agency, and initiating actions with legal consequences. You don't want Claude or ChatGPT initiating a 5150, or triggering a welfare check, because they decided something is off about the way you're prompting, and they feel legally obligated to go that far because they want to avoid liability.
I hope this case gets tossed, but also that those parents find some sort of peace; it's a terrible situation all around.
I think the scale of the assistance is important. If his Bic pen was encouraging him to mass murder people, then Bic should absolutely be held accountable.
Because none of the tools you mentioned are crazily marketed as intelligent.
You have a valid point, but it has nothing to do with what I said; both our arguments can be true at the same time.
If people are confusing the word intelligence for things like maturity or wisdom, that's not a marketing problem, that's an education and culture problem, and we should be getting people to learn more about what the tools are and how they work. The platforms themselves frequently disclaim reliance on their tools - seek professional guidance, experts, doctors, lawyers, etc. They're not being marketed as substitutes for expert human judgment. In fact, all the AI companies are marketing their platforms as augmentations for humans - insisting you need a human in the loop, to be careful about hallucinations, and so forth.
The implication is that there's some liability for misunderstandings or improper use due to these tools being marketed as intelligent; I'm not sure I see how that could be?
Remember that decades of research in human computer interaction show that framing and interface design strongly influence user perception.
Also, disclaimers do little to counteract this effect. Because LLMs simulate linguistic competence without understanding or truth-tracking mechanisms, marketing them as intelligent risks systematically misleading users about their capabilities and limitations.
I mean, that is typical human ego at play. My dog is intelligent, and there is no coherent definition of intelligence that doesn't cover both humans and dogs. Yet I won't let my dog drive my car.
The difference, of course, is that intelligence is a thing that is done, a subset of computation, and all computation is substrate-independent. You can definitely argue that LLMs are less intelligent than humans; that is obvious, for the time being, and easy to demonstrate. But saying they are not intelligent is simply untrue.
Whether you go by a formalized definition of intelligence like AIXI, a neuroscience definition, or the vernacular one, LLMs are intelligent. The idea that they're stochastic random-number functions that only occasionally produce sensible results is about six years past its expiration date, and if you've been holding on to that idea, it's really time to update your model.
ALICE bots or ELIZA back in the day gave off a "sense" of intelligence. Modern LLMs are more intelligent than the average human.
How do you know that? The concern is precisely that this isn't the case, and LLM roleplay is capable of "hooking" people going through psychologically normal sadness or distress. That's what the family believes happened in this story.
Medical data is reported on a substantial lag in the US, so right now we have no idea of the suicide rate last year, but I would falsifiably predict it's going to be elevated because of stories like those of Mr. Gavalas.
Your clarification on what you meant is 404 not found in your reply, and your “concerning” insult of my personal character is not appreciated.
The boundaries I was referring to are those between "the AI is a product being provided to you" and "the AI is a human-like being Google has matched you with". I'm polite and respectful to AI agents and encourage other people to be, but it's very dangerous to make people start thinking of them as a friend or partner. I'm sure Gemini is perfectly nice to the extent that LLMs can be nice, but you can't be friends with it any more than you can be friends with Alphabet Inc. It's just not the kind of thing to which friendship can validly attach.
I broadly agree with you, but your views on mental illness are not good.
So I asked it if the sycophancy is inherent in the design, or if it just comes from the RLHF. It claimed that it's all about the RLHF, and that the sycophancy is a business decision, a compromise among a variety of competing forces.
Is that right? It would at least mean that this is technically a solvable problem.
I'm more inclined to believe that this case is getting amplified in MSM because it fits an agenda. Like the people who got hurt using black market vapes. Boosting those stories and making it seem like an epidemic supports whatever message they want to send. Which usually involves money somewhere.
I mean, tech in general has been covered negatively in the media since 2015, due to latent agendas: (a) supposed revenue losses blamed on the existence of Google/FB etc., and (b) pressure to shift neutral moderation stances towards the viewpoint preferred by the political party in question.
There is a solution, however: anyone hoping to roleplay with models submits identity verification, an escrow amount, and a recorded statement acknowledging their risky use of the model. But I assume the market for roleplay is not insignificant, and therefore companies hope to avoid putting in such requirements. OpenAI has been moving in that direction, as seen during the 4o debacle.
The guy was probably a paying user, so Google would have already known who he is. He's also 36, so there's no excluding him based on age. And neither the escrow nor the statement really adds much, in my view.