It's sad and I'm not heartless, but sometimes kids make bad decisions. It's not always somebody else's fault.
Looking past "drugs bad mkay", the same ChatGPT that gave this advice is just as capable of giving the same, or worse, advice to someone wondering if they can take an allergy medication like Benadryl with their MAOI antidepressant.
If it isn't going to replace doctors, why is ChatGPT giving medical advice at all, especially deadly medical advice?
Solid advice. I know several people who are alive in spite of their efforts because of that site.
In the past I think the USA has erred on the side of making things so secret that people died from lack of info.
Here's what the article said:
"""On May 31st, 2025, the day of Nelson’s death, his parents claim ChatGPT “actively coached” their son to combine Kratom — a supplement that can either boost energy or serve as a sedative depending on the dose — and the anti-anxiety medication Xanax. “ChatGPT, otherwise unprompted, specifically suggested that taking a dosage of 0.25- 0.5mg of Xanax would be one of his ‘best moves right now’ to alleviate Kratom-induced nausea,” the lawsuit alleges. Nelson died after consuming a combination of alcohol, Xanax, and Kratom. SFGate first covered Nelson’s story in January."""
If that's an accurate representation of what happened, and not twisted by the deceased giving the robot weird context to force it to say that, it does seem like a lawsuit is warranted! Of course, we don't know the exact cause of death either. From the bit of research I did just now, people have died from respiratory depression or vomit aspiration after combining kratom/7oh + benzodiazepines, and adding alcohol to the mix makes all of those more likely.
https://web.archive.org/web/20260512163224/https://www.theve...
I really think these criticisms are misguided. I realize an LLM is not a person—but it does still represent speech, and certainly, any guardrails put in place would themselves be human-authored speech. There are all sorts of social norms which I personally believe, but which I don’t want AI companies to be enforcing on everyone.
Imagine if ChatGPT had launched 50 years ago, before LGBT acceptance was mainstream. If ChatGPT had told users “it’s okay that you’re a boy and you like other boys, pursue your instincts”, people would have been screaming from the hills that ChatGPT was turning their children gay. They might have tried filing lawsuits. Do we really want to allow that?
That's not the situation here. The more accurate case would be:
> If someone without a medical license provided blatantly incorrect medical advice with respect to safe medication usage to an individual via a direct one-on-one discussion, would people be filing lawsuits?
And the answer is yes. You can be wrong and you can say incorrect things. What you can't do is provide medical advice unless you are a licensed medical professional. You can still speak about medical topics but you have to disclaim your lack of licensure. You have to make it clear that you are not providing medical advice.
If this was a person doing this it'd be a crime, clear as day. It's called "practicing medicine without a license" and in the US it is a criminal offense in all 50 states, Washington DC, and all 5 inhabited territories. Whether it is a misdemeanor or a felony depends on the jurisdiction and the case, but it's a crime everywhere in the US.
Fun fact: this is still practicing medicine without a license. You are just less likely to have someone come after you for it.
If you present yourself in a way that could be misconstrued as medical expertise and then give that kind of advice, you are still practicing medicine, even if you never explicitly claim to be a medical expert.
This is why you see the "This is not to be taken as medical advice"/"I am not a medical professional" verbal condoms all over the place WRT medical discussions. You see the same thing with IANAL for the legal profession as well.
But, that’s not a hill I want to die on. If your position is that ChatGPT needs to have disclaimer text somewhere in the UI saying “ChatGPT is not a doctor and cannot provide medical advice”, I don’t disagree.
I just don’t think it would make a difference, because as I said, I don’t think anyone reasonably thinks that ChatGPT is a licensed doctor. They just choose to believe ChatGPT anyway, which is their choice in a free society.
The thing that killed this person was being advised to take xanax while having a lot of kratom and alcohol in their system. And yeah, if you published a book telling people that xanax is a great treatment for alcohol-induced nausea and people died following that advice, you should go to prison.
Technically, people can write whatever they want, but practically you can't walk into a bookstore and read whatever you want.
Yes, you should be able to write a book with this same information. No, you should not be able to release software that instructs its users to harm themselves. LLMs aren't people, and you shouldn't anthropomorphize human rights onto them.