21 points by 1vuio0pswjnm7 7 hours ago | 6 comments
  • h4kunamata 9 minutes ago
    Read: bad parenting got their son attached to ChatGPT, leading to him killing himself.
  • kbelder 5 hours ago
    I feel like the son should take the blame. There's never been any shortage of bad advice being passed around. He made the credulous decision to take a mix of party drugs and drink, and I can't believe he had never been told that's a stupid idea.

    It's sad and I'm not heartless, but sometimes kids make bad decisions. It's not always somebody else's fault.

    • heavyset_go 29 minutes ago
      If a real person gave them this advice, like a doctor or pharmacist, there would be standing for a lawsuit; it might even be criminal.

      Looking past "drugs bad mkay", the same ChatGPT that gave this advice is just as capable of giving the same, or worse, advice to someone wondering if they can take an allergy medication like Benadryl with their MAOI antidepressant.

      • spoiler 27 minutes ago
        Yes, but if chokemegently420 on some random subreddit gave them that advice, nobody would be the wiser. It's not like ChatGPT is a certified clinician.
        • heavyset_go 17 minutes ago
          Why would they believe that when AI is smarter than any human and is going to replace doctors and themselves?

          If it isn't going to replace doctors, why is ChatGPT giving medical advice at all, especially deadly medical advice?

    • vablings 4 hours ago
      I agree; there are a few simple hard-and-fast rules you can follow to be a safe drug user, and never mixing drugs is paramount. That is one thing I will always explain to my children: mixing drugs is another layer of gambling on top of already being dose-unaware and purity-unaware.
    • ComplexSystems 4 hours ago
      Surely there's room for the view that this is misaligned behavior for ChatGPT to have. I would guess this was during the "sycophantic" phase last year.
    • novemp 4 hours ago
      If it's the son's fault, then AI companies need to stop acting like their products are genius machines. Can't have it both ways.
  • sda2 6 hours ago
    Bad parenting, they should have pointed their kid to Erowid.
    • cultofmetatron 6 hours ago
      > should have pointed their kid to Erowid.

      solid advice. I know several people alive in spite of their efforts because of that site

    • OneDeuxTriSeiGo 5 hours ago
      Seriously. As much as ChatGPT shouldn't be providing advice on medical topics, it will almost certainly be tricked or coaxed into doing so anyway, so instead they should train it to defer to accessible expert sources/publishers like Erowid rather than attempting to extrapolate advice on its own.
  • awakeasleep 5 hours ago
    There's a middle ground of harm reduction between [prohibiting information about drugs] and [encouraging drug use].

    In the past I think the USA has erred on the side of making things so secret that people died from lack of info.

    Here's what the article said:

    """On May 31st, 2025, the day of Nelson’s death, his parents claim ChatGPT “actively coached” their son to combine Kratom — a supplement that can either boost energy or serve as a sedative depending on the dose — and the anti-anxiety medication Xanax. “ChatGPT, otherwise unprompted, specifically suggested that taking a dosage of 0.25- 0.5mg of Xanax would be one of his ‘best moves right now’ to alleviate Kratom-induced nausea,” the lawsuit alleges. Nelson died after consuming a combination of alcohol, Xanax, and Kratom. SFGate first covered Nelson’s story in January."""

    If that's an accurate representation of what happened, and not twisted by the deceased giving the robot weird context to force it to say that, it does seem like a lawsuit is warranted! Of course, we don't know the exact cause of death either. From the bit of research I did just now, people have died from respiratory depression or vomit aspiration after combining kratom/7oh + benzodiazepines, and adding alcohol to the mix makes all of those more likely.

    https://web.archive.org/web/20260512163224/https://www.theve...

  • tencentshill 3 hours ago
    What if someone relied on random number generator software for dosage information? What if the number generator added affirmative, pleasing text to every number it generated? It will be interesting to see where the line is drawn in these cases. Especially risky for OpenAI since they now market it to "support, not replace, medical care". https://openai.com/index/introducing-chatgpt-health/
  • Wowfunhappy 6 hours ago
    If someone published a book advising people to take drugs, would people be filing lawsuits? No—we would agree that people are allowed to write whatever they want, even if what they say is terrible, right?

    I really think these criticisms are misguided. I realize an LLM is not a person—but it does still represent speech, and certainly, any guardrails put in place would themselves be human-authored speech. There are all sorts of social norms which I personally believe, but which I don’t want AI companies to be enforcing on everyone.

    Imagine if ChatGPT had launched 50 years ago, before LGBT acceptance was mainstream. If ChatGPT had told users “it’s okay that you’re a boy and you like other boys, pursue your instincts”, people would have been screaming from the hills that ChatGPT was turning their children gay. They might have tried filing lawsuits. Do we really want to allow that?

    • OneDeuxTriSeiGo 5 hours ago
      > If someone published a book advising people to take drugs, would people be filing lawsuits? No—we would agree that people are allowed to write whatever they want, even if what they say is terrible, right?

      That's not the situation here. The more accurate case would be:

      > If someone without a medical license provided blatantly incorrect medical advice with respect to safe medication usage to an individual via a direct one-on-one discussion, would people be filing lawsuits?

      And the answer is yes. You can be wrong and you can say incorrect things. What you can't do is provide medical advice unless you are a licensed medical professional. You can still speak about medical topics but you have to disclaim your lack of licensure. You have to make it clear that you are not providing medical advice.

      If this was a person doing this it'd be a crime, clear as day. It's called "practicing medicine without a license" and in the US it is a criminal offense in all 50 states, Washington DC, and all 5 inhabited territories. Whether it is a misdemeanor or a felony is dependent on the jurisdiction and the case but it's a crime everywhere in the US.

      • Wowfunhappy 4 hours ago
        But ChatGPT doesn’t claim to have a medical license! You can give people whatever terrible medical advice you want—and people absolutely do—you just can’t claim to be a doctor!
        • OneDeuxTriSeiGo 4 hours ago
          > You can give people whatever terrible medical advice you want, you just can’t claim to be a doctor!

          Fun fact: this is still practicing medicine without a license. You are just less likely to have someone come after you for it.

          If you present yourself in a way that could be misconstrued as medical expertise, then even if you never explicitly claim to be a medical expert, you can still be practicing medicine.

          This is why you see the "This is not to be taken as medical advice"/"I am not a medical professional" verbal condoms all over the place WRT medical discussions. You see the same thing with IANAL for the legal profession as well.

          • Wowfunhappy 4 hours ago
            I don’t think it’s reasonable to interpret the output of ChatGPT as medical advice. Maybe once ChatGPT Health launches, but not now.

            But, that’s not a hill I want to die on. If your position is that ChatGPT needs to have disclaimer text somewhere in the UI saying “ChatGPT is not a doctor and cannot provide medical advice”, I don’t disagree.

            I just don’t think it would make a difference, because as I said, I don’t think anyone reasonably thinks that ChatGPT is a licensed doctor. They just choose to believe ChatGPT anyway, which is their choice in a free society.

    • UncleMeat 2 hours ago
      The issue was not "you should take drugs."

      The thing that killed this person was being advised to take Xanax while having a lot of kratom and alcohol in their system. And yeah, if you published a book telling people that Xanax is a great treatment for alcohol-induced nausea, and people died following that advice, you should go to prison.

    • sdwr 6 hours ago
      There are soft guardrails for "reputable" content. A publishing house has to buy it, stores have to agree to distribute it, and if people are upset they can raise a stink and get the book pulled.

      Technically, people can write whatever they want, but practically you can't walk into a bookstore and read whatever you want.

      • Wowfunhappy 4 hours ago
        You can go on the internet and read whatever you want.
    • tibbydudeza 6 hours ago
      Agreed - people should learn that ChatGPT does not give good advice, but the question is: did OpenAI advertise ChatGPT as a good and reliable source of information on health?
    • alexk307 6 hours ago
      There's a bit of a difference between "enforcing social norms" and telling a user to ingest prescription drugs to combat nausea from the other drugs that it told the user to take.

      Yes, you should be able to write a book with this same information. No, you should not be able to release software that instructs its users to harm themselves. LLMs aren't people, and you shouldn't anthropomorphize human rights onto them.