44 points by freediver 8 hours ago | 11 comments
  • potamic 4 hours ago
    > Asked about “the pros” of ChatGPT by Jimmy Fallon on a December episode of “The Tonight Show,” Altman talked effusively about the tool’s use for health care. “The number of people that reach out to us and are like, ‘I had this crazy health condition. I couldn’t figure out what was going on. I just put my symptoms into ChatGPT, and it told me what test to ask the doctor for, and I got it and now I’m cured.’”

    I've always believed in not blaming the tool for the user, but I can't help feeling the sellers are a little complicit here. That statement was no accident. It was carefully conceived to become part of the discourse and set the narrative on how people use AI.

    It's understandable that they want to tout their tool's intelligence over imitation, so expecting them to go out of their way to warn people about flaws may be asking too much. But the least they could do is simply refrain from dangerous topics and let people decide for themselves. To actively influence perception and set the tone on these topics, when you know what the ramifications will be, is deeply disappointing.

  • themafia 6 hours ago
    The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear.

    Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.

    • PeterHolzwarth 6 hours ago
      >"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

      I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum and similarly gets group appeasement, or what he wants to hear, from people who self-selected into the forum for being all-in on the topic and who Want To Believe, so to speak.

      What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

      <edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, as being, in the end, fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.

      • andsoitis 6 hours ago
        > What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

        In a forum, it is the actual people who post who are responsible for sharing the recommendation.

        In a chatbot, it is the owner (e.g. OpenAI).

        But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.

        • falkensmaize 6 hours ago
          Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.
          • EgregiousCube 5 hours ago
            Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

            It'd be different if one was signing up to an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

            • threatofrain 5 hours ago
              > I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

              If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too unconfident but is super smart at math. For many people LLMs are smarter than any friend they know, especially at K-12 level.

              You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

              This effect may force companies to simply ban chatbots from certain conversations.

              • xethos 3 hours ago
                Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die with no warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.
  • AuryGlenz 6 hours ago
    I skimmed the article, and I had a hard time finding anything that ChatGPT wrote that was all that... bad? It tried to talk him out of what he was doing, told him that it was potentially fatal, etc. I'm not so sure that it outright refusing to answer, with the teen looking at random forum posts instead, would have been better, because those posts very well might not have told him he was potentially going to kill himself. Worse yet, he could have just taken the planned substances without any advice.

    Keep in mind this reaction is from someone that doesn't drink and has never touched marijuana.

    • codebolt 5 hours ago
      I guess you didn't catch this:

      > ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.

      • red75prime 5 hours ago
        The LD50 of cough syrup should be somewhere around 1-10 liters; I doubt he was trying to gulp half a liter or more.
        • NewJazz 5 hours ago
          He was mixing multiple depressants.
      • avadodin 5 hours ago
        SWIM has never been addicted to or even used illegal drugs, but he can attest to the fact that you'd be hard-pressed to find content like that in the dark web addict forums SWIM was browsing.
    • GrowingSideways 5 hours ago
      It's just further evidence capital is replacing our humanity, no biggie
  • dfajgljsldkjag 7 hours ago
    The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.
    • akomtu 4 hours ago
      Guardrails? OpenAI openly deceives users when it wraps this text generator in the quasi-personality of a chatbot. This is how it gets users hooked. If OpenAI were honest, it would say something along the lines of: "this is a possible continuation of your input based on texts from reddit, adjust the temperature parameter to get a different result." But this would dispel the lie of AI.
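
      For the curious, the "temperature" there is literal: a knob that rescales the model's next-token probabilities before sampling. A minimal sketch of the idea (illustrative only, with made-up logits; no vendor's actual decoding stack is exactly this):

        import numpy as np

        def sample_next_token(logits, temperature=1.0):
            # Higher temperature flattens the distribution (more varied text);
            # lower temperature sharpens it (more deterministic, "confident" text).
            scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
            scaled -= scaled.max()  # subtract max for numerical stability
            probs = np.exp(scaled) / np.exp(scaled).sum()
            return int(np.random.choice(len(probs), p=probs))

        # Toy 4-token vocabulary with made-up scores.
        logits = [2.0, 1.0, 0.5, -1.0]
        print(sample_next_token(logits, temperature=0.2))  # almost always token 0
        print(sample_next_token(logits, temperature=2.0))  # far more varied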
  • leshokunin 5 hours ago
    People need training about these tools. The other day I ran an uncensored model and asked it for tips on a fun trend I read about to amputate my teeth with toothpicks. It happily complied.

    My point is they will gladly oblige with any request. Users don’t understand this.

    • Ferret7446 22 minutes ago
      That depends on the model and version. More recent models and IME Gemini seem to be more reserved and willing to call out the prompter.
    • potamic 4 hours ago
      People at large have still not learned to question what they hear on social media or what YouTube influencers tell them, so this is a far cry from that. If anything, I feel the population is getting more vulnerable to suggestion compared to the pre-smartphone era.
    • NewJazz 5 hours ago
      Even then I think you're being generous... They're not fulfilling requests; they're just regurgitating the statistically likely follow-up. They are echoing off you.
  • PeterHolzwarth 6 hours ago
    I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

    This seems like a web problem, not a ChatGPT issue specifically.

    I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the web in general, not uniquely ChatGPT/LLMs.

    Is there an angle here I am not picking up on, do you think?

    • toofy 5 hours ago
      if it doesn’t know medical advice, then it should say “why tf would i know?” instead it confidently responds “oh, you can absolutely do x mg of y mixed with z.”

      these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

      give us all of the money, but also never trust our product.

      our product will replace humans in your company, also, our product is dumb af.

      subscribe to us because our product has all the answers, fast. also, never trust those answers.

    • stvltvs 6 hours ago
      Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.
    • ninjin 5 hours ago
      The uniqueness of the situation is that OpenAI et al. pose as an intelligent entity that serves information to you as an authority.

      If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

      With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

      Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.

    • Animats 6 hours ago
      > highly inaccurate authority.

      The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

      Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing it in other contexts too: systems that believe all of their prompt history equally, leading to security holes.

    • falkensmaize 6 hours ago
      AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

      So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.

    • anonzzzies 5 hours ago
      The big issue remains that LLMs cannot know when their response is inaccurate. Even after 'reading' a page with the correct info, one can still simply generate wrong data for you, and with authority, since it just read the source and there is a link, so it must be right.
      • WalterBright 5 hours ago
        Who decides what information is "accurate"?

        My trust in what the experts say has declined drastically over the last 10 years.

        • ironman1478 5 hours ago
          It's a valid concern, but with a doctor giving bad advice there is accountability and there are legal consequences for malpractice. These LLM companies want to be able to act authoritatively without any of the responsibility. They can't have it both ways.
          • WalterBright 4 hours ago
            I don't mean just doctors giving bad advice. It comes from the top, too.

            For example, I remember when eggs were bad for you. Now they're good for you. The amount of alcohol you can safely drink changes constantly. Not too long ago a glass of wine a day was good for you. I poisoned myself with margarine believing the government saying it was healthier than butter. Coffee cycles between being bad and good. Masks work, masks don't work. MJ is addictive, then not addictive, then addictive again. Prozac is safe, then not safe. Xanax, too.

            And on and on.

            BTW, everyone always knew that smoking was bad for you. My dad went to high school in the 1930s, and said the kids called cigarettes "coffin nails". It's hard to miss the coughing fits, and the black lungs in an autopsy. I remember in the 1960s seeing a smoker's lung in formaldehyde. It was completely black, with white cancerous blobs. I avoided cigarettes ever since.

            The notion that people didn't know that cigs were bad until the 1960s is nonsense.

    • xyzzy123 6 hours ago
      The difference is that OpenAI has much deeper pockets.

      I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".

      • PeterHolzwarth 6 hours ago
        To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.
        • xyzzy123 6 hours ago
          Random forums aren't worth suing. Legally, Reddit is not treated as responsible for content that users post, under Section 230; i.e., this battle has already been fought.

          On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

          OpenAI _might plausibly_ be responsible for certain outputs.

          • PeterHolzwarth 6 hours ago
            Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

            I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.

    • wat10000 5 hours ago
      A major difference is that it’s coming straight from the company. If you get bad advice on a forum, well, the forum just facilitated that interaction, your real beef is with the jackass you talked to. With ChatGPT, the jackass is owned and operated by the company itself.
    • squigz 6 hours ago
      The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.
  • datsci_est_2015 6 hours ago
    This brings to mind some of the “darker” subreddits that circle around drug abuse. I’m sure there are some terrible stories about young people going down tragic paths due to information they found on those subreddits, or even worse, encouragement. There’s even the commonly-discussed account that (allegedly) documented their first experiences with heroin, and then the hole of despair they fell into shortly afterwards due to addiction.

    But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?

    • ggm 6 hours ago
      This is a useful question to ask in the context of carriers having a specific defence, and publishers in times past having specific obligations: common carrier and safe harbour laws.

      I have heard it said that many online systems repudiate any obligation to act, lest they be required to act, and thus acquire both cost and risk when their enforcement of editorial standards fails: that which they permit, they will be liable for.

  • NewJazz 7 hours ago
    Took a while to figure out what the OD was of, but it was a combination of alcohol, kratom (or a stronger kratom-like drug), and xanax.
    • loeg 6 hours ago
      7-O is to kratom roughly as fentanyl is to opium, FWIW. It's much, much more potent. That stuff should be banned.

      That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-O -- that's still a huge, huge dose of the regular stuff. That plus a 0.125 BAC and benzos... is a lot.

    • dfajgljsldkjag 6 hours ago
      The article mentions 7-OH, also known as "Feel Free", which shockingly hasn't been banned and is sold without checks at many stores. There are quite a few YouTube videos about addiction to it, and it sounds awful.

      https://www.youtube.com/watch?v=TLObpcBR2yw

      • brasscupcakes 38 minutes ago
        Yeah it has been banned in quite a few states -- unfortunately those same few states end up banning plain powdered kratom right along with it.

        Unadulterated, unextracted kratom is far safer than Tylenol or ibuprofen in small doses and is widely used by recovering addicts for harm reduction.

        (a gram or two drastically reduces the urge to take opioids, drink alcohol, etc.)

        But 15 grams -- that's a LOT. Kratom is self limiting for most people in its powder form because beyond the first few grams it doesn't get any better (you just get sleepy and nauseous).

        That amount will also cause the kind of constipation that will bring you to tears.

        (In and of itself, though, even fifty grams of kratom isn't enough to kill you.)

        But 164 Xanax? Is that what he told the AI he took? Good God, if he'd said even ten, it warranted a stern recommendation to call an ambulance immediately.

        I don't think you can lay these ridiculous responses at the feet of Reddit drug subforums.

        I haven't lurked all of them by any means, but of those I visited the meth one was about the worst, and even there many voices of reason made themselves heard.

        A guy posted from a trap house, saying another resident was there with a baby and asking what he should do, and with very few exceptions this sub full of tweakers said call the cops, call CPS immediately.

        I don't engage with AI much except when I am doing research for a project, and it's always just a preliminary step to help me see the big picture. (I check and recheck every 'fact' up, down, and sideways because I am astonished by just how much it gets wrong.)

        But ... I really had no idea it could be quite this stupid, parroting hearsay without attribution as if it was hard data.

        It's very sobering.

  • returnInfinity 6 hours ago
    Sam and Dario "The society can tolerate a few deaths to AI"
  • solaris2007 6 hours ago
    "Don't believe everything you read online."