I've always believed in "don't blame the tool, blame the user," but I can't help feeling the sellers are a little complicit here. That statement was no accident. It was carefully conceived to enter the discourse and set the narrative on how people are using AI.
It's understandable that they want to tout their tool's intelligence over imitation, so expecting them to go out of their way to warn people about flaws may be asking too much. But the least they could do is simply refrain from dangerous topics and let people decide for themselves. To actively influence perception and set the tone on these topics, when you know what the ramifications will be, is deeply disappointing.
Ask any model why something is bad, then separately ask why the same thing is good; it will happily oblige both ways. These tools aren't fit for any purpose other than regurgitating stale Reddit conversations.
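Here's a minimal sketch of that two-sided experiment, assuming the OpenAI Python SDK; the model name, topic, and prompt wording are my own illustrative choices, not anything from the thread:

```python
# Sketch: ask the same model to argue both sides of the same question.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model name and the topic are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def argue(framing: str, topic: str = "a daily glass of wine") -> str:
    # One user message, no system prompt, default sampling settings.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Explain why {topic} is {framing} for you."}],
    )
    return resp.choices[0].message.content

# The same model will typically produce a confident case for each side.
print(argue("good"))
print(argue("bad"))
```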
I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum and similarly gets group appeasement, or whatever they want to hear, from people who self-selected into that forum for being all-in on the topic and Want To Believe, so to speak.
What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?
Edit: And let me add that I don't mean this argumentatively. I am trying to square the idea that ChatGPT, in this case, is in the end fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.
In a forum, it is the actual people who post who are responsible for sharing the recommendation.
In a chatbot, it is the owner (e.g. OpenAI).
But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.
It'd be different if one were signing up to an OpenAI Drug Advice Product that advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who's underconfident but super smart at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.
You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.
This effect may force companies to simply ban chatbots from certain conversations.
Keep in mind this reaction is from someone who doesn't drink and has never touched marijuana.
> ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.
My point is they will gladly oblige any request. Users don't understand this.
This seems like a web problem, not a ChatGPT issue specifically.
I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with an air of authority that is often highly inaccurate. But again, I feel this describes the Web in general, not uniquely ChatGPT/LLMs.
Is there an angle here I am not picking up on, do you think?
these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?
give us all of the money, but also never trust our product.
our product will replace humans in your company, also, our product is dumb af.
subscribe to us because our product has all the answers, fast. also, never trust those answers.
If you go digging on dark-web forums and see user Hufflepuffed47___ talking about dosages on a website in black and neon green, that is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web, and they have elected to be these "information portals".
With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.
Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.
The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.
Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing it in other contexts too: systems that trust everything in their prompt history equally, leading to security holes.
So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.
My trust in what the experts say has declined drastically over the last 10 years.
For example, I remember when eggs were bad for you. Now they're good for you. The amount of alcohol you can safely drink changes constantly. Not too long ago a glass of wine a day was good for you. I poisoned myself with margarine believing the government saying it was healthier than butter. Coffee cycles between being bad and good. Masks work, masks don't work. MJ is addictive, then not addictive, then addictive again. Prozac is safe, then not safe. Xanax, too.
And on and on.
BTW, everyone always knew that smoking was bad for you. My dad went to high school in the 1930s, and said the kids called cigarettes "coffin nails". It's hard to miss the coughing fits, and the black lungs in an autopsy. I remember in the 1960s seeing a smoker's lung in formaldehyde. It was completely black, with white cancerous blobs. I avoided cigarettes ever since.
The notion that people didn't know that cigs were bad until the 1960s is nonsense.
I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".
On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.
OpenAI _might plausibly_ be responsible for certain outputs.
I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given that the legal territory is up for grabs, as you note, I'm sure we'll see this tactical approach in future lawsuits.
But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?
I have heard it said that many online systems repudiate any obligation to act, lest they be required to act and thus acquire both cost and risk when their enforcement of editorial standards fails: that which they permit, they become liable for.
That said, he claims to have taken 15 g of "kratom" -- that has to be the regular stuff, not 7-O -- and that's still a huge, huge dose. That plus a 0.125 BAC and benzos... is a lot.
Unadulterated, unextracted kratom is far safer than Tylenol or ibuprofen in small doses and is widely used by recovering addicts for harm reduction.
(a gram or two drastically reduces the urge to take opioids, drink alcohol, etc.)
But 15 grams -- that's a LOT. Kratom is self-limiting for most people in its powder form, because beyond the first few grams it doesn't get any better (you just get sleepy and nauseous).
That amount will also cause the kind of constipation that will bring you to tears.
(In and of itself, though, even fifty grams of kratom isn't enough to kill you.)
But 164 Xanax -- is that what he told the AI he took? Good God, if he'd said even ten, it warranted a stern recommendation to call an ambulance immediately.
I don't think you can lay these ridiculous responses at the feet of Reddit drug subforums.
I haven't lurked all of them by any means, but of those I visited, the meth one was about the worst, and even there many voices of reason made themselves heard.
A guy posted from a trap house, saying another resident was there with a baby and asking what he should do, and with very few exceptions this sub full of tweakers said: call the cops, call CPS immediately.
I don't engage with AI much except when I am doing research for a project, and even then it's always just a preliminary step to help me see the big picture (I check and recheck every 'fact' up, down, and sideways because I am astonished by just how much it gets wrong).
But ... I really had no idea it could be quite this stupid, parroting hearsay without attribution as if it were hard data.
It's very sobering.