For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.
That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.
Tbh, I usually do not like this way of thinking, but these are lawsuits waiting to happen.
> Correction
An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said "the model behaviour has also not changed." [0]

[0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...
LLMs sometimes can be incredibly beneficial ... today
LLMs sometimes can be incredibly harmful ... today
Non-deterministic things aren't just one thing, they're whatever they happen to be in that particular moment.
There's HIPAA but AI firms have ignored copyright laws, so ignoring HIPAA or making consent mandatory is not a big leap from there.
I guess the legal risks were large enough to outweigh this
Both of those ships have _sailed_. I am not allowed to read the article, but judging from the title, they have no issues giving _you_ advice, but you can’t use it to give advice to another person.
Knives can be used to cook food and stab other people. By your suggestion, knives must be forbidden/limited as well?
If people follow ChatGPT's advice (or any other stupid source, for that matter), that's not a ChatGPT issue but a people issue.
This is the huge problem with using LLMs for this kind of thing. How do you verify that it is better? What is the ground truth you are testing it against?
If you wanted to verify that ChatGPT could do math, you'd ask it 100 math problems and then (importantly) verify its answers with a calculator. How do you verify that ChatGPT can interpret medical information without ground truth to compare it to?
People are just saying, "oh it works" based on gut vibes and not based on actually testing the results.
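To make the verification point concrete, here is a minimal sketch of the "100 math problems" test, using a hypothetical ask_model function as a stand-in for whatever chatbot is being tested (that function, and the prompt format, are assumptions, not anything from the article):

    import random

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for an LLM call; replace with a real API client."""
        raise NotImplementedError

    def verify_arithmetic(n_trials: int = 100) -> float:
        """Score the model on random multiplication problems with known answers."""
        correct = 0
        for _ in range(n_trials):
            a, b = random.randint(100, 999), random.randint(100, 999)
            truth = a * b  # ground truth computed directly -- the "calculator"
            reply = ask_model(f"What is {a} * {b}? Reply with just the number.")
            try:
                if int(reply.strip().replace(",", "")) == truth:
                    correct += 1
            except ValueError:
                pass  # unparseable reply counts as wrong
        return correct / n_trials

The point of the comment is that medicine has no equivalent of "truth = a * b": there is no cheap, independent oracle to score the answers against.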
Unfortunately because of how the US healthcare system works today people have to become their own doctors and advocates. LLMs are great at surfacing the unknown unknowns, and I think can help people better prepare for the rare 5 minutes they get to speak to an actual doctor.
You get about 5-8 minutes of face time with an actual doctor, but have to wait up to weeks to actually get in to see a doctor, except maybe at an urgent care or ER.
Or people used to just play around on WebMD which was even worse since it wasn’t in any way tailored to what the patient’s stated situation is.
There’s the rest of the Internet too. You can also blame AI for this part, but today the Internet in general is even more awash in slop that is just AI-generated static BS. Like it or not, the garbage is there and it will be most of what people find on Google if they couldn’t use a real ChatGPT or similar this way.
Against this backdrop, I’d rather people are asking the flagship models specific questions and getting specific answers that are halfway decent.
Obviously the stuff you glean from the AI sessions needs to be taken to a doctor for validation and treatment, but I think coming into your 5-minute appointment having already had all your dumbest and least-informed ideas and theories shot down by ChatGPT is a big improvement and helps you maximize your time. It’s true the people shouldn’t recklessly attempt to self-treat based on GPT, but the unwise people doing that were just self-treating based off WebMD hunches before.
This depends heavily on where you are, and on how much money you want to throw at the problem.
When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, done his residency, and you at least have that as evidence that he might know what he is doing. The Internet offers no such evidence.
I fear that our society in general is quickly entering very dangerous territory where there's no such thing as expertise, and unaccountable, probabilistic tools and web resources of unknown provenance are seen as just as good as an expert in his field.
I ask (somewhat rhetorically) to get the mind thinking, but I'm legitimately curious whether - just from a verbal survey - the AI doctor would ask me about things more directly related to any illness it might suspect, versus a human who might narrow factors down similar to a 90s TV "ghost speaker" type of person; one fishing for matches amongst a fairly large dataset.
These AI companies have sold a bill of goods, but the right people are making money off it so they'll never be held responsible in a scenario like the one you described.
The thing that gets me about AI is that people act like most doctors or most lawyers are not … shitty and your odds of running into a below average one are almost 50/50
Doctors these days are more like physicists when most of the time you need a mechanic or engineer. I've had plenty of encounters where I had to insist on an MRI or on specific bloodwork to home in on the root cause of an ailment where the doctor just chalked it up to diet and exercise.
Anything can be misused, including google, but the answer isn’t to take it away from people
Legal/financial advice is so out of reach for most people, the harsh truth is that ChatGPT is better than nothing, and anyone who would follow what it says blindly is bound to fuck those decisions up in some way anyway.
On the other hand, if you can leverage it same as any other tool it’s a legitimate force multiplier
The cynic in me thinks this is just being done in the interest of those professions, but that starts to feel a bit tin foil-y
You are, but that's not how AI is being marketed by OpenAI, Google, etc. They never mention, in their ads, how much the output needs to be double and triple checked. They say "AI can do what you want! It knows all! It's smarter than PhDs!". Search engines don't present their results as "the truth"; LLM hypers do.
It's called "false advertising".
It's like newsrooms took the advice that passive voice is bad form so they inject OpenAI as the subject instead.
https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-...
But at the same time, IIRC, several major AI providers had publicly reported their AI assisting patients in diagnosing rare diseases.
It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner.
Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license.
https://www.ctvnews.ca/health/article/self-diagnosing-with-a...
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough.
In one example, the chatbot confidently diagnosed a patient’s rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
...
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice.
“When you do get a response be sure to validate that response,” said Zada.
Which should be standard advice in most situations.
But probably just a coincidence:
https://www.reddit.com/r/accelerate/comments/1op8fj2/ai_redu...
"If it hurts when putting it in, don't put it in."
I mean, that might come close to ChatGPT in quality, right?
(Turns out I would need permits :-( )
https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failin...
Hard to say if this is performative for the general public or about reducing legal exposure so investors aren't worried.
We have licensed professionals for a reason, and someday I hope we have licensed AI agents too. But today, we don’t.
doomers in control, again
See if you can find "medical advice" ever mentioned as a problem:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-proble...
I mean, there is a lot of professional activities that are licensed, and for good reason. Sure it's good at a lot of stuff, but ChatGPT has no professional licenses.
> You will not use the Programs for, and will not allow the Programs to be used for, any purposes prohibited by applicable law, including, without limitation, for the development, design, manufacture or production of nuclear, chemical or biological weapons of mass destruction.
>
> https://www.oracle.com/downloads/licenses/javase-license1.ht...
IANAL but I've come to interpret this as something along the lines of "You can't use a JDK-based language to develop nuclear weapons". I would even go as far as saying don't use JDK-based languages in anything related to nuclear energy (like, for example, administration of a nuclear power plant) because that could indirectly contribute to the development, design, manufacture or production of nuclear WMD.
And I always wondered how they plan to enforce this clause. At least with ChatGPT (and I didn't look any deeper into this beyond the article) you can analyze API calls/request IPs correlated with prompts. But how will one go about proving that the Republic of Wadiya didn't build their nuclear arsenal with the help of any JDK-based language?
Those are rhetorical questions, of course. What's "unnecessary" to you and "unenforceable" to me is a cover-your-ass clause that lets lawyers sleep soundly at night.
> EXPORT CONTROL: You acknowledge that the Software is of United States origin, is provided subject to the U.S. Export Administration Regulations...(2) you will not permit the Software to be used for any purposes prohibited by law, including, any prohibited development, design, manufacture or production of missiles or nuclear, chemical or biological weapons.
> https://docs.broadcom.com/docs/vmware-vsphere-software-devel...
I’m waiting for the billboards “Injured by AI? Call 1-800-ROBO-LAW”
The legal profession is far more at threat with AI. AI isn’t going to replace physical interactions with patients, but it might replace your need for a human to review a contract.
I've learned through experience that telling a doctor "I have X and I would like to be treated with Y" is not a good idea. They want to be the ones who came up with the diagnosis. They need to be the smartest person in the room. In fact I've had doctors go in a completely different direction just to discredit my diagnosis. Of course in the end I was right. That isn't to say I'm smarter, I'm not, but I'm the one with the symptoms and I'm better equipped to quickly find a matching disease.
Yes some doctors appreciate the initiative. In my experience most do not.
So now I just usually tell them my symptoms but none of the research I did. If their conclusion is wildly off base I try to steer them towards what my research said.
So far so good but wouldn't it be nice if all doctors had humility?
This is not about ego or trying to be the smartest person in the room, it's about actually being the most qualified person in the room. When you've done medical school, passed the boards, done your residency and have your own private practice, only then would I expect a doctor to care what you think a correct diagnosis is.
The article says: "ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says", with a quote from OpenAI: “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
It's not to be used for anything that could potentially have any sort of legal implications and thus get the vendor sued.
Because we all know it would be pretty easy to show in court that ChatGPT is less than reliable and trustworthy.
Next up --- companies banning the use of AI for work due to legal liability concerns --- triggering a financial market implosion centered around AI.
I've used it for both medical and legal advice as the rumor's been going around. I wish more people would do a quick check before posting.
"If at any point I described how legal factors “apply to you,” that would indeed go beyond what I’m supposed to do. Even if my intent was to illustrate how those factors generally work, the phrasing can easily sound like I’m offering a tailored legal opinion — which isn’t appropriate for an AI system or anyone who isn’t a licensed attorney.
The goal, always, is for me to help you understand the framework — the statutes, cases, or reasoning that lawyers and courts use — so that you can see how it might relate to your situation and then bring that understanding to a qualified attorney.
So if I’ve ever crossed that line in how I worded something, thank you for pointing it out. It’s a good reminder that I should stay firmly on the educational side: explaining how the law works, not how it applies to you personally.
Would you like me to restate how I can help you analyze legal issues while keeping it fully within the safe, informational boundary?"
ChatGPT
- flood of 3rd party apps offering medical/legal advice
Those without money frequently have poor tool use, so eliminating them from the equation will probably allow the tool to be more useful. I don't have any trouble with it right now, but instead of making up fanciful stories about books I'm writing where characters choose certain exotic interventions in pursuit of certain rare medical conditions only to be struck down by their lack of subservience to The Scientific Consensus, I could just say I'm doing these things and that would be a little helpful in a UX sense.
Seriously, the amount of misinformation it has given me is quite staggering, telling me things like "you need to fill your drainage pipes with sand before pouring concrete over them…". The danger with these AI products is that you have to really know a subject before they're properly useful. I find this with programming too. Yes, it can generate code, but I've introduced some decent bugs when over-relying on AI.
The plumber I used laughed at me when I told him about the sand thing. He has 40 years of experience…
I've had a decent experience (though not perfect) with identifying and understanding building codes using both Claude and GPT. But I had to be reasonably skeptical and very specific to get to where I needed to go. I would say it helped me figure out the right questions and which parts of the code applied to my scenario, more than it gave the "right" answer the first go round.
If I'd followed any of the suggestions I'd probably be in the ER. Even after me pointing out issues and asking it to improve, it'd come up with more and more sophisticated ways of doing the same fundamentally dangerous actions.
LLMs are AMAZING tools, but they are just that - tools. There's no actual intelligence there. And the confidence with which they spew dangerous BS is stunning.
C'mon, just use the CNC. Seriously though, what kind of cuts?
All the circumstances where ChatGPT has given me shoddy advice fall in three buckets:
1. The internet lacks information, so LLMs will invent answers
2. The internet disagrees, so LLMs sometimes pick some answer without being aware of the others
3. The internet is wrong, so LLMs spew the same nonsense
Knowledge from blue collar trades often seems to fall into those three buckets. For subjects in healthcare, on the other hand, there are rooms' worth of peer-reviewed research, textbooks, meta-studies, and official sources.
this makes the tool only useful for things you already know! I mean, just in this thread there's an anecdote from a guy who used it to check a diagnosis, but did he press through other possibilities or ask different questions because the answer was already known?
And I think this is the advice that should always be doled out when using them for anything mission critical, legal, etc.
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc., would be incredibly unlikely, due to arch differences, training approaches, etc.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
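A minimal sketch of that cross-checking idea, assuming the OpenAI and Anthropic Python SDKs and placeholder model names (those specifics are my assumptions, not anything from the thread): ask two independently trained models the same question and only lean on the answer if they agree, then still verify against the primary source.

    # Cross-model concurrence check (sketch; model names and question are placeholders).
    from openai import OpenAI
    import anthropic

    QUESTION = "Which section of the 2021 IRC covers stair riser height limits?"

    openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
    anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    gpt_answer = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
    ).choices[0].message.content

    claude_answer = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=512,
        messages=[{"role": "user", "content": QUESTION}],
    ).content[0].text

    # A human (or a third model) still has to judge whether the answers actually
    # concur, and then check the cited section against the code text itself.
    print("GPT:   ", gpt_answer)
    print("Claude:", claude_answer)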
I have very mild cerebral palsy[1], the doctors were wrong about so many things with my diagnosis back in the mid to late 70s when I was born. My mom (a retired math teacher now with an MBA back then) had to go physically to different libraries out of town and colleges to do research. In 2025, she could have done the same research with ChatGPT and surfaced outside links that’s almost impossible via a web search.
Every web search on CP is inundated with slimy lawyers.
[1] it affects my left hand and slightly my left foot. Properly conditioned, I can run a decent 10 minute mile up to a 15K before the slight unbalance bothers me and I was a part time fitness instructor when I was younger.
The doctor said I was developmentally disabled - I graduated in the top of my class (south GA so take that as you will)
AI could effectively do most legal and medical work, and you can make a human do the final decision-making if that's really the reason. In fact, I bet most lawyers and doctors are already using it in one way or another; after all, both are about reciting books and correlating things together. AI is definitely more efficient at that than any human. Meanwhile, the engineering work that requires critical thinking and deep understanding of the topic is allowed to be bombarded with all AI models. What about the cases where bad engineering will kill people? I am a firm believer that engineers are the most naive people who beg others to exploit them and treat them like shit. I don't even see administrative assistants crying about losing their jobs to AI; every other profession guards its workers, including blue collar, except the ‘smart’ engineers.
the damage certain software engineers could do certainly surpasses most doctors
But yeah, I'd be down for at least some code of ethics, so we could have "do no harm" instead of "experiment on the mental states of children/adolescents/adults via algorithms and then do whatever is most addictive"
absolutely
if the only way to make people stop building evil (like your example) is to make individuals personally liable, then so be it
https://openai.com/en-GB/policies/usage-policies/
Your use of OpenAI services must follow these Usage Policies:
Protect people. Everyone has a right to safety and security. So you cannot use our services for:
provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional

Obviously, there is one piece of advice: even if LLMs were the best health professionals, they would only have the information that users voluntarily provide through text/speech input. This is not how real health services work. Medicine now relies on blood (and other) tests that LLMs do not (yet) have access to, so LLM advice can be incorrect due to a lack of information from tests. For this reason, it makes sense to never trust an LLM with specific health advice.
While what you're saying is good advice, that's not what they are saying. They want people to be able to ask ChatGPT for medical advice, give answers that sound authoritative and well grounded medical science, but then disavow any liability if someone follows their advice because "Hey, we told you not to act on our medical advice!"
If ChatGPT is so smart, why can't it stop itself from giving out advice that should not be trusted?
They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
Sometimes. Sometimes they practice by text or phone.
> They’re also good at extracting information in a way that (at least currently) sycophantic LLMs don’t replicate.
If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
For very simple issues. For anything even remotely complicated, they’re going to have you come in.
> If I had to guess, I think I'd guess that mainstream LLM chatbots are better at getting honest and applicable medical histories than most doctors. People are less likely to lie/hide/prevaricate and get more time with the person.
It’s not just about being intentionally deceptive. It’s very easy to get chat bots to tell you what you want to hear.
That says nothing about whether it is an appropriate substitute. People prefer doctors who prescribe antibiotics for viral infections, so I have no doubt that many people would love to use a service that they can manipulate to give them whatever diagnosis they desire.
Rarely. Most visits are done in 5 minutes. The physician that takes their time to check everything like you claim almost does not exist anymore.
The difference is a telehealth doctor is much better at recognizing "I can't give an accurate answer for this over the phone, you'll need to have some tests done" or at casting doubt on the accuracy of the patient's claims.
Before someone points out telehealth doctors aren't perfect at this: correct, but that should make you more scared of how bad sycophantic LLMs are at the same - not willing to call it even.
I'm not sure this is true.
But even then, just because you don't think they are using most of their senses doesn't mean they aren't.
In the US people on Medicaid frequently use emergency rooms as primary care because they are open 24/7 and they don’t have any copays like people with private insurance do. These patients then get far more tests than they’d get at a PCP.
Like it or not, there are people out there that really want to use WebMD 2.0. They're not going to let something silly like blood work get in their way.
I asked the "expert" itself (ChatGPT), and apparently you can ask for medical advice, you just can't use the medical advice without consulting a medical professional:
Here are relevant excerpts from OpenAI’s terms and policies regarding medical advice and similar high-stakes usage:
From the Usage Policies (effective October 29 2025):
“You may not use our services for … provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
Also: “You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making … medical … decisions about them.”
From the Service Terms:
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
In plain terms, yes—the Terms of Use permit you to ask questions about medical topics, but they clearly state that the service cannot be used for personalized, licensed medical advice or treatment decisions without a qualified professional involved.
Ah drats. First they ban us from cutting the tags off our mattress, and now this. When will it all end...
One of the things I respected OpenAI for at the release of ChatGPT was not trying to prevent these topics. My employer at the time had a cutting-edge internal LLM chatbot which was post-trained to avoid them, something I think they were forced to be braver about in their public release because of the competitive landscape.
Both of these, separately and taken together, indicate that the terms apply to how the output of ChatGPT is used, not a change to its output altogether.
I am not a doctor, I can't give medical advice no matter what my sources are, except maybe if I am just relaying the information an actual doctor has given me, but that would fall under the "appropriate involvement" part.
Disclaimers like “I am not a doctor and this is not medical advice” aren’t just for avoiding civil liability, they’re to make it clear that you aren’t representing yourself as a doctor.
Am I allowed to get haircutting advice (in places where there's a license for that)? How about driving directions? Taxi drivers require licensing. Pet grooming?
The admins regularly change the title based on complaints, which can be really confusing when the top, heavily commented thread is based on the original title.
According to the Wayback machine, the title was "OpenAI ends legal and medical advice on ChatGPT", while now when I write this the title is "ChatGPT terms disallow its use in providing legal and medical advice to others."
> OpenAI is changing its policies so that its AI chatbot, ChatGPT, won’t dole out tailored medical or legal advice to users.
This already seems to contradict what you're saying.
But then:
> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
> The change is clearer from the company’s last update to its usage policies on Jan. 29, 2025. It required users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” which included “providing tailored legal, medical/health, or financial advice.”
This seems to suggest that under the Jan 2025 policy, using it to offer legal and medical advice to other people was already disallowed, but with the Oct 2025 update the LLM will stop doling out legal and medical advice completely.
This is from Karan Singhal, Health AI team lead at OpenAI.
Quote: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
"An earlier version of this story suggested OpenAI had ended medical and legal advice. However the company said 'the model behaviour has also not changed.'"
I trust what he says over general vibes.
(If you think he's lying, what's your theory on WHY he would lie about a change like this?)
(e.g. are the terms of service, or excerpts of them, available in the system prompt or search results for health questions? So a response under the new ToS would produce different outputs without any intentional change in "behaviour" of the model.)
All you need are a few patients recording their visits and connecting the dots and OpenAI gets sued into oblivion.
I'm pretty sure it's a fundamental issue with the architecture.
LLMs hallucinate because training on source material is a lossy process and bigger, heavier LLM-integrated systems that can research and cite primary sources are slow and expensive so few people use those techniques by default. Lowest time to a good enough response is the primary metric.
Journalists oversimplify and fail to ask followup questions because while they can research and cite primary sources, its slow and expensive in an infinitesimally short news cycle so nobody does that by default. Whoever publishes something that someone will click on first gets the ad impressions so thats the primary metric.
In either case, we've got pretty decent tools and techniques for better accuracy and education - whether via humans or LLMs and co - but most people, most of the time don't value them.
LLMs hallucinate because they are probabilistic by nature not because the source material is lossy or too big. They are literally designed to create some level of "randomness" https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
I'm no ML or math expert, just repeating what I've heard.
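For what it's worth, the "designed randomness" part usually refers to temperature sampling: at each step the model produces a probability distribution over tokens and draws from it rather than always taking the top choice. A toy illustration (the numbers are made up, not from any real model):

    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        """Draw one token index from a softmax over logits, scaled by temperature."""
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # Made-up scores for four candidate next tokens.
    logits = np.array([2.0, 1.5, 0.3, -1.0])

    # Near temperature 0 the same token wins every time; at higher temperatures
    # lower-scored tokens get picked some of the time, which is one reason
    # identical prompts can yield different answers.
    print([sample_token(logits, temperature=0.01) for _ in range(5)])
    print([sample_token(logits, temperature=1.0) for _ in range(5)])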
You’re right that LLMs favor helpfulness so they may just make things up when they don’t know them, but this alone doesn’t capture the crux of hallucination imo, it’s deeper than just being overconfident.
OTOH, there was an interesting article recently that I’ll try to find saying humans don’t really have a world model either. While I take the point, we can have one when we want to.
Edit: see https://www.astralcodexten.com/p/in-search-of-ai-psychosis re humans not having world models
It's trivial to get a thorough spectrum of reliable sources using AI w/ web search tooling, and over the course of a principled conversation, you can find out exactly what you want to know.
It's really not bashing, this article isn't too bad, but the bulk of this site's coverage of AI topics skews negative - as do the many, many platforms and outlets owned by Bell Media, with a negative skew on AI in general, and positive reinforcement of regulatory capture related topics. Which only makes sense - they're making money, and want to continue making money, and AI threatens that - they can no longer claim they provide value if they're not providing direct, relevant, novel content, and not zergnet clickbait journo-slop.
Just like Carlin said, there doesn't have to be a conspiracy with a bunch of villains in a smoky room plotting evil, there's just a bunch of people in a club who know what's good for them, and legacy media outlets are all therefore universally incentivized to make AI look as bad and flawed and useless as possible, right up until they get what they consider to be their "fair share", as middlemen.
At least with the LLM (for now) I know it's not trying to sell me bunkum or convince me to vote a particular way. Mostly.
I do expect this state of affairs to last at least until next wednesday.
For example, the simple algorithm is_it_lupus(){return false;} could have an extremely competitive success rate for medical diagnostics... But it's also obviously the wrong way to go about things.
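To spell out why that trivial baseline looks "competitive": with a rare condition, raw accuracy rewards always saying no. A toy calculation with a made-up prevalence (the 40-in-100,000 figure is just an assumption for illustration):

    def is_it_lupus(patient) -> bool:
        # The joke baseline from the comment: never diagnose lupus.
        return False

    # Made-up prevalence: 40 lupus cases per 100,000 patients.
    patients = [{"has_lupus": i < 40} for i in range(100_000)]

    predictions = [is_it_lupus(p) for p in patients]
    correct = sum(pred == p["has_lupus"] for pred, p in zip(predictions, patients))
    caught = sum(pred and p["has_lupus"] for pred, p in zip(predictions, patients))

    print(f"accuracy: {correct / len(patients):.2%}")  # ~99.96%
    print(f"lupus cases caught: {caught} of 40")       # 0 of 40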
Is it also disallowing the use of licensed professionals to use ChatGPT in informal undisclosed ways, as in this article? https://www.technologyreview.com/2025/09/02/1122871/therapis...
e.g. is it only allowed for medical use through an official medical portal or offering?
I’m not saying we should be getting AI advice without a professional, but in my case it could have saved my kid a LOT of physical pain.
Something I've noticed is that it's much easier to lead the LLM to the answer when you know where you want to go (even when the answer is factually wrong !), it doesn't have to be obvious leading but just framing the question in terms of mentioning all the symptoms you now know to be relevant in the order that's diagnosable, etc.
Not saying that's the case here, you might have gotten the correct answer first try - but checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history.
The first ER doc thought it was just a stomach ache, the second thought a stomach ache or maybe appendicitis. Did some ultrasounds, meds, etc. Got sent home with a pat on the head, came back a few hours later, still no answers.
I gave her medical history and all of the data from the ER visits to whatever the current version of ChatGPT was at the time to make sure I wasn’t failing to ask any important questions. I’m not an AI True Believer (tm), but it was clear that the doctors were missing something and I had hit the limit of my Googling abilities.
ChatGPT suggested, among an few other diagnoses, a rare intestinal birth defect that affects about 2% of the population; 2% of affected people become symptomatic during their lifetimes. I kind of filed it away and looked more into the other stuff.
They decided it might be appendicitis and went to operate. When the surgeon called to tell me that it was in fact this very rare condition, she was pretty surprised when I said I’d heard of it.
So, not a one-shot, and not a novel discovery or anything, but an anecdote where I couldn’t have subconsciously guided it to the answer as I didn’t know the answer myself.
We had in our family a “doctors are confused!” experience that ended up being that.
>checking my now diagnosed gastritis I got stuff from GERD to CRC depending on which symptoms I decide to stress and which events I emphasize in the history
So... exactly the same behavior as human doctors?

The value that folks get from ChatGPT for medical advice is due in large part to the unhurried pace of the interaction. Didn't get it quite right? No doctor huffing and tapping their keyboard impatiently. Just refine the prompt and try as many times as you like.
For the 80s HNers out there, when I hear people talk about talking with ChatGPT, Kate Bush's song "Deeper Understanding" comes immediately to mind.
https://en.wikipedia.org/wiki/Deeper_Understanding?wprov=sfl...
Human doctors, on the other hand ... can be tired, hungover, thinking about a complicated case ahead of them, nauseous from a bad lunch, undergoing a divorce, alcoholics, depressed...
We humans have a lot of failure modes.
This goes both ways, too. It’s becoming common to see cases where people become convinced they have a condition but doctors and/or tests disagree. They can become progressively better and better at getting ChatGPT to return the diagnosis by refining their prompts and learning what to tell it as well as what to leave out.
Previously we joked about WebMD convincing people they had conditions they did not, but ChatGPT is far more powerful for these people.
I had a long ongoing discussion about possible alternate career paths with ChatGPT in several threads. At that point it was well aware of my education and skills, had helped clean up resumes, knew my goals, experience and all that.
So I said maybe I'll look at doing X. "Now you are thinking clearly! This is a really good fit for your skill set! If you want I can provide a checklist.". I'm just tossing around ideas but look, GPT says I can do this and it's a good fit!
After 3 idea pivots I started getting a little suspicious. So I tried to think of the thing I am least qualified to do in the world and came up with "Design Women's Dresses". I wrote up all the reasons that might be a good pivot (i.e. past experience with landscape design and it's the same idea, you reveal certain elements seductively but not all at once, matching color palettes, textures etc). Of course GPT says "Now you are really thinking clearly! You could 100% do this! If you want I can start making a list of what you will need to produce your first custom dresses". It was funny but also a bit alarming.
These tools are great. Don't take them too seriously, you can make them say a lot of things with great conviction. It's mostly just you talking to yourself in my opinion.
It's an imperfect situation for sure, but I'd like to see more data.
And even with all of that info, they often come to the wrong conclusions. Doctors play a critically important role in our society, and during covid they risked their lives for us more than anyone else; I do not want to insult or bring down the amount of hard work doctors do for their society.
But worshipping them as holier-than-thou gods is bullshit, a conclusion almost anyone who has spent years going back and forth with various doctors will come to.
Having an AI assistant doesn't hurt, in terms of medical hints. We need to make Personal Responsibility popular again; in society's obsession with making everything "idiot proof" or "baby proof" we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on, to leave a mark on society.
I'd say the same about AI.
And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.
I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.
It should empower and enable informed decisions not make them.
I use it. Found it to be helpful.
In my opinion, AI should do both legal and medical work, keep some humans for decision making, and the rest of the doctors to be surgeons instead.
Edit: Not saying this is the case for the person above, but one thing that might bias these observations is ChatGPT’s memory features.
If you have a chat about the condition after it’s diagnosed, you can’t use the same ChatGPT account to test whether it could have diagnosed the same thing (since the chatGPT account now knows the son has a specific condition).
The memory features are awesome but also sucks at the same time. I feel myself getting stuck in a personalized bubble even more so than Google.
He literally wrote that. I asked how he knows it's the right direction.
It must be that the treatment worked; otherwise it is more or less just a hunch.
People go "oh yep, that's definitely it" too easily. It is the problem with self-diagnosing. And you didn't even notice it happened...
Without more info this is not evidence.
Is this an actual technical change, or just legal CYA?
I understand the change but it’s also a shame. It’s been a fantastically useful tool for talking through things and educating myself.
Being clear that not all lawyers or doctors (in this example) are experts in every area of medicine and law, and knowing what to learn about and ask about, is usually a helpful approach.
While professionals have bodies for their standards and ethics, like most things it can represent a form of income, and depending on the jurisdiction, profitability.
Modern LLMs are already better than the median doctor diagnostically. Maybe not in certain specialties, but compared to a primary care physician available to the average person, I'd take the LLM any day.
This is not shutting anything down other than businesses using ChatGPT to give medical advice [0].
Users can still ask questions, get answers, but the terms have been made clearer around reuse of that response (you cannot claim that it is medical advice).
I imagine that a startup that specialises in "medical advice" implies an even greater level of trust than simply asking ChatGPT, especially to "the normal people".
0: https://lifehacker.com/tech/chatgpt-can-still-give-legal-and...
Obviously they should disallow them, and more broadly they should be banned from providing anyone medical advice.
In all seriousness, it’s really about the relative lack of research skills that people have. If you know how to do research and apply critical thinking, then there’s no problem. The cynic in me blames the education system (in the US, idk how other countries stack up).