If I hand wrote some notes in a notebook or diary, I wouldn't have to hand them over, as I understand it, even with no lawyer in the mix. Same if I wrote some notes in a text file on my computer.
Leaving AI aside, what in particular makes this different from using any other cloud-based software? Does writing a Google Doc to gather my thoughts or a draft email in Gmail constitute "revealing information from a lawyer to a third party"?
What if Google have enabled AI-features on these? Feels like this area really needs clarity for users rather than waiting for courts to rule on it.
Absolutely wrong in the U.S. The police can't just break into your home and demand it, but a judge can 100% mandate discovery or a subpoena if there is reason to believe that evidence exists which is relevant to the case.
The 4th amendment prohibits UNREASONABLE search and seizure, and we let judges make that determination. You never have absolute privacy rights.
I was on a jury recently where we had to swap out judges in the last couple days of the trial. The reason was that the judge had been assigned another case where the defendant had not waived his right to a speedy trial. The judge wanted to finish his existing case first, the defense lawyers said "You can't do that", the judge looked it up and found that indeed they were right, so off he went to start the new case and handed off the existing one to a colleague. In my experience judges really do take the law seriously - that's how they get to be judges.
The value of that configuration has just been greatly magnified.
Perhaps this could be gleaned from your ISP's records, but it would be far more difficult than determining the existence of an account at Anthropic.
There is some protection of personal private documents for civil cases. But for a criminal case, there is no 4th or 5th amendment protection for stuff you wrote in your diary.
First off, the Fifth Amendment right to not self-incriminate is rather narrower than you might expect. With regard to document production, it only privileges you from having to produce documents if the act of producing those documents would in effect incriminate you. So if you tell people "I've got a diary where I've been keeping track of all the crimes I've committed..." the government can force you to turn over that diary.
Second, the default assumption whenever you send something to another person is that it's unprivileged communication. IANAL, but even using cloud storage for things I'd want to remain privileged is something I'd want to ask a lawyer about before relying on. Although that's also as much because the default privacy policy of most services is "fuck you."
Which is what happened here. Claude's privacy policy says that Anthropic reserves the right to share your chats with third parties for various reasons, which means you have no reasonable expectation of privacy in those communications in the first place and automatically defeats any other confidential privileges. What happened is therefore little different from the defendant texting his attorney's responses to his friends, which is a fairly time-worn way of defeating attorney-client privilege.
Seems an opportune time to remember that every day is STFU Friday. And, to quote The Wire, is you taking notes on a criminal fucking conspiracy?
And consider local LLM logs no different from your txt file or command history on your computer. Could still be requested for discovery.
Is that true? I would expect that any notes I have in any form could be requested during discovery (attorney-client privilege being one of the few exceptions, and narrower than people assume).
The only reason this ruling is even remotely interesting is because people don't understand computer systems, and chatbots feel different. For the technologically minded, it should be pretty obvious that typing into a chatbot is no different from typing into a Google Doc, and that the data in both can be available to the legal system without the user's involvement or consent. But most people aren't technologically minded and may not have realized that all of their data is being saved and made available like that.
Would a judge be able to demand the attorney hand over written notes from his clients?
I doubt it.
I understand that other countries handle this differently and might have more privacy restrictions, but this seems to come down to a judge asking a neutral third-party to testify to what they know about a subject and them responding with search history/chat logs/location pings. I guess if you want to do crimes then you need to stop intentionally revealing incriminating evidence to unbound third-parties.
'No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote'.
A local model or Venice are still platforms, just local.
Nerd smarts seldom survive real world smarts. Reminds me of this: https://xkcd.com/538/
For 1), his reasoning shows how intelligent, well-read humans view AI which is quite different from the attitudes seen on HN. Rakoff calls the chats "Claude searches" which while it may sound ridiculous (what is this, Perplexity?) is just how some people must view this crazy new thing: another Google. You type stuff in and get results out.
2) Rakoff goes through the 3 elements of attorney client privilege in US law (communications between attorney and client, intended to be and kept confidential, and for the purpose of legal advice). It's obvious the Claude chats fail two of them and he goes over why.
3) A lot of people bring up the point that if you use Google Docs to transcribe privileged information, is that the same, since you send your data to Google? The model AI companies adopt when they cater to legal clients is akin to that of a locked filing cabinet in a storage facility: sure, you're sending the data to them, but with a zero-data-retention (ZDR) agreement they ain't looking at it or training on it.
Another CRITICAL point here not mentioned in the article is Warner v Gilbarco; Gilbarco directly contradicts Heppner and indicates that work-product doctrine covers AI-generated chats! https://perkinscoie.com/insights/update/heppner-and-gilbarco...
The law is not settled.
I looked into on-premises AI for legal as a business idea but decided it's not a great idea right now.
I don't want to give them ideas, but surely someone else would have thought of it after reading this article's headline.
Google AI Edge Gallery now runs Gemma 3n E2B-it locally on an iPhone after a 2.5 GB download.
No network calls needed, claims Google.
Self-hosting is always a strong option for privacy seeking people, as it should be.
Seems like there might be demand for chat clients with end-to-end encryption.
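One caveat with "end-to-end encryption" for chatbots: a hosted model has to see the plaintext prompt to run inference, so encryption can really only protect the *stored* chat history, which is what a discovery request would target. A minimal sketch of that client-side flow, assuming the third-party `cryptography` package (the key stays on the user's device; the provider would only ever hold ciphertext):

```python
# Sketch: encrypt chat logs client-side before they are synced to a provider.
# The model still sees plaintext at inference time; this protects the stored
# history only. Function names here are illustrative, not any real client API.
from cryptography.fernet import Fernet

# Key generated and kept on the user's device, never sent to the server.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_message(text: str) -> bytes:
    """Encrypt one chat message; only the ciphertext would be uploaded."""
    return cipher.encrypt(text.encode("utf-8"))

def decrypt_message(token: bytes) -> str:
    """Decrypt a message locally when rendering chat history."""
    return cipher.decrypt(token).decode("utf-8")

stored = encrypt_message("draft notes for my attorney")
assert decrypt_message(stored) == "draft notes for my attorney"
```

Whether ciphertext-only storage would change the privilege analysis is of course a legal question, not a technical one.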
Seems dumb, and like it will cause quite a few issues until it is overturned.
Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.
We call them "chatbots" and anthropomorphize LLMs, but, despite the name of Claude's parent company, Claude is not a person.
Why? The privacy policy explicitly says that when you're using it, you're sending your data to Anthropic.
> Say a person used Excel via Office 365 to run some calculations to be given to their lawyer for their defense. Is that considered to be "communicating with a third party?" I don't think so, it's just a computer tool.
Very possibly, actually. At the very least, I wouldn't assume that it's okay to do that without first consulting with a lawyer. I do know of at least one feature in Office (desktop, not the web version) that prompted lawyers to say "if you don't roll this back, we cannot legally use your product anymore and maintain attorney-client privilege." It depends a lot on the actual contractual agreements in the terms of service and privacy policy, and while I know most people don't read them, those things actually matter!
If so, then does a Google Doc for your attorney, written with Google AI auto-enabled, have attorney-client privilege?
If so, the AI chats for figuring out what you want to say to your attorney would seem to fall under the same category. And so there is either a contradiction or an unintended widening of scope.
Chatbots are not people. They are computer programs. And there's no other realm I can think of where merely interfacing with a computer program breaks attorney-client privilege.
It is equivalent to saying an email to your lawyer breaks privilege because you communicated with gmail. And it gets turbofucked when you consider that a program may be sending your information to an LLM. Would this same judge rule that having copilot installed in Outlook also breaks privilege because they "chatted with an outside party" while drafting an email (even if they didn't intend to send it to copilot)?
I can't think of a reason this isn't about the technology.
I think it's insanely foolish to use these tools in these configurations.
If you must use AI you should be running it locally.
We need some kind of user-chat privilege, much like doctors and their patients, or lawyers and their clients.
You send a prompt to a neutral third party who then sends it to an AI model and then routes the response back to you?
Meanwhile, sensible people perform sensitive defense and prosecution related chats anonymously facilitated via local LLMs or cryptocurrency.