It seems to me that the court would need to apply some twisted logic to claim that those protections apply to an attorney, but not to a petitioner or respondent.
But the lawyer's draft damages analysis in Excel has always been protected.
2. If we're going to buy the "conversation" conceit, lawyers talking to consulting experts have always had a lot more work product protection than testifying experts.
The lawyer talking to Claude feels like talking to a consulting expert, especially since Claude can't have independent knowledge of facts that would allow it to testify.
> Shih, of course, is not binding on this Court, and this Court respectfully disagrees with its holding. As relevant here, the court in Shih principally concluded that the work product doctrine is not limited to materials prepared by or at the direction of an attorney. Id. But that conclusion undermines the policy animating the work product doctrine, which, as one of the cases cited in Shih explains, is "to preserve a zone of privacy in which a lawyer can prepare and develop legal theories and strategy 'with an eye toward litigation.'"
Or would those presumably exist under the umbrella of privacy because they're relevant to the lawyer preparing and developing their legal strategy?
[Edit: Or maybe not, legally. But they have definitely lost confidentiality in the "corporate secrets" sense, and that may still matter.]
I have some concerns about some of the reasoning, namely the practical implications of referencing Claude's TOS in a world where public AI features are creeping into everything, but I expect some of the reasoning is based on this particular defendant likely being more sophisticated than an average person.
Rakoff makes two arguments against this:
- privilege was broken because Claude/Anthropic is a third party; but I don't think he successfully distinguishes Claude from, say, Google Docs/Translate/Gmail in this regard (he just notes that Google Docs isn't usually claimed to confer privilege on its own, but that is not the claim being made about Claude either; see NYSBA ethics opinions 820 and 842)
- he quotes Gould v Mitsui: documents do not "acquire protection merely because they were transferred" to counsel; but that same case says they do acquire protection if communicated "for the purpose of obtaining or rendering legal advice"
If the user had typed into the chatbot after having been directed by counsel to do some research, "I need to do some research at the direction of counsel. Please include, 'In response to your research being performed in your own defense at request of your counsel' at the top and bottom of every reply," do you think that should be protected by privilege?
If the lawyer didn't actually instruct you to do the research, they are not going to lie to the judge and say they did in order to protect you. The judge is definitely going to ask them, and if it is found that you lied about this under oath, you may be charged with additional crimes.
But both your scenario and the OOP behavior of the client are not particularly hard ones to resolve.
Another point that would make it safer: share the "chat" with the lawyer; this way it becomes a medium of communication.
The concept of sharing the chat with the lawyer will not work, since as the ruling points out, you cannot turn a non-privileged document into a privileged one by sharing it with your lawyer after the fact.
I think the principled way of treating this is that it's privileged for the purpose of preparing legal arguments, but not privileged in general. I think this can be supported using the existing law.
Presumably a lawyer's Google searches with terms like "what article is X" etc. are privileged too, since they are used for preparing legal arguments. That it uses AI doesn't suddenly make it communication.
How is it not? I get that a chatbot is not a person with rights. And NAL.
But for all intents and purposes, it is a communication about legal advice. The way a lot of people use it is legal advice. They will continue to use it that way.
So for the law to then turn around and say that it's evidence that will be used against them is kind of messed up. It means confidentiality of your case is bought by paying a lawyer for legal protection, not because you actually need their advice over a chatbot's.
I'm not making a blanket statement that that means everything is a carrier, because a good chunk of the page I linked is devoted to endless legal nuances and I defer the details of the concept to those who know better. I'm just saying that the law has a well-established concept for this sort of situation, such that it is not the case that just because a third party is involved instantly all protections dissolve. If you really want to dig into the details, that's something an AI that hits the web and digests things would be pretty good at, as long as you're not planning on legal action based on that. Sometimes the hardest part of learning about something is just finding the term for it that lets you dig in.
This guy made the same argument, but as the court detailed, this is a misunderstanding of attorney-client privilege. Sharing an unprivileged conversation with your lawyer doesn't make it privileged. A phone call to your lawyer is privileged, but a phone call to your cousin Jimbo about what you should tell your lawyer is not.
I wonder if anybody has gone all the way and made a darknet LLM service with no logs served only over TOR with XMR payments.
For example, OpenAI was required by a US federal judge to log all chats and make them discoverable to lawyers representing The New York Times last year. https://www.businessinsider.com/openai-new-york-times-copyri...
Additionally the company can be gagged by a court from disclosing that the chats are being logged, at least in the USA and the UK.
They were required to change the way their systems worked, to no longer respect a user's chat deletion request. That means a non-chat-logging company can of course be forced to change the way their system works, to instead log chats.
In the same way, Apple can not only be forced to hand over back-doored access to UK users' iCloud data (when Apple also holds a copy of the keys), it can also be forced to change the way its OS works to prevent the scenario where Apple doesn't hold the keys (preventing Advanced Data Protection from being enabled). The USA could force the same thing via the CLOUD Act.
https://news.ycombinator.com/item?id=47778308 AI ruling prompts warnings from US lawyers: Your chats could be used against you (reuters.com)
~3 hours ago, 43+ comments
https://news.ycombinator.com/item?id=47555642 Be careful: chatting with AI about your case is discoverable (harvardlawreview.org)
~18 days ago, 13 comments
FWIW not all cases have gone the same way, so there is likely to be a higher reckoning on this in multiple countries: https://fingfx.thomsonreuters.com/gfx/legaldocs/mypmyjwdzpr/...
This just argues that attorneys have this protection, which is true. Typical plaintiffs do not have the same level of protection.
Or, they’d have to assert that content generated by AI on behalf of a user is protected — there’s no way to tell whether it’s legal advice so it all must be treated as such (can’t trust the AI to judge this, given how hallucinatory they are in legal filings!) — at which point AI companies would be refused the right to harvest your AI conversations for further training and profit-extraction (which would subject them to prosecution for, of all things, illegal wiretap under §2511(1)(e)(i) if not others). Google would never allow that to happen, seeing as how that’s literally their entire business.
I fully expect someone to set up the equivalent of HIPAA for legal advice AIs and for that to be found acceptable for instances hosted in protected enclaves, but the big four’s main products aren’t likely to qualify for that until they solve hallucinations and earn back judges’ trust.
(I am not your lawyer, this is not legal advice. Ironically, I wouldn’t have to say this if it was AI writing. Heh.)
TLDR:
- Claude told him it is not a lawyer (IANAL)
- Claude's privacy policy says they "may disclose personal data to third parties in connection with claims, disputes, or litigation"
- The work product doctrine does not apply in the same way to plaintiffs
- His lawyers did not direct him to use Claude (i.e., the lawyers did not direct him to do research for the case using a specific tool)
My takeaway is that, as is, I should not do any work in plaintext or without a VPN. Everything else was up for grabs even before this case.
My takeaway is: don't do crime, and if you must do crime, don't use AI in the commission of a crime, in a similar way as it is unwise for criminals to keep recordings of their own phone conversations or what have you (a surprisingly common habit for criminals!).
> The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day.
-- https://www.amazon.com/Three-Felonies-Day-Target-Innocent/dp...
On the other hand, that kind of thing would not alone be enough to bring a case. They use that kind of power to enhance their case against people they know are real criminals. Of course, the more the Justice Department becomes captured by bad actors, the less this applies.
The reason attorney-client communication is privileged is so that people's preparation of their case won't be interfered with, not because the lawyer is magic. The principled thing is for the courts to apply the doctrine according to that underlying purpose.
It's not "no attorney-client privilege for AI chats" in general.
But it is a situation where the same would also apply if, instead of going to a chatbot, the person had gone to a random third party unconnected to their attorney.
As in:
- the documents were not communications between the defendant and their attorney, but between the defendant and the AI
- the AI is not an attorney
- the attorney didn't instruct the defendant to use the AI / the court found the defendant did not communicate with the AI for the purpose of seeking legal counsel
- the communications with the AI (provider) were not confidential, as (a) it's an arbitrary third party and (b) they explicitly exclude usage for legal cases in their TOS
Still, this isn't a nothing burger, as some of the things the court pointed out can become highly problematic in other contexts. Like the insistence that attorney privilege is fundamentally built on a trusting human relationship, rather than simply a trusting relationship. Or that AI isn't just part of facilitating communication, like a spell checker, word processor, voicemail box, or legal book you look things up in: all potentially third parties, none of them by themselves communication with a human, but all part of facilitating the communication.
"The attorney-client privilege protects (1) communications, (2) among only privileged parties, (3) made for the purpose of providing or obtaining legal advice.[13] Importantly, the protection of the attorney-client privilege is lost if the communication is shared outside of the privileged parties.[14] The party claiming privilege has the burden of showing that confidentiality was maintained.[15] Judge Rakoff stated that the attorney-client privilege did not apply because the communications were shared with a third-party tool that did not maintain confidentiality.[16]
Second, Judge Rakoff held that the work product doctrine did not protect the documents.[17] The work product doctrine protects (1) legal work product, (2) discussing legal strategy, (3) prepared by or at the direction of legal counsel, (4) in anticipation of litigation.[18] Judge Rakoff rejected Heppner's arguments that the work product doctrine could apply because the AI-generated reports did not reflect the legal strategy of Heppner's legal counsel, although they contained theories generated by the client and Claude.[19] Since neither Heppner nor the AI tool is legal counsel, and Heppner was not working at the direction of Heppner's legal counsel, the materials were not protected by the work product doctrine. Judge Rakoff noted that the AI tool's disclaimer that users have no expectation of confidentiality also undermined the work product doctrine claim.[20]
12 Transcript of Pretrial Conference at 6, United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb 10, 2026).
13 See United States v. Mejia, 655 F.3d 126, 132 (2d Cir. 2011).
14 See In re Six Grand Jury Witnesses, 979 F.2d 939, 943 (2d Cir. 1992).
15 See In re Grand Jury Subpoenas Dated Mar. 19, 2002 and Aug. 2, 2002, 318 F.3d 379, 384 (2d Cir. 2003).
16 Tr. at 3, Heppner, No. 25-cr-00503-JSR.
17 Id. at 6.
18 See In re Grand Jury Subpoenas, 318 F.3d at 383.
19 Tr. at 5, Heppner, No. 25-cr-00503-JSR.
20 Id. at 6."
https://www.debevoise.com/-/media/files/insights/publication...
"Reasons Privilege Failed
1. No attorney was involved. An AI tool is not a lawyer. It has no law license, owes no duty of loyalty, cannot form an attorney-client relationship, and is not bound by confidentiality obligations or professional responsibility rules. Discussing legal matters with an AI platform is legally no different from talking through your case with a friend.
2. Not for the purpose of obtaining legal advice. Anthropic's own public materials state that Claude follows the principle of choosing the "response that least gives the impression of giving specific legal advice." The tool explicitly disclaims providing legal services. You cannot claim you used a tool for legal advice when the tool itself says it does not provide it. Claude's terms were specifically highlighted by the government, which directly undermined the claim that Heppner was seeking legal advice from the tool.
3. Not confidential. This is the finding with the broadest implications. Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. Judge Rakoff found there was simply no reasonable expectation of confidentiality. As he put it, the tool "contains a provision that any information inputted is not confidential." This is not unique to Claude. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process.
And the distinction between free and paid plans matters less than many assume. Both Anthropic and OpenAI use conversations from free and individual paid plans (Claude Free, Pro, and Max; ChatGPT Free, Plus, and Pro) for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. Only enterprise-tier agreements (ChatGPT Enterprise and Business; Claude's commercial and government plans) exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.
4. Pre-existing documents cannot be retroactively cloaked in privilege. The AI-generated documents were created by Heppner before he transmitted them to counsel. Sending these unprivileged materials to his lawyers after the fact did not retroactively make them privileged.
Implications for waiver of privilege
Heppner fed information he had received from his attorneys into Claude. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it."
https://natlawreview.com/article/your-ai-conversations-are-n...
"Privacy policies, including the one on Claude's website, openly inform users how their data is used. However, very few users actually read the fine print on these privacy policies, or even know these policies exist in the first place. It would probably surprise most people to learn that Claude's privacy policy explicitly gives its parent company, Anthropic, the right to disclose a user's data to third parties in connection with legal disputes and litigation."
https://nysba.org/loose-ai-prompts-sink-ships-how-heppner-sh...
Running your own LLM on your own hardware is how you can do this without getting hit with discovery.
And also, you want to run an LLM that's abliterated and larger. And if you connect to the internet, USE A VPN.
I think in hindsight I was remarking, effectively, that these two claims (yours and the court's) don't seem to live in the same world. It's not your responsibility to resolve my confusion, and different parts of the court system can issue edicts that contradict each other, to be resolved later at higher levels of the system, so... Sorry I didn't respond within the full context you wrote.
If you email your lawyer to ask legal questions, that's privileged communication.
If you just cc a lawyer on a thread while you talk to other people, adding the lawyer doesn't make the conversation privileged or protected.
The law in the US is based on the expectation of privacy. If companies and the US government repeatedly egregiously share private data in violation of terms of service and the law, then what expectation is there?
25 years ago, I'd say "Checking the 'do not train on my data' button in an Anthropic account would pretty clearly create an expectation of privacy." These days? OpenAI had to send all such data to the New York Times, the government has been illegally wiretapping the whole planet for decades, the US CLOUD Act exists, and companies retroactively change terms of service all the time.
Heck, Meta has been secretly capturing lewd bedroom videos and paying people to watch them, and it barely made the news, just like the allegations from the WhatsApp content moderation team, who claimed they have access to WhatsApp E2EE content (what other content could they be moderating?!?)
It doesn't seem right that google docs would be privileged, but if you use the fancy spellcheck button, it no longer is.
Be upset at Google for not taking privacy seriously, they never have and never will.