United States v. Heppner, No. 25-cr-00503 JSR
Overview
On February 10, 2026, Judge Jed S. Rakoff of the United States District Court for the Southern District of New York ruled in United States v. Heppner, No. 25-cr-00503 JSR, that documents created by a criminal defendant using a commercial AI platform were not protected by attorney-client privilege or the work product doctrine. While the decision is subject to appeal, it carries substantial implications for individuals and organizations that use consumer-facing AI tools to evaluate legal exposure.
Factual Background
Bradley Heppner, founder and former CEO of Beneficient, faces charges of securities fraud and related offenses. After receiving a grand jury subpoena and engaging counsel, Heppner used Anthropic's consumer Claude AI platform to research legal questions related to the government's investigation, inputting information learned from his defense attorneys. When federal agents executed a search warrant, they seized devices containing approximately 31 AI-generated documents. The government moved for a ruling that the documents were not privileged, which Judge Rakoff granted.
The Court’s Analysis
- Attorney-Client Privilege:
Judge Rakoff applied the Second Circuit's test for attorney-client privilege, which protects communications between a client and an attorney that are intended to be, and actually are, kept confidential and that are made for the purpose of obtaining legal advice (see United States v. Mejia, 655 F.3d 126, 132 (2d Cir. 2011)).
- No Communication With an Attorney. An AI tool holds no law license, owes no duty of loyalty, and cannot form an attorney-client relationship. Anthropic's own materials expressly disclaim that Claude provides legal advice.
- Absence of Confidentiality. Claude's privacy policy advises users that Anthropic collects prompt data, uses it to train models, and may disclose information to governmental authorities. Given these terms, there was no reasonable expectation of confidentiality.
- No Retroactive Privilege. Pre-existing, unprivileged documents do not become privileged simply because they are later shared with an attorney.
- Work Product Doctrine:
The work product doctrine protects materials prepared by or at the behest of counsel in anticipation of litigation. Judge Rakoff found this doctrine inapplicable because Heppner created the AI documents on his own initiative, not at his attorneys' direction. The court noted the analysis might differ if counsel had directed the use, suggesting that in such circumstances Claude might function "akin to a highly trained professional who may act as a lawyer's agent."
- Privilege Waiver Implications:
Perhaps most significantly, Heppner incorporated information from his attorneys into his prompts to Claude. Judge Rakoff agreed with the government that sharing privileged attorney-client communications with a third-party AI platform may waive privilege over those original communications. Voluntarily sharing sensitive information with a platform that retains, trains on, and may disclose that information defeats any reasonable expectation of confidentiality.
Open Questions
The Heppner ruling leaves open whether enterprise-tier AI products with strong confidentiality provisions could support different results, whether AI research directed by counsel might qualify for work product protection, and whether the analysis differs in civil contexts.
Key Implications for Clients & Organizations
This ruling carries substantial practical consequences for individuals and entities using commercial AI tools.
- Consumer AI Tools Lack Privilege Protection
Communications with publicly accessible AI platforms do not satisfy privilege requirements because such tools are not attorneys, disclaim providing legal advice, and do not maintain confidentiality. The conversational interface creates a dangerous illusion of privacy.
- Privacy Policies Are Outcome-Determinative
Courts will examine whether AI platform policies permit disclosure to third parties and governmental authorities. Enterprise-tier agreements with contractual confidentiality protections may support a different analysis.
- Inputting Privileged Information Risks Waiver
Sharing attorney-client communications with a commercial AI platform may waive privilege over those communications, a significant concern for anyone inputting information learned from counsel.
- Work Product Requires Attorney Direction
Materials created independently by a client, without counsel's direction, will not qualify for work product protection.
Recommended Actions
- Organizations should audit AI usage policies to ensure applications involving confidential or privileged information are restricted to enterprise platforms with contractual confidentiality protections.
- Counsel should advise clients explicitly that anything input into a consumer AI tool may be discoverable and is almost certainly not privileged — lawyers should consider including such warnings in engagement letters. Clients should be reminded not to input any information they share with counsel, or any information related to the subject matter of the representation, into consumer AI platforms such as ChatGPT, Claude, Gemini, or similar tools. Doing so may waive attorney-client privilege over that information and over related communications.
- Organizations should implement guardrails as needed restricting input of privileged or investigation-related information into consumer AI systems, and train personnel on appropriate AI use in litigation and regulatory contexts.
Conclusion
Heppner is the first federal ruling to address privilege claims arising from consumer AI use for legal research. While grounded in traditional privilege principles, the decision underscores that those principles apply with full force in the AI context. Organizations should reassess their AI governance frameworks now.
Scale LLP Investigations & White Collar Defense Team
Scale's Investigations & White Collar Defense team, led by Peter Lallas, Samer Korkor, and Katie Sweeten, brings decades of experience representing companies and individuals in DOJ investigations and related litigation. With backgrounds that include a former SEC Senior Trial Counsel and former DOJ federal prosecutors, the team offers first-hand insight into enforcement priorities, investigative strategy, and effective defense. Serving clients facing inquiries from the DOJ and other federal regulators, the group also conducts independent internal investigations and helps organizations design and strengthen compliance programs to mitigate risk.
Scale's General Counsel Services team includes former senior in-house attorneys, such as Chris Geyer, who have experience developing policies governing AI usage within organizations and implementing AI in ways that protect privilege and work product. Scale's litigation and regulatory attorneys are highly skilled at developing and evaluating business processes for regulatory compliance related to privacy, security, finance, governance, and more.
This client alert is not intended to serve as or replace traditional legal advice.