On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled that documents a criminal defendant generated using a consumer AI platform are not protected by attorney-client privilege or the work product doctrine. The written opinion followed on February 17.
The ruling is the first of its kind. Its reasoning does not stop at criminal defendants. It applies to anyone who has typed confidential information into a consumer AI platform and expected that information to remain private.
What happens next depends on what was typed, who else is involved, and whether a court in any other proceeding finds reason to ask.
The Case
Bradley Heppner, a Dallas financial services executive, was indicted on October 28, 2025 on charges of securities fraud, wire fraud, conspiracy, and falsifying corporate records related to his alleged conduct as an executive of GWG Holdings, Inc. The government alleged he looted more than $150 million from the company.
After receiving a grand jury subpoena and engaging defense counsel from Quinn Emanuel, Heppner used Anthropic's Claude (the consumer version) to prepare reports outlining his defense strategy and potential legal arguments. He did this on his own, without direction from his attorneys. FBI agents executing a search warrant at his residence seized electronic devices containing 31 documents he had generated through those AI interactions.
His defense counsel asserted that the 31 documents were protected by attorney-client privilege and the work product doctrine. The government moved for a ruling that they were not.
The court characterized the question presented as a "nationwide matter of first impression": whether communications with a publicly available AI platform during a pending criminal investigation are protected by privilege or work product.
What the Court Held
Judge Rakoff ruled that none of the 31 AI-generated documents satisfied the requirements for attorney-client privilege or work product protection. Three independent grounds defeated the privilege claim. Any one of them would have been sufficient.
- Not a communication with an attorney. Claude is not a licensed attorney. No fiduciary relationship exists between an AI user and a platform. Attorney-client privilege requires a trusting human relationship with a licensed professional subject to discipline. That relationship cannot exist with an AI platform.
- Not confidential. Anthropic's privacy policy expressly notifies users that it collects data on inputs and outputs to train its models and reserves the right to disclose that data to third parties, including regulatory authorities. Judge Rakoff held there was no reasonable expectation of confidentiality. The tool "contains a provision that any information inputted is not confidential."
- Not for the purpose of obtaining legal advice. Heppner used Claude of his own volition. His attorneys did not direct him to run the searches. The AI tool itself explicitly disclaims the ability to provide formal legal advice. Without attorney direction, work product protection does not attach.
The court also noted: had counsel directed Heppner to use Claude, the analysis might differ. Claude might then have functioned as a lawyer's agent within the protection of the privilege. That question was left open.
Why the Reasoning Reaches Further
Judge Rakoff characterized the ruling as addressing a "nationwide" matter. The three grounds for the decision (no attorney relationship, no confidentiality, no legal advice purpose) are not specific to criminal defendants. They are structural features of every consumer AI platform interaction.
Bloomberg Law noted that a close reading of the privacy terms at the leading AI platforms reveals that several products marketed for "business" or "enterprise" use offer no more legal protection than the consumer services at issue in Heppner. A $20-per-month subscription, as one analysis put it, does not buy privilege.
The ruling builds on a parallel trend in the same courthouse. In the OpenAI copyright litigation, the district court held that 20 million ChatGPT conversation logs are subject to compelled production, finding that users have a "diminished privacy interest" in their AI conversations: unlike wiretap subjects, ChatGPT users voluntarily submitted their communications.
Privilege doctrine has always held that confidentiality is a prerequisite. You cannot claim privilege over a communication you made in public, or to a third party who had no obligation to keep it private, or through a platform whose terms of service reserved the right to disclose it.
Consumer AI platforms are, in the court's reasoning, that third party. What you typed into them was not confidential in any legally cognizable sense. Whether that matters depends entirely on your circumstances.
The Questions That Follow
The Heppner ruling establishes the legal principle. What it does not resolve is the practical exposure for the millions of people who have used consumer AI platforms for work that touched confidential matters. Courts have not yet addressed most of the questions that follow from the ruling. They are not hypothetical. They are pending.
- An attorney used ChatGPT to draft a strategy memo, then shared it with their client. The memo incorporates case theory the attorney received from the client. Is the memo privileged?
- A tax professional used a consumer AI platform to analyze a client's financial documents. The client is now under audit. Are those AI-generated analyses available to the IRS?
- A healthcare administrator used a consumer AI tool to prepare materials related to a regulatory inquiry. The platform retained those interactions. What are the HIPAA implications?
- An executive used a consumer AI platform to prepare for sensitive business negotiations. A dispute from those negotiations is now in litigation, and opposing counsel has issued a discovery request. Are the preparation materials discoverable?
- A financial professional used a consumer AI tool to analyze transactions that are now under investigation. The platform's preservation obligations have been extended by court order in unrelated litigation. Can investigators reach those analyses?
The answer to each of these questions is: it depends. It depends on the jurisdiction, the nature of the proceeding, the specific terms of service in effect at the time of use, whether counsel directed the AI use, and whether a court in a future proceeding finds reason to compel production.
The Heppner ruling does not answer these questions. It establishes the framework within which they will be answered.
The Parallel Development: OpenAI Data Retention
Running parallel to the Heppner ruling is a separate line of cases that bears directly on the scope of potential exposure. In the consolidated copyright infringement litigation brought by The New York Times and other publishers against OpenAI, courts have established that ChatGPT conversation logs are discoverable electronically stored information.
On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued a preservation order requiring OpenAI to retain all output log data that would otherwise be deleted, including conversations users had already deleted. On January 5, 2026, District Judge Sidney Stein affirmed the order, compelling production of 20 million anonymized ChatGPT logs spanning December 2022 to November 2024.
Judge Stein's reasoning on privacy is significant. ChatGPT users, he held, "voluntarily submitted their communications" to OpenAI. That voluntary submission distinguished their situation from subjects of surreptitious wiretaps, who had stronger privacy interests. The data exists. Courts have demonstrated willingness to order its production. Whether any specific individual's data is reachable in any specific proceeding depends on conditions that vary by case.
What This Means for Professionals
The Heppner ruling and the OpenAI data retention orders are separate cases with separate facts. They converge on a single point: consumer AI platform interactions are not private in the way that communications with a licensed attorney are private. The platform's terms of service say so. Courts have now confirmed it.
For attorneys advising clients, the implications begin with the duty of competence under ABA Model Rule 1.1. Comment 8 requires attorneys to keep abreast of the benefits and risks associated with relevant technology. Heppner is now part of that technology landscape. Advising clients about AI tool use without reference to the privilege implications of that use is a competence question.
For other licensed professionals (accountants, healthcare providers, financial advisers), the privilege analysis differs by jurisdiction and by the nature of the professional relationship. The confidentiality analysis does not. Consumer AI platforms have not changed their terms of service in response to Heppner. The conditions that led to the ruling remain in place.
The question is not whether this has happened. It has. The question is what the exposure looks like for any given professional in any given proceeding, and whether the conditions for compelled production will be met before or after they have reason to ask.