
Are AI Chatbot Prompts Discoverable? Courts Say 'YES!'

Anyone faced with litigation risk should be prepared for the inevitable request for their employees’ user prompts on either enterprise-licensed or individual user accounts on ChatGPT, Claude, or other AI chatbots. In fact, parties have already started making these requests in discovery. Although case law is limited, courts are analyzing requests for AI prompts under traditional discovery principles governing privilege and proportionality.

In May 2025, the U.S. District Court for the Northern District of California considered whether prompts to and outputs from Claude, which defendant Anthropic requested from plaintiffs in Concord Music Group, Inc. v. Anthropic PBC, were protected by the attorney work product doctrine. Anthropic requested that plaintiffs produce all prompts to Claude that plaintiffs input in their pre-suit investigation. Notably, plaintiffs produced 5,000 prompt and output pairs on which they relied in alleging that Anthropic engaged in copyright infringement of plaintiffs’ music in training Claude, but refused to produce all other prompt and output pairs on which plaintiffs did not rely to allege infringement. Citing the N.D. Cal. case Tremblay v. OpenAI, Inc., the Concord court found that an attorney’s undisclosed prompts and outputs can constitute work product. The court then found that, because plaintiffs relied on such work product in their complaint’s allegations, plaintiffs had partially waived the work product immunity with respect to the prompts and outputs on which plaintiffs relied in their complaint. However, because plaintiffs had already produced those 5,000 prompts and outputs and because, at this early stage in the litigation, Anthropic had not established that plaintiffs would rely on the unproduced prompts and outputs to prove their case, Anthropic had no right to those prompts and outputs.

Of course, if an individual or employee inputs prompts into a chatbot independently of any direction from counsel, those prompts and outputs are not subject to the work product doctrine. Judge Jed Rakoff of the Southern District of New York confirmed this in a decision issued just last month in United States v. Heppner. In Heppner, Judge Rakoff ruled that a criminal defendant’s exchanges with an AI platform were neither privileged nor work product, despite the defendant inputting legal advice into the chatbot and subsequently sharing the AI outputs in conversations with counsel. The court found that the AI software is not an attorney, which “alone disposes of Heppner’s claim of privilege.” (Judge Rakoff’s conclusion would apply to AI note-takers and AI transcription services, so corporate counsel should caution against the use of those programs by employees.) Additionally, because the AI software’s privacy policy states that it collects data on both users’ inputs and the chatbot’s output, Heppner could have had no “reasonable expectation of confidentiality in his communications[.]” Finally, the work product doctrine was similarly inapplicable because none of these communications were “prepared by or at the behest of counsel[.]” 

So far, invoking user privacy interests has not been a successful method of combating discovery requests for prompts and outputs. In In re OpenAI, Inc., Copyright Infringement Litigation in the Southern District of New York, OpenAI asked the court to find ChatGPT users’ privacy interests to be weightier than the relevance of the sought disclosures. Plaintiffs, The New York Times and other news outlets, requested the production of 20 million anonymized consumer ChatGPT output logs, asserting that this is crucial evidence showing that OpenAI has trained its models on plaintiffs’ copyrighted works. In affirming Magistrate Judge Wang’s order requiring OpenAI to produce 20 million consumer chat logs, Judge Stein held it was not “clearly erroneous” for Judge Wang to order such a production even when purportedly less burdensome alternatives exist. Judge Stein also declined to adopt OpenAI’s assertion that its users possess a privacy interest akin to a speaker on a wiretapped telephone conversation, noting that privacy interests in wiretapped recordings of private phone conversations “are stronger than the privacy interests in users’ conversations with ChatGPT which users voluntarily disclosed to OpenAI and which OpenAI retains in the normal course of business.”

What is clear is that requests for chatbot prompts and outputs will only increase, and courts will likely address these requests based on existing precedent on relevance, proportionality, privilege, and user privacy. Under existing case law, an employee using an employer’s enterprise license for a chatbot, or even his or her own individual license at work, is simply creating discoverable material (unless such work is being directed by counsel). Corporate counsel should carefully draft policies governing the use of chatbots and the retention of those prompts for enterprise licenses in order to manage litigation risk.