A Tale of Two Cases – Attorney Client Privilege and the Use of AI Tools
Two high-profile cases, issued within a week of each other, have recently weighed in on the use of AI tools and attorney-client / work product privilege. While reaching opposite conclusions, they are not irreconcilable; rather, they highlight the fact-intensive nature of privilege questions at the margins and the risks of using AI in current or anticipated litigation under certain circumstances. Our more detailed summary below highlights where the courts diverge and why. At their heart, however, traditional privilege and attorney work product doctrines continue to apply (without AI exceptions), based on: (1) who uses the AI; (2) for what purpose(s); (3) under whose direction; and (4) whether confidentiality is preserved.
United States v. Heppner (Feb 17, 2026 - S.D.N.Y.)
The Heppner court denied both attorney-client privilege and work product protection for AI-generated documents created by a represented defendant using Anthropic’s consumer Claude platform. In 2025, Heppner received a grand jury subpoena relating to securities and wire fraud, making clear that he was the target of a federal investigation. Heppner – without any direction from his attorneys – then used the AI platform to prepare reports that “outlined defense strategy” and “what he might argue with respect to the facts and the law” in anticipation of a potential indictment. Heppner subsequently sent these reports to his attorney in anticipation of litigation. In discovery, Heppner’s counsel asserted privilege over the documents, while the government moved for a ruling that they were not protected.
In support of privilege, Heppner argued that:
- The information input into the AI platform was information learned from his attorneys;
- Heppner created the documents for the purpose of speaking with counsel to obtain legal advice; and
- He subsequently shared the AI outputs with his counsel.
In support of discoverability, the government attorneys countered that:
- An AI platform is not an attorney or legal professional;
- The communications were not confidential; and
- The materials were not prepared by or at the direction of counsel.
In his opinion, Judge Rakoff agreed with the government attorneys, holding that:
- Because an AI platform was not an attorney, and lacks the “trusting human relationship” required for recognized privileges involving the use of software, the communications were not between Heppner and counsel;
- The communications were not confidential – not merely because a third-party AI platform was involved, but because the platform states that it trains the model with user inputs and reserves the right to disclose them. The AI platform’s privacy policy, to which users consent, permitted data collection, model training, and disclosure to third parties including government authorities. (citing In re OpenAI, Inc. Copyright Infringement Litigation, 2025 WL 3468036 (S.D.N.Y. Dec. 2, 2025) (holding that AI users do not have substantial privacy interests in their “conversations with [another publicly accessible AI platform] which users voluntarily disclosed” to the platform and which the platform “retains in the normal course of its business.”)); and
- Heppner did not communicate with the AI platform for the purpose of obtaining legal advice because: (A) the platform disclaims providing legal advice; and (B) counsel conceded it did not direct Heppner to run the searches.
Although Heppner’s counsel argued the exchanges were made for the express purpose of talking with counsel, the court held that the relevant inquiry is whether Heppner intended to obtain legal advice from the AI platform, not whether he later shared his outputs with his lawyers. Doing so did not “alchemically change[]” the communications into privileged ones upon being shared with counsel. However, the court did note that had counsel directed Heppner to use an AI platform, the outcome might have differed – leaving open a potential pathway for privilege where AI use occurs under attorney supervision.
Regarding attorney work product protection, the court similarly ruled against Heppner. Even if the documents were prepared “in anticipation of litigation,” they were not prepared “by or at the behest of counsel,” and they did not reflect defense counsel’s strategies. Counsel conceded the documents were prepared by the defendant of his own volition; while they may have affected counsel’s strategy, they did not reflect it at the time they were created. The court noted that the Second Circuit has “repeatedly stressed that the purpose of the doctrine is to protect lawyers’ mental processes.”
Warner v. Gilbarco (Feb 10, 2026 - E.D. Mich.)
A week earlier, the Warner court found that a pro se plaintiff’s use of an AI tool in connection with litigation preparation was protected as work product. In Warner, plaintiff Warner brought claims against her employer alleging race discrimination. A discovery dispute arose when the defendants moved to compel production of “all documents and information concerning [Warner’s] use of third-party AI tools in connection with this lawsuit.” The defendants also sought to overrule the plaintiff’s work product objection to those materials, arguing that any protection had been waived by inputting litigation materials into a public version of an AI platform. Notably, in October 2025 the court had modified its existing protective order to provide that “any documents marked confidential shall not be uploaded onto any AI platform” – a modification prompted by defendants’ concern that the plaintiff might upload confidential discovery materials to an AI tool, which could compromise confidentiality.
In holding that the materials were protected, the court rejected the defendants’ argument that the plaintiff’s use of a third-party AI platform effectively waived any work product protection, finding that what the defendants really sought was plaintiff’s “internal analysis and mental impressions -- i.e., her thought process -- rather than any existing document or evidence.” Critically, Fed. R. Civ. P. 26(b)(3)(A) protects materials prepared “by another party or its representative” -- not solely materials prepared by or at the direction of an attorney. A pro se litigant is the party, and the materials she prepares in anticipation of litigation qualify under the plain text of the rule.
The court rejected the discovery request as a “fishing expedition” based on “speculation about what might exist in Plaintiff’s internal drafting process, untethered from Rule 26 relevance, disregarding the heightened protection afforded to opinion work product, and improperly attempting to manufacture a waiver where none exists.” The court noted that “to the extent Defendants argue that Plaintiff waived the work-product protection by using ChatGPT, the work-product waiver has to be a waiver to an adversary or in a way likely to get in an adversary's hand.” Notably, the court also characterized the defendants’ theory as one that “if accepted would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”
Reconciling the Two Cases
Although they seem to reach opposite conclusions, these two cases, issued within a week of each other, reflect the fact-specific nature of privilege law. Both cases show that the use of AI tools for research, analysis, and strategic planning in connection with litigation and investigations is becoming more common. But such use carries material discovery and privilege risks.
The Heppner court makes clear that the represented defendant’s output from an “open” AI platform, even if created in preparation for privileged conversations with his attorneys, was not generated at the behest of his attorneys, was not a conversation with an attorney, and was not privileged. By contrast, the Warner plaintiff was pro se (and thus acting as her own attorney), and the court viewed the defendants’ attempt at discovery as a “fishing expedition” into her own protected thought processes. Aside from the “represented” vs. “pro se” distinction, the two cases also appear to leave room for a factual scenario in which a law firm representing a defendant carefully coordinates the client’s use of a “closed” enterprise version of an AI tool (within a company or firm environment) under defined parameters, labels, and instructions, potentially supporting a privilege claim. That said, the details matter, and the case law is still evolving.
Implications for Corporations and Corporate Counsel
Society at large is growing increasingly accustomed to posing important and sometimes sensitive questions (along with related information) to AI tools. Whether those tools are genAI LLM models used to assess risk and generate information or documentation, or AI “note takers” running during otherwise privileged conversations, the use of AI tools continues to increase and diversify. Litigants, including current and former employees, will be increasingly tempted to use AI to help them understand their risks, particularly in contentious or potentially contentious situations. Yet it is exactly such contentious situations that create the risk that these inputs and outputs could be deemed unprivileged and discoverable.
There will inevitably be more opinions like Heppner and Warner in the future – in-house and external counsel should continue closely monitoring developments as guidance on this issue continues to evolve. Meanwhile, companies and their counsel should consider the following best practices:
- Employee Policies. Generate new policies or amend existing ones (e.g., legal, compliance, internal investigations, incident response, etc.) to prohibit employees from posting legal questions or prompts containing matter-relevant facts to any AI tool without prior approval from in-house counsel. Any use of generative AI in connection with pending/anticipated litigation or investigation should occur at the direction of and under the supervision of counsel, as indicated by the Heppner court’s suggestion that attorney-directed use may have opened a pathway to a privilege claim.
- Update Litigation-Related Documents. During litigation or internal investigations, companies and counsel should include instructions emphasizing the importance of not using AI tools unless specifically directed to do so by counsel. This should occur at the same time as (if not before) the distribution of litigation hold notices or document retention obligations. Similarly, discovery requests should be broadened to cover the counterparty’s use of AI tools.
- Employee Education/Awareness. Update regular employee training and awareness/education to include educating employees about the risks of inputting sensitive information into AI prompts. Employees and clients should clearly understand that communications with public AI platforms are not confidential and should not be treated as substitutes for privileged communication with attorneys, even if their content and purpose are related to current or anticipated litigation.
- Use of Closed AI Tools. To the extent AI tools may be used in connection with legal work, such usage should be routed through the company’s internal or external counsel and restricted to AI tools that provide stringent confidentiality protections, such as internal, closed systems that do not train public models (e.g., Harvey AI, Lexis+ AI, or Thomson Reuters CoCounsel).
- Limit Exposure. If employees or defendants mistakenly create non-privileged AI content regarding a sensitive issue, they should not compound the error by emailing the content around. Instead, they should notify counsel immediately. Subsequently sharing AI-generated content with legal counsel for discussion does not cloak in privilege materials that were not already privileged; further sharing will simply generate additional documents or messages that may alert the other side to their existence.
About Maynard Nexsen
Maynard Nexsen is a nationally ranked, full-service law firm with more than 600 attorneys nationwide, representing public and private clients across diverse industries. The firm fosters entrepreneurial growth and delivers innovative, high-quality legal solutions to support client success.