United States v. Heppner, No. 25-cr-00503 (S.D.N.Y. 2026)
Held: AI-generated documents not protected by attorney-client privilege or work product doctrine.
In February 2026, Judge Jed Rakoff of the Southern District of New York ruled that documents defendant Bradley Heppner created using a commercial AI platform and then sent to his attorney were not privileged. The court found no attorney-client relationship between Heppner and the AI service, no reasonable expectation of confidentiality given the platform's terms of service, and no basis for work product protection because the materials were client-generated without attorney direction.
Critically, the court left open a path for AI used under counsel's direction inside a private environment. Judge Rakoff noted that, had an attorney directed the client to use a private AI system, the analysis might differ.
Any document your client creates using a consumer AI platform — ChatGPT, Claude, Gemini — and then forwards to you may be discoverable. The terms of service of those platforms, which reserve rights to user data, undermine any claim of confidentiality. The fix is not to avoid AI; it's to use AI inside a controlled, private environment at counsel's direction.
Mata v. Avianca, Inc., No. 22-cv-01461 (S.D.N.Y. 2023)
Held: Attorneys sanctioned $5,000 for citing AI-hallucinated cases without verification.
In the case that put the legal world on notice about AI, attorneys representing Roberto Mata submitted a brief citing six cases that did not exist — fabricated by ChatGPT and filed without verification. When opposing counsel and the court could not locate the citations, the attorneys doubled down, submitting false attestations that the cases were real.
Judge Castel imposed $5,000 in sanctions under Rule 11 and referred the attorneys for potential disciplinary action, describing one of the AI-generated legal analyses as "gibberish."
AI tools used for legal research must be verified. General-purpose AI platforms generate plausible-sounding citations that don't exist. The professional responsibility risk is not theoretical — it's Rule 11, malpractice exposure, and bar discipline. AI tools built specifically for legal work, with citations traced back to verified source documents, are a different category entirely.
The Open Question: When Does AI Use Preserve Privilege?
The courts are converging on a framework — and it favors counsel-directed, private-environment AI.
In Heppner and in the broader commentary from legal scholars, courts and bar associations are developing a consistent framework: privilege survives when AI is used at counsel's direction, inside a controlled private environment, where confidentiality is contractually maintained and user data is not collected or used for training.
The New York State Bar Association, in its analysis of Heppner, noted that "courts may hold that the use of these AI enterprise systems, in collaboration and consultation with legal counsel, may be treated more akin to a third-party consultant, and thus privileged." The American Bar Association and several state bars have issued similar guidance.
The distinction is structural, not behavioral: it's not about how carefully you use the AI — it's about whether the platform itself maintains confidentiality by design.
For AI-assisted legal work to preserve privilege: (1) counsel must direct the use, (2) the AI must process data in a private environment, (3) the platform must contractually commit to not training on user data, and (4) reasonable confidentiality must be maintained throughout. Consumer AI platforms fail conditions 2 and 3 by design.
The Bottom Line for Litigation Attorneys
The courts are not saying AI has no place in legal practice. They're saying that the tool matters — specifically, whether it maintains the confidentiality that privilege requires.
Consumer AI platforms (ChatGPT, Claude, or Gemini used directly) fail this test by design. Their terms of service, data collection practices, and lack of private endpoints make confidentiality impossible to guarantee.
AI tools built for legal use — with private cloud infrastructure, zero training on user data, and contractual confidentiality commitments — are positioned differently under the emerging standard.
EvidentIQ is built for the Heppner standard.
Private AWS infrastructure. Zero AI training on your documents. AES-256 encryption. Built to preserve privilege the way the courts are defining it.
Request Early Access →