A federal judge just ruled that everything you typed into ChatGPT about your legal matter can be used against you. Here's what happened, what it means, and what you can do about it.

On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a ruling that sent shockwaves through the legal profession. In United States v. Heppner, he held that documents a criminal defendant generated using a publicly available AI tool were not protected by attorney-client privilege or the work product doctrine.

This wasn't a hypothetical. This was a real defendant, facing real federal charges, whose AI conversations were handed directly to prosecutors.

And the implications extend far beyond one case.

What Happened

Bradley Heppner was indicted on securities fraud and wire fraud charges in October 2025. After learning he was the target of a federal investigation, he did what millions of professionals do every day: he turned to AI for help.

Using Anthropic's Claude, Heppner generated thirty-one documents outlining potential defense strategies, legal arguments, and factual analyses related to his case. He later shared these documents with his attorneys to help inform their strategy.

When the FBI searched his residence, they seized his devices and found everything: the AI-generated documents, the prompts he used to create them, and the full conversation logs.

His defense team argued privilege. The court disagreed on every count.

Why the Court Said No

Judge Rakoff's reasoning came down to three fundamental failures:

1. Claude is not your lawyer.

Attorney-client privilege requires a communication between a client and an attorney. Claude is neither an attorney nor an agent of one. When Heppner typed his case details into a public AI tool on his own initiative, without his attorney directing him to do so, he wasn't communicating with counsel. He was talking to a third party.

2. There was no expectation of confidentiality.

Anthropic's terms of service explicitly state that user inputs may be retained, used for model training, and disclosed to third parties, including government authorities. The court found that anyone who agrees to those terms has no reasonable expectation that their conversations are private.

As the court put it: the privacy policy "expressly stated that users should have no expectation of privacy in their inputs."

3. Sharing with AI is sharing with a third party.

This is the critical point. Under established privilege law, disclosing privileged information to a third party waives the privilege. It doesn't matter that Heppner later sent the AI-generated documents to his lawyers. The privilege was already gone the moment he typed his case details into Claude.

Documents that are not privileged at the time of creation do not become privileged merely because they are later shared with counsel.

The Work Product Argument Failed Too

Heppner's defense also invoked the work product doctrine, which protects materials prepared in anticipation of litigation. The court rejected this as well, because:

  • The documents were not prepared by an attorney or at an attorney's direction
  • Defense counsel conceded that Heppner created them "of his own volition"
  • The documents did not reflect the mental impressions or legal strategies of counsel

As DLA Piper noted in their analysis: "The principle applies whether the client creates those documents with an AI tool or crayons."

What the Legal Profession Is Saying

The reaction has been swift and unanimous in its concern. At least ten major law firms published analyses within weeks of the ruling:

Venable LLP clarified that the ruling "does not declare Gen AI incompatible with legal privileges" but that "unsupervised client use of a publicly available Gen AI platform defeated attorney-client privilege." They noted that "attorney-directed use under enforceable confidentiality frameworks may present a materially different analysis."

Ogletree Deakins (a 900+ attorney firm) wrote that the ruling "highlights the risks of using consumer AI tools in legal contexts and underscores the importance of secure, attorney-directed AI platforms to maintain privilege protections." They specifically noted that "enterprise-grade AI with contractual confidentiality protections might preserve privilege."

Husch Blackwell called it "the first decision that addresses whether a client's AI chats are privileged" and warned: "Feeding legal analysis, or correspondence with counsel or an expert, into an open AI system potentially waives the attorney-client privilege, confidentiality, and trade secret protections."

Freshfields, one of the world's largest law firms, pointed to the contrasting Tremblay v. OpenAI case, where privilege WAS preserved because the user had a reasonable expectation of privacy based on the platform's user agreement. The distinction? The tool's confidentiality protections matter.

Falcon Rappaport & Berkman advised: "Clients should avoid privileged conversations with AI chatbots, and work collaboratively with their law firm to explore collaborative AI environments that operate within the attorney-client relationship."

What This Means for You

If you're an attorney, a business professional handling sensitive data, or anyone who has ever typed confidential information into ChatGPT, Claude, Gemini, or Copilot, here's the reality:

Every conversation you've had with a public AI tool is discoverable.

Not might be. Is.

Opposing counsel can subpoena your AI chat history. Prosecutors can seize it. Regulators can request it. And as Husch Blackwell warned, you should now expect AI usage to become a standard deposition question and discovery request.

For Attorneys

  • Immediately advise clients that anything they've typed into a public AI tool may be discoverable
  • Update engagement letters and litigation hold notices to address AI usage
  • Add AI usage questions to depositions and discovery requests
  • Audit your own AI usage for any privileged client information

For Business Professionals

  • Any confidential business data entered into ChatGPT, Claude, or similar tools is no longer confidential
  • Client proposals, financial projections, strategic plans, and employee data shared with AI tools may be exposed
  • According to OpenAI's own data, 27% of ChatGPT consumer messages are work-related
  • Research shows sensitive data makes up 34.8% of employee ChatGPT inputs, up from 11% in 2023

The Path Forward: Enterprise AI with Privilege Protection

Here's what matters most about the Heppner ruling: the court left the door open.

Judge Rakoff explicitly acknowledged that the outcome "might well differ" under different circumstances, specifically:

  • If counsel had directed or supervised the AI use
  • If an enterprise-grade or secure version of the AI tool had been used, with contractual confidentiality protections, prohibitions on data training from inputs, and zero-retention policies
  • If the AI had functioned as a necessary third-party aide under the Kovel doctrine

Venable's analysis confirms this reading: "Enterprise AI deployments typically include contractual prohibitions on training models using customer data, confidentiality commitments for inputs and outputs, defined data retention limits, audit and security controls, and data processing agreements restricting disclosure."

In other words, the court didn't ban AI in legal work. It drew a clear line: if you want your AI conversations to remain privileged, you need an AI tool that is private by design.

No data training on your inputs. No third-party access. No government disclosure clauses buried in terms of service.

The AI tools most professionals use today were not built with privilege protection in mind. They were built to collect data at scale.

The question isn't whether you should use AI. The question is whether the AI you're using was built to protect you.

OpsDoctor is a private AI platform designed to preserve attorney-client privilege and protect sensitive business data. Unlike public AI tools, OpsDoctor provides enterprise-grade confidentiality protections, zero data retention, and full client control over all conversations. Learn more at opsdoctor.ai

Sources

  • United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 10, 2026)
  • Ogletree Deakins, "The Intersection of AI and Attorney-Client Privilege — A Cautionary Tale" (Feb. 2026)
  • Venable LLP, "AI, Privilege, and the Heppner Ruling: What the Court Actually Held" (Feb. 2026)
  • Husch Blackwell, "Heppner v. Claude: The First Privilege Waiver by AI Ruling" (Feb. 2026)
  • Freshfields, "Your AI Chatbot is Not Your Lawyer: AI Privilege Issues in Litigation" (Feb. 2026)
  • DLA Piper, "Are AI-generated documents protected from discovery?" (Feb. 2026)
  • Inside Privacy (Covington), "AI and Legal Privilege: Key Takeaways from US v. Heppner" (Mar. 2026)
  • Chapman and Cutler LLP, "Federal Court Rules That AI-Generated Documents Are Not Protected by Privilege" (2026)
  • Maynard Nexsen, "United States v. Heppner and AI Discovery" (2026)
  • Falcon Rappaport & Berkman, "Your AI Conversations Are Not Privileged" (Feb. 2026)