A landmark federal court ruling in New York has sent shockwaves through the U.S. legal industry, declaring that conversations with AI chatbots like Anthropic's Claude are not protected by attorney-client privilege. The decision, handed down in February by Judge Jed Rakoff of the Southern District of New York in United States v. Heppner, has prompted more than a dozen major law firms to issue urgent client advisories and revise engagement contracts.
The case centered on Bradley Heppner, former chair of bankrupt financial services firm GWG Holdings, who was indicted on securities and wire fraud charges. After receiving a grand jury subpoena, Heppner independently used Claude to generate 31 documents mapping out his defense; the FBI later seized the files. Judge Rakoff ruled the documents were discoverable by prosecutors for three key reasons: Claude is not a licensed attorney, Anthropic's privacy policy reserves the right to share user data with third parties including government regulators, and Heppner acted on his own rather than under the direction of his counsel.
In response, firms like Sher Tremonte have begun embedding explicit warnings into client contracts. A March engagement agreement stated that "disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege." Other firms, including O'Melveny & Myers and Kobre & Kim, are advising clients to use only closed, enterprise-grade AI systems under attorney direction. Debevoise & Plimpton offered tactical guidance, suggesting clients explicitly note in chatbot prompts that they are conducting research "at the direction of counsel" to potentially invoke the Kovel doctrine, which extends privilege to an attorney's agents.
The legal landscape remains unsettled. Contrasting rulings, such as in Warner v. Gilbarco and Morgan v. V2X, have protected AI-generated work product for self-represented litigants, treating AI as a tool rather than a person. For represented parties using consumer AI chatbots independently, however, the risk of exposure is now clear. The Los Angeles Superior Court is separately piloting AI tools for judges, highlighting the technology's growing role on both the bench and the bar side of the courtroom.