By Kayla Gaisi
•
March 10, 2026
As generative AI becomes increasingly integrated into our daily lives, it continues to raise legal questions that courts can no longer ignore. This month, Judge Jed Rakoff answered the question of whether communications between criminal defendants and public AI systems are protected from government inspection. The answer was an unequivocal 'no.'

In the case at hand, defendant Bradley Heppner was charged with fraud and arrested a month later, in November 2025. When the FBI executed a search warrant at his home, agents seized documents containing communications between Heppner and the public AI platform Claude AI. According to Heppner's counsel, these communications reflected a defense strategy Heppner had generated in anticipation of a potential indictment. Heppner asserted that the documents were protected either by attorney-client privilege or by the work product doctrine, arguing that he had used Claude to obtain legal advice and had shared the outputs with his attorneys.

Judge Rakoff rejected both arguments. Attorney-client privilege applies only to communications between a client and a professional who owes the client fiduciary duties and is subject to professional discipline; it protects a socially valuable human relationship. However advanced an AI system may be, it cannot meet that definition. Claude is not a human attorney and has no attorney-client relationship with its users, so communications with it cannot qualify for attorney-client privilege.

Beyond this, Rakoff listed other reasons why Heppner's communications with Claude were not confidential. First, Claude is a public AI system whose privacy policy discloses that communications may be shared with third parties, including "governmental regulatory authorities." Second, as his counsel admitted, Heppner sought legal advice from Claude of his own volition, not at their direction.
Even if Heppner received legal advice and later shared it with his counsel, that does not render the initially unprivileged communications privileged.

The related work product doctrine fared no better for Heppner. That doctrine protects materials prepared by attorneys in anticipation of litigation from discovery by opposing parties. Here, the AI-generated documents were not prepared by or at the behest of counsel and did not reflect counsel's strategy, so they fell outside the doctrine's scope.

Judge Rakoff's ruling matters because it preserves the narrowness of evidentiary privileges, which is necessary to protect the judicial system's truth-seeking function. Extending privilege to communications with public AI systems could create a dangerous loophole, one where parties could shield discoverable information by filtering it through a chatbot. Given Rakoff's ruling, the main takeaway is that attorneys should explicitly advise their clients not to share personal or legal information with public AI systems. However routine it has become to ask public AI personal questions, these communications are not confidential and may ultimately be used as evidence in court.