Legal Update

Jun 26, 2023

Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned

We previously wrote about the widely publicized Southern District of New York case, Mata v. Avianca, Inc., involving lawyers who submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT. The judge overseeing the matter held a lengthy, and tense, hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023 sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but its primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on the issue of whether to sanction two of plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing. 

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation on March 1, 2023, in opposition to the defendant’s motion to dismiss. The affirmation, which was drafted by the second lawyer, contained citations to non-existent cases. In a filing on March 15, the defendant pointed out that it could not find these cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25 (attaching some of the “cases” and admitting he could not find others), but did not reveal that all of the identified cases had been obtained via ChatGPT. Only after the Court issued a further order on May 4 directing the lawyer to show cause as to why he should not be sanctioned for citing non-existent cases did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions.

During the June 8 hearing, the judge questioned the two lawyers at length, and under oath. Many of the questions were aimed at establishing that the lawyers had decades of experience and knew about various options for performing legal research. The first lawyer for plaintiff admitted that he signed the papers simply because he was admitted to practice in the SDNY and the second lawyer was not, and that he made no effort to check the work of his partner, who actually did the research and drafted the papers containing the bogus cases. The first lawyer admitted that he did not read any of the initial filings from the defendant or the Court’s initial order that cast doubt on the existence of the cases. The first lawyer also admitted that he made a misstatement to the Court in seeking an extension of time to respond to the Court’s April 11 order directing production of copies of the cases. In that extension request, the first lawyer claimed that he was on vacation when, in fact, it was the second lawyer who was on vacation.

The second lawyer for plaintiff claimed that he was unable to access federal court cases using the legal research program utilized by his firm, and so he turned to ChatGPT, which he thought was a “super search engine.” Even though he admitted that he was unable to locate the cases identified by ChatGPT, he stated that he thought the cases were unpublished or were in databases to which he did not have access. In response to the idea that the cases were unpublished, the judge pointed out that some of the fake case citations included “F.3d” and asked the second lawyer if he knew what “F.3d” meant. Initially, the second lawyer said he did not know, and guessed that it perhaps meant “federal district, third department.” Under further questioning from the judge, he later admitted that “F.3d” refers to a decision published in the Federal Reporter, meaning the decision was not, in fact, unpublished.

Following extensive testimony and the reading of prepared statements by the two lawyers, and a statement from the head of the law firm at which they worked, the judge heard closing arguments from each of the two lawyers’ separate outside counsel. Outside counsel took an interesting approach, downplaying the lawyers’ conduct as mere carelessness that did not rise to the level of sanctionable misconduct. The second lawyer’s outside counsel even blamed the lawyers’ conduct on the fact that lawyers, in general, are notoriously “bad with new technology.”  They also argued that the lawyers had already “suffered enough” due to the widespread publicity of the case, and so no sanctions were warranted.

The judge was not receptive to these arguments and cut off two of the outside counsel before they had completed their arguments. The judge noted that ChatGPT did exactly what the second lawyer asked it to do: he asked ChatGPT to provide cases to support his desired legal argument, and ChatGPT complied by making up cases that did not exist. The judge also emphasized that the defendant had filed a brief on March 15 indicating that plaintiff’s cited cases did not exist, and that the judge was focused on the plaintiff’s lawyers’ conduct after they were put on notice by the defendant.

The June 22 Decision

On June 22, 2023, the Court issued a written decision fining the two lawyers and their firm $5,000. The judge also required the two lawyers to write letters to their client, the plaintiff, and to the judges whose names were listed as the authors of the fake opinions generated by ChatGPT, attaching to the letters the judge’s order, a transcript of the June 8 hearing, and a copy of the lawyers’ prior court submission attaching the fake cases. The judge said that he would not require the lawyers to apologize, “because a compelled apology is not a sincere apology,” and stated that “[a]ny decision to apologize is left to” the two lawyers.

True to his statements at the end of the hearing, the judge focused primarily on the lawyers’ conduct after the defendant put them on notice that the cases might not exist. The judge faulted the first lawyer for failing to do any due diligence whatsoever and for what the judge characterized as a lie to the Court about the lawyer’s need for an extension to file the affidavit attaching the cases. The judge found that both lawyers acted inappropriately when they submitted the April 25 affidavit attaching the purported cases and claiming that they were real cases that came from “online databases,” without revealing the second lawyer’s use of ChatGPT.

The judge pointed out that, even aside from the warnings raised by the defendant and the Court, the plaintiff’s lawyers should have known something was seriously wrong. The cases attached to the April 25 affidavit bore numerous warning signs, including judges’ names that did not line up with the courts supposedly issuing the decisions, plaintiff names and legal reasoning that were inconsistent within the decisions themselves, and decisions that stopped abruptly in the middle of a sentence. Moreover, the second lawyer admitted that he had searched for some of the cases and could not find them, and that he believed the defendant and the Court had been unable to find them as well. Yet, instead of admitting to the Court what had happened, the lawyers still tried to pass off the decisions as real court decisions in the April 25 affidavit.

In light of the foregoing, the judge found that both lawyers had acted in bad faith, and that sanctions were warranted. Separately, that same day, the judge issued an order granting the defendant’s motion to dismiss, not because of the plaintiff’s lawyers’ misconduct, but because the case was barred by the applicable statute of limitations.

Key Takeaways

This situation clearly highlights the risks of using new technology without understanding how that technology works and what limitations it may have.  But, ultimately, this case is less about the use of AI and more about the importance of what to do (or not to do) when you make a mistake.

As the judge noted, this case would have been very different if, after the Court issued its April 11 order demanding copies of the cited cases, the lawyers had come clean and admitted what had happened. Instead, they tried to cover up both what they had done and their doubts about the veracity of the cases by continuing to insist, in the April 25 affidavit, that the cases were real. Even after revealing the use of ChatGPT in response to the Court’s subsequent May 4 order, the lawyers downplayed their conduct by chalking it all up to an innocent mistake born of carelessness.

In their papers leading up to the June 8 hearing and at the hearing itself, the lawyers’ outside counsel doubled down on that strategy, arguing that the situation was created by a mere lack of diligence and understanding about the brand-new AI program. Accordingly, the outside counsel argued that the lawyers had not acted in bad faith and that no sanctions were warranted. But the lawyers and their outside counsel never really grappled with the fact that, in the April 25 affidavit, plaintiff’s lawyers continued to assert that the cases were genuine, despite their own doubts about the cases and the questions raised by the defendant and the Court. During the hearing, it was clear that they had not “read the room,” and these arguments only angered the judge further.

Put simply, what really got the lawyers into trouble was not the use of AI or their carelessness. It was that they tried to cover up what happened and never took responsibility for that cover-up.

Although this situation arose in the context of litigation, as we previously noted, these lessons are applicable to all aspects of legal practice, whether a lawyer is working on a real estate transaction, drafting a will, performing legal research for a court filing, or doing other legal work. AI is a powerful tool that lawyers are incorporating into their practices, but it has not advanced to the point where it can be relied upon for substantive legal information.

Moreover, lawyers are responsible for checking their work and making sure it is accurate.  No matter how careful a lawyer is, we all make mistakes sometimes.  The Mata case is a reminder that it is important to own up to those mistakes, rather than try to run away from them.