It was a routine divorce case in which the judge asked the husband's lawyer, Diana Lynch, to draft the appropriate order. It's not unusual for an overburdened judge to ask attorneys to prepare a proposed order.
Lynch used artificial intelligence to help compose and research legal filings. The judge signed off on the routine motions without question.
The problem surfaced on appeal, when the court found the order relied on "two fictitious cases" to deny the wife's petition, along with two other citations that had nothing to do with her petition. Georgia Court of Appeals Judge Jeff Watkins suggested the fictitious cases were "possibly 'hallucinations' made up by generative artificial intelligence."
Lynch was hit with a $2,500 fine after the wife appealed. Incredibly, the husband's response to the appeal cited 11 additional cases that were either fictitious or irrelevant.
Worryingly, the appeals court could not confirm whether the fake cases were generated by AI, or even determine whether Lynch herself inserted the bogus citations into the court filings, showing how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars' request for comment, and her website appeared to have been taken down following media attention to the case.
But Watkins noted that "the irregularities in these filings suggest that they were drafted using generative AI" while warning that many "harms flow from the submission of fake opinions." Exposing the deceptions wastes time and money, and AI misuse can deprive litigants of the chance to raise their best arguments. Fake orders can also damage judges' and courts' reputations and promote "cynicism" about the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a "litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity."
Watkins wrote in his decision, "We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI."
John Browning, a retired justice of the Texas Fifth Court of Appeals, published an article in the Georgia State University Law Review warning of the ethical risks of lawyers using AI.
Browning stressed in the article that the biggest threat from using AI was that lawyers "will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment."
Browning told Ars Technica that he thinks it's "frighteningly likely that we will see more cases" like the Georgia case, where "a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order" or even perhaps in "proposed findings of fact and conclusions of law."
According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals that advocates for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can't afford attorneys to file more cases, potentially further bogging down courts.
Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he suspects cases like the Georgia divorce dispute aren't yet an everyday occurrence.
I don't think anyone is suggesting a halt on using AI to file court cases or to assist attorneys with "busy work" such as finding citations for their briefs. But this is a cautionary tale for the entire legal system: we need to make sure the technology doesn't get ahead of our ability to manage and control it.