Two federal judges — Colorado U.S. District Judge John L. Kane and Pennsylvania U.S. Magistrate Judge José R. Arteaga — included generative AI certification requirements in their court policies and procedures, per Law360 Pulse's tracker.
Judge Arteaga said in his May 15 update to his policies and procedures that attorneys and self-represented litigants must identify which portions of their documents are AI-generated, disclose which tools they used, and certify that all citations have been checked for accuracy.
"Failure to comply with this policy may result in consequences such as referral to the appropriate state bar, monetary sanctions, or any other sanction the court deems appropriate," the judge said.
Pennsylvania U.S. District Judge Kai N. Scott issued a standing order March 3 requiring attorneys and pro se litigants to disclose use of generative AI and certify that citations are accurate.
Texas U.S. District Chief Judge Randy Crane implemented a general order May 7 reminding lawyers and self-represented litigants of their obligation under the Federal Rules of Civil Procedure to ensure that information in court filings is accurate.
He also warned filers against submitting AI-generated court documents without checking them for accuracy, noting that generative AI tools can produce false information.
"Any attorney or self-represented litigant who signs a pleading, written motion, or other paper submitted to the court will be held responsible for the contents of that filing," Judge Crane said.
Texas U.S. District Judge Marina Garcia Marmolejo included a similar reminder in her procedures for civil cases updated in May, which referred to Judge Crane's order.
"All counsel and pro se parties are reminded that, consistent with Federal Rule of Civil Procedure 11, the person signing any pleading, motion, or other paper remains fully responsible for its content, regardless of whether it was drafted in whole or in part by generative AI," she said.
The orders issued so far this year are similar to those issued in 2023 and 2024, though in earlier orders a few federal judges went so far as to ban the use of AI to draft court documents.
In 2023, federal judges started issuing standing orders on the use of generative AI in response to a case where two New York personal injury attorneys submitted a ChatGPT-generated brief with fake case citations. The attorneys were sanctioned for their mistake.
Some legal scholars have raised concerns about these orders, saying they might discourage the use of AI and create more barriers for self-represented litigants.
Despite dozens of federal judges implementing AI orders and rules, lawyers are continuing to file court documents with fictional case citations.
Last month, a Florida federal judge expressed outrage over an attorney submitting multiple AI-generated documents with fake case citations and quotes.
That same month, Butler Snow LLP lawyers told an Alabama federal court that fake AI-generated case citations in two filings were an "isolated event," and the firm revised its policies and procedures to prevent similar mishaps.
Attorneys recently told Law360 Pulse that they are not surprised that federal judges are continuing to issue orders and guidance on use of generative AI, as lawyers keep submitting court documents with false case citations.
Katherine Forrest, partner at Paul Weiss Rifkind Wharton & Garrison LLP and chair of the firm's AI group, said what is more surprising is that so many lawyers are willing to submit AI-generated legal research to courts without checking the results.
"It's suggestive that they're not only overly trusting, because there could be still hallucinations, but they're not really not listening and understanding the courts when they say, 'No, we're really serious about this,'" she said. Forrest served seven years as a New York federal judge.
Jacob Canter and Joachim Steinberg, attorneys at Crowell & Moring LLP, said that even though professional ethics and federal rules still apply to generative AI, lawyers have lost some old safeguards they may have relied on with earlier technologies.
Canter, a member of Crowell & Moring's AI steering committee, noted that before generative AI tools were an option, lawyers doing legal research would simply get no search results if there was no applicable case law.
Generative AI legal research tools are different in that they will make up case law when nothing is applicable, he said.
"There's a lot of opportunity here for efficiencies, for putting together first drafts that can save clients money potentially, but at the same time, that means that certain guardrails that were in place ... are just gone," Canter said.
Steinberg, a chair of Crowell & Moring's internal AI working group, added that attorneys have an ethical obligation to understand the difference between old and new legal research tools.
"You wouldn't trust a carpenter who didn't know the difference between a saw, a wrench and a hammer. You should not trust a lawyer who doesn't know the difference between different kinds of research tools," he said.
Forrest noted that generative AI is bringing new attention to a longstanding problem of some attorneys not thoroughly reading cases they are citing in their court filings.
She said that when she was a federal judge, lawyers often submitted filings with case citations that didn't support their arguments. Sometimes, she added, the bad citations were pointed out by opposing counsel or law clerks.
"Sometimes your law clerk finds it and says, 'Well, no, that deposition citation doesn't say that at all. It says X. And they've said it says Y,'" Forrest said.
While fake case citations in court filings seem to be an unabating problem, some attorneys believe that generative AI tools will improve to the point where they rarely, if ever, hallucinate or produce false information.
Attorneys told Law360 Pulse that they predict generative AI tools are six to 18 months away from becoming nearly hallucination-free.
Shawn Helms, co-head of McDermott Will & Emery LLP's technology and outsourcing practice, said that generative AI models are continually getting better and their hallucination rates are decreasing.
"At some point, these tools are going to be so good that they're not going to be checked appropriately and it will be okay 99.9%," he said. Helms is also co-founder of McDermott's AI cross-practice group.
Steinberg said that the elimination of hallucinations won't end the bigger problem of lawyers being overly reliant on technology.
"To me, the pressure to use tools to replace certain aspects of human lawyering is likely to be the ongoing problem even as the technology evolves," he said.
Forrest predicts that once the issue of hallucinations is resolved, a new problem will emerge: attorneys using generative AI to draft court filings and submitting those documents without fully understanding their arguments.
"We need to not fall into the trap of thinking that the AI problems we have today are the AI problems we're going to see tomorrow," she said.
--Editing by Adam LoBelia.