Judges And Law Scholars Divided Over AI Standing Orders

Several federal judges have issued standing orders restricting or setting guidelines for the use of artificial intelligence, citing accuracy problems with the technology, but some legal scholars have raised concerns that the orders might discourage attorneys and self-represented litigants from using AI.

About 1.6% of more than 1,600 U.S. district and magistrate judges had issued 23 standing orders on AI as of February, according to a Law360 Pulse tracker. Some of the orders have been signed by more than one federal judge.

Most of the orders allow attorneys and self-represented litigants to use AI-generated content in their court filings as long as they disclose which content was produced by AI and certify that it is accurate.

For example, Hawaii U.S. District Judge Leslie E. Kobayashi wrote in her order, "The court directs that any party, whether appearing pro se or through counsel, who utilizes any generative artificial intelligence (AI) tool in the preparation of any documents to be filed with the court, must disclose in the document that AI was used and the specific AI tool that was used. The unrepresented party or attorney must further certify in the document that the person has checked the accuracy of any portion of the document drafted by generative AI, including all citations and legal authority."

While some orders simply remind attorneys and self-represented litigants of their obligations to submit accurate briefs, other orders outright ban the use of AI-generated content in court filings.

Many orders focus on the accuracy of AI, but a few, such as the one issued by California U.S. Magistrate Judge Peter H. Kang, raise other concerns, including confidentiality.

Judge Kang said in his July 14 order, "In the course of preparing filings with the court or other documents for submission in an action, counsel and parties choosing to use an AI or other automated tools shall fully comply with any applicable protective order and all applicable ethical/legal obligations (including issues relating to privilege) in their use, disclosure to, submission to, or other interaction with any such AI tools."

The orders also vary in specificity. Some specifically call out generative AI and applications like OpenAI's ChatGPT, Harvey.AI and Google's Bard, now called Gemini, while others lump all AI technologies together. The orders range in length from one line to several pages.

More than a third of the orders were issued between May and July, after it came to light that two New York personal injury attorneys had submitted a ChatGPT-generated brief containing fake case citations. The attorneys were ultimately sanctioned for the error.

Since the New York case, several other courts, including Texas and Missouri state appeals courts, have called out litigants for submitting AI-generated court filings with fake case citations. A Manhattan federal judge also criticized a law firm for using ChatGPT to support its attorney fee request of more than $100,000.

Judge Kang told Law360 Pulse in a recent interview that attorneys facing sanctions for fake case citations keep arguing that they didn't know generative AI could produce false information, so he sees his standing order on AI as a way to educate attorneys and pro se litigants about the technology's risks.

"By issuing public standing orders such as this, I hope in some way to help promote public trust in the judicial system by demonstrating to litigants that the court is aware of these cutting edge issues and will hold parties to these basic standards in dealing with AI," he said.

Pennsylvania U.S. District Judge Gene E. K. Pratter told Law360 Pulse that she views her AI order as similar to an existing standing order of hers requiring pro se litigants to disclose whether they have received assistance from an attorney not appearing in their case.

"I want to remind the lawyers that I think they should be putting their own skin in the game," Judge Pratter said.

Though judges' AI orders are well-intentioned, some legal scholars worry that the orders discourage the use of technology and create additional barriers for self-represented litigants.

Maura Grossman, an adjunct professor at York University's Osgoode Hall Law School in Toronto, noted that rather than trying to comply with the requirements of federal judges' standing orders on AI, attorneys might simply avoid the technology altogether.

For example, California U.S. District Judge Araceli Martínez-Olguín's standing order requires lead trial counsel to personally verify the accuracy of AI-generated content, but in large litigation, the lead trial attorney is usually not the one who checks the case citations in every motion, Grossman said.

In addition, Judges Martínez-Olguín and Kang advise attorneys to keep all prompts used to generate AI content for court filings in case they become relevant. Grossman said this is another requirement that could deter attorneys from using AI.

"They're more likely to say, 'Just don't use it,'" she said. "People who are required to save every prompt will say, 'Infringes on work product, don't use it.'"

Paul Grimm, a retired judge and director of the Bolch Judicial Institute at Duke Law School, added that attorneys who practice in multiple jurisdictions are more likely to make mistakes when they must comply with several different standing orders on AI.

Some U.S. district courts currently have two or three different standing orders on AI issued by separate judges, according to Law360 Pulse's tracker.

"If every one of them has their own tailor made order, then the chances of sort of messing up with regard to judge X becomes huge," Grimm said.

Legal scholars have raised another issue with federal judges' standing orders on AI: the barriers they create for self-represented litigants.

Daniel Linna, a senior lecturer and director of law and technology initiatives at Northwestern University Pritzker School of Law, said that orders that outright ban AI remove a valuable resource for self-represented litigants who can't afford a lawyer and don't qualify for legal aid.

He added that it is unclear whether the orders prohibiting the use of AI would also apply to legal aid tools that leverage the technology.

Rather than banning the use of AI, federal judges could check pro se litigants' case citations using the legal research tools available to them through the court, Linna said.

"What we should be doing is using technology in courts to have online dispute resolution, have AI tools that help guide people through the courts so they can comply with these procedural rules, and we could absolutely build those things," he said. "We could create courts that would accommodate people and help them through the court system, but instead we're punishing self-represented litigants and telling them they can't use these tools."

Grossman and Grimm said a better alternative to individual judges' standing orders on AI would be for courts to issue a single rule covering all the judges in a district. The Fifth Circuit is currently mulling this approach.

By going through the rulemaking process, federal courts would be able to address some of the issues that have been raised about individual federal judges' standing orders on AI, they said.

"If a district is going to do it, they need to be thoughtful about what language they use and what is the scope of the order to make sure it's clear to people what the court is trying to get at, what is included and what is not included, so that people understand what their obligations are," Grossman said.

Despite their concerns with individual judges' orders, legal scholars expect more federal and state court judges to issue AI orders this year.

Grimm said that he expects judges will continue to issue individual orders on AI until a court steps up and issues a local rule that other courts will want to adopt.

Grossman added that she expects issues with fake case citations to fade as attorneys learn the limitations of AI, but said judges still need to grapple with deepfake, or AI-fabricated, evidence.

"Deepfakes are not going away, and we are going to see more and more and more deepfakes," she said. "They're more focused on the case hallucinations, and meanwhile, the tsunami's coming of the deepfakes, and they're not focused on what I think is the much bigger problem."

--Editing by Emily Kokoll.

