Grassley Probes Judges' Possible AI Use In Faulty Rulings

(October 6, 2025, 4:51 PM EDT) -- Sen. Chuck Grassley, R-Iowa, chair of the Senate Judiciary Committee, pressed two federal judges on Monday about their possible use of artificial intelligence in court orders that contained a multitude of errors.

Sen. Chuck Grassley on Monday asked two federal judges about errors in court orders that may have involved the use of generative artificial intelligence. (Photo by Samuel Corum/Sipa USA via AP Images)

Judges across the country have been taking different approaches to the use of AI in their courts. The new technology has prompted widespread concern about ethics and accuracy.

"As Chairman of the Senate Judiciary Committee, I am committed to safeguarding litigants' rights and ensuring that every party in federal court receives fair treatment and careful review by the Article III judges confirmed by the Senate," Grassley wrote in letters addressed to U.S. District Judges Julien Xavier Neals of the District of New Jersey and Henry T. Wingate of the Southern District of Mississippi. "No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy."

The situation with Judge Neals involves a securities class action against biopharmaceutical company CorMedix Inc. The directors and officers allegedly misled investors in press releases, earnings calls and regulatory filings as the company pursued U.S. Food and Drug Administration approval of a treatment for catheter-related infections.

In July, Judge Neals withdrew an opinion declining to dismiss the class action after an attorney for the company pointed out that the decision had a "series of errors," such as quotes with mistaken attributions and incorrect court decisions. The letter to the court from CorMedix attorney Andrew Lichtman of Willkie Farr & Gallagher LLP did not mention the possible use of AI.

In late August, Judge Neals filed a corrected opinion in which he once again declined to dismiss the suit.

"Public reporting attributes these types of substantive errors as hallmarks of generative artificial intelligence ('AI') 'hallucinations,'" Grassley wrote to Judge Neals. "Recent reports note that 'a person familiar with the matter' explained 'that a temporary assistant' in your court 'used an artificial intelligence platform' in contributing to the court's original decision, and 'that the opinion was inadvertently issued before a review process was able to catch errors introduced by AI.'"

As for Judge Wingate, he granted a temporary restraining order in July pausing enforcement of a Mississippi law prohibiting diversity, equity and inclusion in public schools, following a lawsuit by various teachers' associations.

Nine days later, Mississippi Attorney General Lynn Fitch asked the judge to explain the "indisputable factual inaccuracies" in his decision. Specifically, Fitch said the order contained false allegations, identified the plaintiffs and defendants incorrectly and misquoted the legislative text.

In early August, Judge Wingate said in an order that his "prompt amendment" to the inaccuracies in the initial TRO was enough, and he didn't have to explain anything further.

In a separate, recent case assigned to Judge Wingate and then referred to U.S. Magistrate Judge Andrew S. Harris, the court issued a show cause order to an attorney over concerns that the attorney had used AI on a brief that was "riddled with citation errors to non-existent authorities," Grassley wrote in the letter. "That attorney was required to explain under oath how the errors occurred and what remedial steps were being taken to avoid such errors in the future."

Given this, Grassley said he was "troubled" that, in the teachers' association case, Judge Wingate "declined to meaningfully address the defendants' concerns — and their legitimate request to preserve the record and correct the factual inaccuracies in the original TRO order."

Grassley asked both judges whether they, their clerks or their staff used AI to write the orders and opinions in each case, or entered "sealed, privileged, confidential, or otherwise non-public case" information into any AI tool. Among other questions, he asked the judges whether they allow litigants in their courts to use AI in their filings. He asked for responses by Oct. 13.

The judges' chambers could not be reached for comment.

--Additional reporting from Hailey Konnath, Lauren Berg, Katherine Smith and Gina Kim. Editing by Dave Trumbore.

For a reprint of this article, please contact reprints@law360.com.