Neither judge explained the cause of the errors until the Senate Judiciary Committee contacted them.
The use of generative artificial intelligence has become more common in the US judicial system. Wingate and Neals join scores of lawyers and litigants who have been rebuked for using AI to produce legal filings strewn with errors.
Legal groups are still catching up.
The Administrative Office of the US Courts, which supports the federal court system, issued interim guidance in July that suggests users “consider whether the use of AI should be disclosed” in judicial functions.
It has also established a task force to issue additional guidance on AI use in federal courts.
Grassley said on Tuesday that federal courts need to establish rules on AI use in litigation.
“I call on every judge in America to take this issue seriously and formalise measures to prevent the misuse of artificial intelligence in their chambers,” he said.
Wingate and Neals said in their letters that they took corrective measures after being alerted to the mistakes and will implement additional reviews of court filings before they are submitted.
Neals said he established a written policy in his chambers prohibiting the use of generative AI in legal research or drafting court filings.
Wingate did not immediately respond to a request for comment. Neals’s office declined to comment.
Case one: Mississippi education
Wingate, whom President Ronald Reagan appointed to the court in 1985, was overseeing a case brought by the Jackson Federation of Teachers and other advocacy groups against the Mississippi State Board of Education and other state bodies.
The suit challenged a state law banning public schools from teaching “transgender ideology” and “diversity training” on topics of race, gender and sexual orientation.
On July 20, Wingate granted a temporary restraining order that blocked the state from enforcing parts of the ban. Two days later, in a motion to clarify, Mississippi lawyers said Wingate’s order was replete with errors.
The order named several plaintiffs and defendants, including a college club, a Mississippi parent, students, and government officials, who were not parties to the case, according to the Mississippi lawyers’ response.
The order described allegations that did not appear in the plaintiffs’ complaint and misquoted the legislation it blocked, the lawyers noted. It also cited nonexistent declarations from individuals in support of the restraining order.
Wingate’s office issued a corrected restraining order that evening and told the parties to disregard the previous one. The case is ongoing; in August, Wingate granted a preliminary injunction against the legislation, which Mississippi lawyers have appealed.
Case two: Medical product
Neals, who was appointed by President Joe Biden in 2021, issued an opinion with errors in a federal securities class-action lawsuit against CorMedix, a pharmaceutical company, over allegations that it misled investors about a medical product.
On June 30, Neals denied a CorMedix motion to dismiss the lawsuit. About a month later, lawyers for CorMedix wrote that Neals’s opinion cited fabricated cases and attributed non-existent quotes to real cases relied on in support of the ruling.
The opinion misstated case outcomes, including whether appeals and motions to dismiss were granted, and attributed false quotes to CorMedix, according to the letter.
Neals’s opinion was also submitted as “supplemental authority” in another class-action lawsuit, where the defendants likewise flagged the errors, the letter said.
Neals said the opinion was entered in error and removed it from the court docket. The case is ongoing.
The mistakes in both judges’ orders resembled those caused by AI hallucinations: generative AI, which produces text by predicting which words follow others based on its analysis of written content, can confidently invent facts and false citations. Observers quickly speculated that the errors had come from AI use.
At first, facing questions from lawyers and litigants, neither judge admitted that the errors were AI-related. Grassley, in his speech, called their “lack of transparency … breathtaking”.
The Senate Judiciary Committee wrote to Neals and Wingate in early October inquiring about the mistakes, the committee said.
Both judges said in their responses that the errors were attributable to AI but that the filings were drafts that were mistakenly published before review.
A law clerk in Wingate’s office used the Perplexity AI tool as a “foundational drafting assistant” to synthesise publicly available information on the court docket, Wingate wrote. A law school intern for Neals used ChatGPT to perform legal research, Neals wrote.
(The Washington Post has partnerships with Perplexity and ChatGPT’s creator, OpenAI.)
“I manage a very busy docket and strive to maintain the public’s trust by administering justice in a fair and transparent manner,” Wingate wrote. “Given that I hold myself and my staff to the highest standards of conduct, I do not expect that a mistake like this one will occur in the future.”
“While my experience in the CorMedix case was most unfortunate and unforeseeable, I hope that, at the very least, it will inform the [Administrative Office of the Courts] Task Force’s continuing work and ultimately lead to new meaningful policies for all federal courts,” Neals wrote.