ChatGPT Suit Points To Ups And Downs Of Pro Se AI Use

By Cara Bayles | May 11, 2026, 11:21 AM EDT ·

When Graciela Dela Torre set about reopening her previously settled case against her insurance company over carpal tunnel syndrome and tennis elbow claims, she docketed about four dozen filings in just over a year, a feat for anyone, let alone a self-represented litigant.

She simultaneously filed a new suit seeking to revive her claims against the insurer, Nippon Life Insurance Co. of America.

But her briefs, which made arguments about the Employee Retirement Income Security Act and the Health Insurance Portability and Accountability Act, and brought claims of fraud and breach of fiduciary duty, didn't always conform to the norms of such filings. The font and formatting were unusual. They included "scales of justice" emojis in the headers. One case citation was garbled.

In March, Nippon turned around and sued OpenAI in federal court, alleging that its ChatGPT software was advising Dela Torre and writing her briefs. Nippon's complaint claimed the bot had engaged in tortious interference with its prior settlement with Dela Torre and "the unlicensed practice of law."

That case, filed in the Northern District of Illinois, is still in its early stages, but it highlights many anxieties and hopes about pro se litigants using generative artificial intelligence to churn out legal arguments. The technology raises concerns about confidentiality, hallucinations, and a host of ethical issues. Some advocates for expanding legal services worry Nippon's case might hinder the use of technology that can democratize access for litigants with limited means in cases against financial firms and other large institutions.

Nippon's suit "succeeded in surfacing a question here that the legal profession needs to deal with" before it dives headfirst into AI-enabled services, said Mark McCreary, who co-chairs Fox Rothschild LLP's AI practice and serves as the firm's chief artificial intelligence and information security officer.

"Does an AI tool move from providing general information to rendering tailored legal advice for a specific person when they actually go in and they ask it a question?" he asked. "There's a lot of distinctions anytime you have a product like this of where do you draw the line, what's permissible. That's why this is an interesting case. We're going to find out which side of the line this falls on."

Many elements of the allegations epitomize the potential and pitfalls of AI. Dela Torre's filings included a citation to "Carr v. Gateway," which she claimed was a 2013 federal district court decision that found Gateway couldn't compel arbitration in an ERISA dispute. Nippon, after looking up her citation, argued the case was a hallucination that "only exists in Dela Torre's papers and the 'mind' of ChatGPT."

The citation, however, does not appear to be entirely invented. There was a case by the same name finding that the computer company Gateway could not compel arbitration, but it was a putative consumer protection class action, decided by the Supreme Court of Illinois in 2011.

False citations, which plague well-resourced BigLaw attorneys and pro se litigants alike, are not uncommon in AI-generated legal briefs. Courts have grappled with how to handle the problem, issuing standing orders that require AI disclosures and promises of human review, as well as sanctions for false citations.

It's easy to see why courts are alarmed. Those hallucinations present a threat to the legitimacy of legal argument, according to Brad Wendel, a Cornell law professor who writes about legal ethics.

"I'm really worried this is going to define the standard downward, and courts are going to start saying: 'Well, look, even Boies Schiller and Sullivan & Cromwell screwed this up. So, how can we punish a solo practitioner who's really busy or a pro se?'" Wendel said. "There has to be this really determined effort to hold the line and to keep the norm where it's at, and not let it erode."

Another dilemma is posed by Dela Torre's alleged use of the chatbot for practical advice. She had settled her claims with Nippon in 2024, but a year later, she worried the settlement's terms were the result of errors or omissions in the record, and wondered if she'd gotten a good enough deal. According to Nippon's lawsuit, when her attorney told her she couldn't reopen the case, she asked ChatGPT if he was gaslighting her, and the machine gave her an answer that confirmed her suspicions.

Chatbots are designed to please, and will often answer a leading question in the affirmative. That's not true of a human lawyer, who has a reputation, a law license, and time and resources to protect. AI may be able to spit out a passable legal argument, but it's no replacement for human judgment, according to Angela Tripp, a program officer for technology at Legal Services Corporation.

"Talk to a legal aid lawyer and they'll say — all day long, they tell people — 'No, you can't do that,' and disappoint people with the realistic shortcomings and limitations of our justice system," she said. "AI can't grapple with that. It wants to see that everything is possible, because probably somewhere in its database, there's some piece of information that says that it is possible, because it's feeding on what may or may not be true, or is only true in a particular situation."

According to Nippon's complaint, many of Dela Torre's filings predate an October 2025 addition to OpenAI's terms that prohibits users from turning to ChatGPT for legal advice. That provision will likely insulate it from liability involving future litigants, according to McCreary of Fox Rothschild.

But the case still poses ethics and liability questions for AI services, particularly if courts find the new terms don't provide them a meaningful shield.

"If that's not enough, is the response then that OpenAI and Google and Meta all need to reprogram their tool so that it will not give legal advice? I mean, do they need to reprogram it to not tell you how to fly a plane?" McCreary said. "I know they did reprogram it in the early days to not tell people how to build a bomb."

A spokesperson for OpenAI declined to answer questions about its possible liability, saying only that the complaint "lacks any merit whatsoever." Dela Torre and an attorney for Nippon did not respond to requests for comment.

The lawsuit brings up what Wendel calls "an old issue: the unauthorized practice of law by machines." He often teaches about the Janson v. LegalZoom case, in which a Missouri federal court found that a class of consumers could move forward with allegations that fill-in-the-blank software had engaged in the unauthorized practice of law, because it offered a service: the automated preparation of documents, which were reviewed by nonlawyer employees who copy-edited them. The judge wrote that the fact the document was "prepared using a computer program rather than a pen and paper does not change the essence of the transaction."

"Now you have these very powerful, large language model AI things out there that can provide legal advice or draft pleadings, or write nasty-grams to insurance companies," Wendel said. "Clearly, if a human did those things, it would be the practice of law. So what's the status of a machine doing the same thing?"

Access to justice advocates have long worried that unauthorized practice of law regulations are too broad, limiting tools for self-represented litigants.

"The requirement that only lawyers can practice law is there to ensure that legal services provided to people are high quality. Why? Because bad legal services have all kinds of very clear consumer protection implications," said Stanford law professor David Engstrom. "But the problem with any licensure system is access. It can also become a mechanism for protectionism and a mechanism for propping up the earnings of that profession."

But advocates are split on the best uses of artificial intelligence. Jan Jacobowitz, a University of Miami law professor, co-wrote a 2023 law journal article advocating for a new, less vague definition of the unauthorized practice of law, one that would allow for technologies and services that improve access to legal help. But she told Law360 that while she's hopeful AI can help improve legal knowledge, "We're in a big, transformative time in our society, and the legal system is no exception to that, so there needs to be guardrails put in place."

Many of these questions hinge on whether a chatbot should be defined as a tool, a legal adviser or a third party. Courts have confronted that question in recent months as they've grappled with issues of privilege.

One such case concerned Bradley Heppner, former chairman at GWG Holdings, who was accused of securities fraud. He asked Anthropic's Claude chatbot to generate reports about his then-pending criminal investigation. U.S. District Judge Jed Rakoff in New York ruled in February that because Claude was not Heppner's attorney, his queries to the software were not protected by privilege. That was especially true, he said, because the consumer version of Claude is subject to a privacy policy that allows it to disclose data to third parties. Nor could they be considered work product, since the documents were prepared without the knowledge of Heppner's attorney.

But two other recent decisions went the other way. A U.S. magistrate judge in Michigan denied a discovery request for a pro se plaintiff's AI queries, finding the output was work product because chatbots are "tools, not persons." If uploading information waived privilege, the judge said, it would "nullify work-product protection in nearly every modern drafting environment." Another magistrate judge in Colorado said a chatbot's output for a self-represented plaintiff was work product under the Rules of Civil Procedure, especially because "pro se litigants are forced to act as both party and advocate, simultaneously." The judge barred the plaintiff from uploading confidential information into a chatbot.

It's unclear whether Nippon successfully subpoenaed Dela Torre's ChatGPT history, though its complaint's allegation that she asked the machine if her attorney was gaslighting her suggests some inside knowledge. Dela Torre did lament in a September court filing that Nippon was "demanding access to my private login credentials" in pursuit of its $10 million claim against OpenAI.

She added that the claim "targets a tool specifically designed to help individuals like me: pro se litigants trying to navigate the legal system without the benefit of legal counsel."

While the right to counsel for criminal defendants was enshrined by the Sixth Amendment and the U.S. Supreme Court's 1963 Gideon v. Wainwright case, no equivalent right exists in civil litigation. The Legal Services Corporation — a nonprofit offering civil legal aid to low-income court users — reported in 2022 that it receives 1.9 million requests for help each year, half of which are turned down due to limited resources.

Filings of pro se civil lawsuits seem to be on the rise, though according to an analysis by the National Center for State Courts, as of 2023, they had not yet returned to pre-pandemic levels. The NCSC study, which analyzed new cases in 28 states, found the upswing was mostly driven by contract disputes, which saw a 21% year-over-year increase in 2022 and a 15% increase in 2023.

Another analysis, by Fisher Phillips of new lawsuits in federal court and some state jurisdictions, found a 49% increase in pro se employment cases between 2024 and 2025.

Some scholars have argued AI could have a democratizing effect and improve access to justice, but others urge caution.

"It's a really hard topic," said Tripp of Legal Services Corporation. "Very, very reasonable minds disagree wildly about how much we, as a self-help supporting community, should be pushing AI."

Aubrie Souza, a consultant for the National Center for State Courts specializing in access to justice, self-help and technology, said "access to bad legal help isn't really access to justice."

Engstrom, on the other hand, said the legal system needs to come to terms with the fact that in many cases, pro se defendants find themselves up against institutional plaintiffs — like banks, corporate landlords or the government — and some people are simply priced out of the market for civil legal services.

"If their alternative is a tool that is less than perfect and yet might actually allow them to navigate their case in court, then I think that's something that policymakers should be taking account of," he said.

More common ground can be found on tailored uses of AI, particularly those that speed up, ease or double-check the work of legal professionals rather than replacing them. Several such uses are in development now.

Tripp pointed to an expungement clinic in Tennessee, in which pro bono attorneys used AI to complete in a matter of minutes paperwork that used to take an hour, freeing up more time to discuss the process with their clients. Engstrom said his research team at Stanford is prototyping AI tools to help the Los Angeles Superior Court, including an automated review of default judgment filings in consumer debt collection cases that will ensure they're legally warranted. And Souza cited the NCSC's work with a Pennsylvania court on a credit card debt diversion program. The jurisdiction requires that more than half the debt be established in the card statements, she said, and an AI tool will guide experts through voluminous documents to check for compliance.

When a consortium of experts sought to create a fact sheet of best practices for pro se litigants about AI as a legal assistance tool, they ultimately decided to draft talking points for experts who might interact with those court users, instead of the laypeople themselves, Tripp said, because "they can tailor the guidance to the person."

"What we realized was that we either had to be really general, or we had to go really in depth, like: 'Here's how you phrase the question. Here's how you need to ask the prompt in order to get good information.' And then it's so specific that it's not really useful," she said. "It's like trying to put a little FAQ together about, 'Am I doing the right thing in my divorce?' Well, it depends on where you are, what you're doing."

But the question of how to advise laypeople on using AI for legal services could be rendered moot if the Nippon case becomes the first of many dominoes to fall. Even if Nippon succeeds in court, it could lose in another sense, McCreary said.

"To put it bluntly, if this case is somehow successful and there are restrictions, or they scare these companies so much they put them in place, they might piss off a lot of people," he said.

--Editing by Robert Rudinger.