Analysis

ChatGPT's Potential Is 'Astonishing,' But Nothing To Fear

(April 5, 2023, 8:48 PM EDT) -- A recent experiment in Brazil pitted ChatGPT against a team of student advocates for a moot hearing, and while the results were "astonishing" in some ways, experts tell Law360 that there's no need to worry about arbitration lawyers being replaced by artificial intelligence any time soon.

With all the discussion surrounding ChatGPT in recent months, it was probably inevitable that someone, at some point, would want to see how a ChatGPT-led team would fare against a team of human lawyers in an arbitration. The experiment conducted in Brazil last month helped to illustrate the potential of ChatGPT, which experts say could change the way the profession is carried out in the coming years.

Two Brazilian organizations — the Chartered Institute of Arbitrators Brazil Branch and the Arbitration Channel — organized a moot competition in early March using ChatGPT. One team was made up of the winning team at the 2023 Rio Pre-Moot held at the Federal University of Rio de Janeiro, a warmup to the Willem C. Vis International Arbitration Moot competitions that are held annually in Vienna.

The other team consisted solely of ChatGPT, which had been coached for several weeks before the event by a young lawyer who had previously competed in the Vienna Vis Moot and a data scientist who teaches machine learning. Its prewritten pleadings and real-time responses to the tribunal's questions were read aloud by two actors during the mock hearing.

Perhaps unsurprisingly, the human team won the moot. But the performance of ChatGPT was nonetheless impressive, the organizers told Law360 recently, noting that ChatGPT was able to present its arguments in a very reasoned and structured way.

"The results were, in some ways, astonishing," said André Guskow Cardoso, a partner at Brazilian firm Justen Pereira Oliveira & Talamini who specializes in digital law.

"I think we were all very positively surprised at the quality of what ChatGPT produced for this demonstration round," added Cesar Pereira, who is also a partner at Justen Pereira and the most recent past chairman at the Chartered Institute of Arbitrators Brazil Branch.

While noting that the team used the widely available GPT-3.5 model, rather than the more advanced GPT-4 — which was released just last month and is currently available only to paid subscribers — Pereira said he was excited about the chatbot's potential.

"Even [using the free version], they were able to put together an argument that was quite good," he told Law360. "Of course, [ChatGPT] lost to the human team — the human team was much better, much more flexible, they brought things that were more related to the case ... but it was quite acceptable what ChatGPT did for basically no cost and [very] little effort."

Since ChatGPT was launched last November, experiments such as this one have gotten the arbitration community talking about its potential to assist lawyers — or, for the more nervous types, how it might oust them from their jobs.

But experts told Law360 that the latter scenario is extremely unlikely.

"Even in 'Star Trek,' they still had lawyers," said Ralph Losey of Losey PLLC, who specializes in representing companies and individuals in information technology issues and who lectures frequently on technology law.

But the former scenario may already be happening, even though the chatbot's work still requires human oversight for now.

"The bottom line ... is that where we're at, [ChatGPT] does need to be quality controlled by a human expert, because it's not reliable enough for justice to depend upon it," Losey said. "But boy, it's a great tool."

The chatbot's uses for lawyers right now include writing simple things such as internal memos or possibly emails, and probably, at some point, documents such as wills and trusts. But many experts say its real potential is in legal research — particularly for young lawyers who embrace the technology and learn how to use it effectively.

"It won't replace lawyers or their advisers, but it will very likely amend their roles, and also amend very much what is expected of their skill set — what they're expected to bring to the party," said Tim Harrison, founder and CEO of Arkus Consulting Limited, which specializes in e-discovery, forensic and data services.

That includes knowing how to prompt ChatGPT to complete a complex task such as legal research, a skill that many people interested in the technology are now trying to learn.

"What seems to be a very common thought process at the moment is [that] ... the quality of the input has a very close relationship to the quality of the output," Harrison said. "So what I'm seeing now is a lot of commentary about ChatGPT prompts and the quality of that needing to be really up there if you expect to then receive quality output."

Simon Q.K. Garfinkel, a young lawyer at Canadian firm Taylor McCaffrey LLP who practices in his firm's advocacy, litigation and dispute resolution department, recently penned an article about artificial intelligence and the "very real" legal implications of the technology. He told Law360 that he views ChatGPT as a "fascinating and potentially powerful" tool, particularly for legal research, which currently makes up a substantial part of his workweek.

"The technology we use to do legal research is actually already quite advanced, but I think the ability to read and pull out specific types of cases, or read an entire case in a matter of seconds and pull out key aspects from it, it is extremely exciting to me, and extremely powerful," Garfinkel said.

Like the organizers of the Brazil experiment, however, Garfinkel noted that working with ChatGPT isn't problem-free yet.

"To be perfectly candid, I've experimented with using ChatGPT for legal research, and I don't have enough comfort in the outcomes that it's produced yet," he said. "It's not necessarily designed for that yet. But ... you and I can't read 10,000 cases in five minutes and search for specific cases, a specific set of facts. Artificial intelligence will be able to do that very soon."

Garfinkel noted that while doing legal research, ChatGPT provided case names that, upon review, were not applicable to the situation.

"I think the reality with a lot of legal questions is they are very unique and it's not a black-and-white answer," he said. "If I ask ChatGPT, 'What time is the sun going to rise in Chicago tomorrow,' that's a pretty black-and-white answer, and it could quickly provide me with an explanation ... but then once we're getting into the detailed nuance of case law and legal arguments, it's quite complex, and it just seems that maybe the technology is not there."

Another big problem with ChatGPT that's been widely identified by computer scientists is its propensity to "hallucinate" — meaning that at times it simply makes up information that sounds plausible.

This phenomenon was displayed during the experiment in Brazil, where organizers said ChatGPT at times simply made up cases that sounded plausible while constructing its arguments. Still, Pereira noted that the computer would admit the falsehood if pressed.

Moreover, these hallucinations may even be useful — if you know that they're a possibility.

"If you set up something like ChatGPT not to hallucinate, it will be more compliant, but it will be less creative. So hallucination is something that can be used for good purposes provided that you know it exists, provided that you know that some things you may be hearing from ChatGPT come from hallucination," he said. "But if you [check] what it said, you may be able to use some of that hallucination as brainstorming."

But there are other problems with ChatGPT, too. For one thing, the version used in the Brazil experiment couldn't handle more than about 3,000 words at a time, which became problematic when the coaches tried to feed it the facts of the arbitration case so that it could develop an argument. As a result, the coaches had to break the case into sections and ask ChatGPT for a pleading on each one. Those pleadings were then combined into a single 30-minute pleading, according to Pereira.

This limitation also proved to be an issue when the tribunal asked questions during the hearing, because the coaches then had to feed each question into the appropriate section of the case.
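The chunking workaround the coaches used can be sketched in a few lines of Python. This is a hypothetical illustration based on the article's description, not the coaches' actual tooling: the `ask_model` callable and the 3,000-word cutoff are assumptions standing in for whatever interface and limit they actually worked with.

```python
# Hypothetical sketch of the workaround described above: split a long case
# record into pieces small enough for the model's input limit, request a
# pleading for each piece, then combine the results.

def split_into_chunks(text, max_words=3000):
    """Split text into pieces of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def draft_pleadings(case_text, ask_model):
    """Request a pleading for each section of the case, then join them.

    ask_model is any callable that takes a prompt string and returns the
    model's reply as a string (an assumed interface, not a real API).
    """
    pleadings = []
    for i, section in enumerate(split_into_chunks(case_text), start=1):
        prompt = (f"You are counsel in an arbitration. Based on section {i} "
                  f"of the case record below, draft a short pleading:\n\n"
                  f"{section}")
        pleadings.append(ask_model(prompt))
    return "\n\n".join(pleadings)
```

The same section index would let a coach route a tribunal question to the relevant chunk, which is the second limitation the organizers described.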

There are also issues that arise because ChatGPT is not human, issues that future iterations may never resolve, further underscoring the idea that a computer simply cannot replace the human touch.

For one thing, the computer will not be personally committed to the case. That was an issue in the Brazil moot, Pereira noted — when ChatGPT was asked a question at one point, it gave an answer that, while correct, was contrary to its client's interest.

Nor can ChatGPT strategize like a human lawyer about where and when to file a case, or any of the other numerous strategic decisions that lawyers make on behalf of their clients all the time, Garfinkel said.

And there are certain human characteristics, such as empathy, that are essential for lawyers that ChatGPT will likely never have.

"At the end of the day, lawyers are a service industry. We're serving clients ... and we have to keep them happy. We have to understand our clients' needs and tend to them," Garfinkel said. "Oftentimes our clients' needs are obviously legal in nature, but I think similarly, clients want somebody who's going to help guide them through a complex legal situation and take a very stressful situation and make it at least palatable and less stressful."

He continued: "I think that requires, in some areas of law more than others, and for some clients more than others, at least some degree — some would say a high degree — of emotional intelligence and empathy and compassion." 

Nor do experts like Losey, who acts as an arbitrator in domestic cases, see ChatGPT taking work from a human arbitrator at any point.

"As an arbitrator and as a lawyer for a long time — 40-plus years — I know when things are right and when they're not right," he said. "There's more to law than just reason. That's why it's law and equity, and that's something that takes experience."

"It takes a full human presence, a being, which robots don't have," he added. "You don't want to have an inanimate object that has no feel, that just does things based on pure reason and rationality, that doesn't really get how humans lie and make things up."

--Editing by Alanna Weissman and Jay Jackson Jr.

For a reprint of this article, please contact reprints@law360.com.
