Lawyers Need To Better Understand Perils Of AI, Paper Argues

As generative artificial intelligence becomes more commonplace in the legal industry, attorneys must better understand the limitations of large language models, including their limited capacity to "reason," in order to take full advantage of the burgeoning technology, a Hong Kong-based law professor argues in a new research paper.

In a March paper titled "Caveat Lector: Large Language Models in Legal Practice," Eliza Mik, an assistant professor of law at the Chinese University of Hong Kong, cautioned legal professionals not to believe that ever-popular generative AI systems like ChatGPT can understand and make legitimate legal arguments. The paper was published on the Social Science Research Network in advance of publication in the Rutgers Business Law Review.

Along with previously researching and teaching at Singapore Management University and Melbourne Law School, Mik worked in-house at several software and telecommunications companies in Australia, Poland, Malaysia and the United Arab Emirates. She advised on software licensing, technology procurement, digital signatures and e-commerce regulation, having written her Ph.D. thesis on private law aspects of e-commerce and on general problems of transaction automation.

In an interview with Law360 Pulse, Mik argued that while the future of the legal tech sector is bright, with artificial intelligence likely to play a major role in the industry, most lawyers, and some in the tech space, have only a very limited understanding of its use.

"The trend is to underestimate the complexity of legal work and to overestimate the actual capabilities of LLMs," she said.

Mik's paper explains that large language models simply replicate legal language, predicting subsequent text from statistical patterns in past writing; they do not use actual reasoning to construct a legal argument the way an appellate judge does by drawing on prior cases and everyday logic.
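
The mechanism behind this is next-token prediction: the model selects statistically likely continuations of prior text with no notion of what the words mean. The toy Python sketch below illustrates the idea with a simple bigram model over an invented scrap of legal boilerplate; it is a drastic simplification, since real LLMs use vast neural networks trained on billions of documents, but the predict-from-patterns principle is the same.

    from collections import Counter, defaultdict

    # Invented toy corpus; real models train on billions of documents.
    corpus = (
        "the court held that the contract was void "
        "the court held that the claim was barred "
        "the court found that the contract was valid"
    ).split()

    # Count which word follows each word (a bigram model).
    nexts = defaultdict(Counter)
    for prev, cur in zip(corpus, corpus[1:]):
        nexts[prev][cur] += 1

    def predict(prev: str) -> str:
        """Return the statistically most likely next word."""
        return nexts[prev].most_common(1)[0][0]

    # Generate fluent-looking legal prose by pure pattern-matching.
    word = "the"
    output = [word]
    for _ in range(6):
        word = predict(word)
        output.append(word)
    print(" ".join(output))  # -> "the court held that the court held"

The output reads like plausible legal prose precisely because it is stitched together from frequent word sequences, not because the program understands anything about contracts or courts.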

"At a basic level, LLMs understand neither their input nor their output," the paper said, adding that they "do not know the meaning of words."

For attorneys, the paper argues, text is a means to an end, a vehicle for logical reasoning, whereas an AI cannot grasp the many abstract concepts underlying a word like "justice" the way it might relate the word "apple" to actual images and videos of a piece of fruit.

AI chatbots also lack common sense and will offer "hallucinated" arguments rather than none at all, a problem that has already seen attorneys reprimanded by judges in the U.S. since the emergence of ChatGPT.

Asked to argue why a subatomic particle could be president, one AI chatbot produced a confident dissertation in favor of electing the particle rather than simply rejecting the absurd premise, according to the paper.

"How could a system that does not understand language and lacks common sense augment legal work, not to mention replace lawyers?" the paper asked.

This doesn't mean AI and LLMs cannot be helpful to attorneys, however. As the paper notes, functions such as legal research, judgment prediction and text analysis could be handled by machines.

For LLMs to provide a tangible benefit to legal professionals, though, they must be enhanced with other techniques, Mik told Law360 Pulse, such as connecting them to reliable external knowledge bases rather than just search engines. The challenge remains ensuring that the AI draws on reliable information while generating answers.
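
One common approach to that kind of grounding is retrieval-augmented generation, in which passages from a vetted knowledge base are retrieved and injected into the model's prompt. The Python sketch below is a minimal illustration; the knowledge base entries and the call_llm() stub are hypothetical placeholders, not a real legal database or model API.

    # Hypothetical knowledge base; a real system would use a vetted,
    # citable legal database, not a hard-coded list.
    KNOWLEDGE_BASE = [
        "UCC 2-201: contracts for the sale of goods over $500 must be in writing.",
        "The limitation period for breach of a written contract is six years.",
    ]

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Rank entries by naive keyword overlap with the query."""
        words = set(query.lower().split())
        ranked = sorted(
            KNOWLEDGE_BASE,
            key=lambda doc: len(words & set(doc.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def call_llm(prompt: str) -> str:
        """Stand-in for a real model endpoint (hypothetical)."""
        raise NotImplementedError

    def answer(query: str) -> str:
        # Constrain the model to the retrieved sources to curb hallucination.
        context = "\n".join(retrieve(query))
        prompt = (
            "Answer using ONLY the sources below; "
            "say 'not found' otherwise.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}"
        )
        return call_llm(prompt)

Even with the prompt constrained to retrieved sources, every answer still requires human review, a point Mik returns to below.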

One technique highlighted by the paper, called reinforcement learning from human feedback, entails having users give the program a thumbs-up for successful responses, though that in itself raises an issue of bias.

Reinforcement learning from human feedback "leverages human preferences and aligns the model with the opinions of a specific group of people — not with any objective ground truth," the paper said.
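
To make that concern concrete, the sketch below shows the data-collection step behind reinforcement learning from human feedback in simplified form: thumbs-up and thumbs-down votes become preference pairs, and the resulting score reflects whatever a particular group of raters happened to prefer. The example pair and the scoring function are hypothetical stand-ins for the separate reward model that production systems actually train.

    from dataclasses import dataclass

    @dataclass
    class PreferencePair:
        prompt: str
        chosen: str    # response the rater gave a thumbs-up
        rejected: str  # response the rater gave a thumbs-down

    # Hypothetical feedback log from one group of raters.
    feedback = [
        PreferencePair(
            prompt="Is this clause enforceable?",
            chosen="Likely yes, under the plain-meaning rule, because...",
            rejected="Clauses are always enforceable.",
        ),
    ]

    def reward(response: str, pairs: list[PreferencePair]) -> int:
        """Crude stand-in for a learned reward model: the score reflects
        rater preference, not whether the response is legally correct."""
        return sum(
            (p.chosen == response) - (p.rejected == response) for p in pairs
        )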

Similar issues arise in simply feeding a model more training data, as "given their statistical nature, LLMs cannot differentiate between an informed academic treatise proposing law reform and an angry rant of a Reddit user who is enraged by the 'injustice of the system.'"

Still, while Mik points out the technology's inherent limitations, she encourages legal practitioners to experiment and figure out where it best fits within a larger system, ideally one combining multiple functions.

"LLMs could improve user interaction and, if integrated with a reliable knowledge base, improve basic legal question answering and information retrieval," Mik told Law360 Pulse. "The operative word in the previous sentence is 'basic.' Ultimately, they cannot replace lawyers and each and every output they generate must be reviewed."

She also warned against viewing generative AI as a quick fix for the access-to-justice gap that leaves many in the marketplace without legal support. Treating the technology as an easy, low-barrier solution could result in legal ramifications down the road.

"The worst-case scenario would be for people from disadvantaged backgrounds or junior professionals using LLMs to obtain information in such high-risk areas as law, medicine or finance," Mik said. "It is one thing to use LLMs to 'write' creative copy, it is another thing to use them to obtain factually correct advice."

--Editing by Karin Roberts.

