Judges On AI: How Judicial Use Informs Guardrails

(January 28, 2026, 4:07 PM EST) -- Do artificial intelligence tools have any practical judicial applications? In this Expert Analysis series, state and federal judges explore potential use cases for AI in adjudication and beyond. If you're a judge who would like to contribute to this series, email expertanalysis@law360.com.



Judge Maritza Dominguez Braswell

Artificial intelligence now touches almost everything, from routine emails to complex legal filings. For me, learning to use today's AI began as a curiosity, became a deliberate effort to build competence, and is now integral to my work — both as a judge and as a member of the broader legal community.

As a regular user, I have a sense of how generative AI tools behave, where they add value, where they introduce risk, and how they are reshaping the practice of law. At the same time, I'm realistic: These tools are changing daily, and some risks or pitfalls may not be apparent. That means I must approach each use with caution.

Why I Developed an AI Use Policy for Chambers

I created a set of guardrails for the use of AI in my chambers. Whether I like it or not, people are using AI everywhere, all the time. And unlike past technologies, which historically entered workplaces from the top down, generative AI has proliferated from the bottom up.

"Shadow users" appeared overnight across industries, quietly integrating AI into their work long before organizations had any policy in place. With hundreds of millions of active users worldwide, a "no use" policy is much riskier than a "limited use" policy.

We all know AI use can lead to hallucinations, subtle inaccuracies, mischaracterizations of legal standards and biased information creeping into the decision-making process. But when used responsibly, AI means opportunity.

These tools can streamline workflows, enhance clarity, increase access to justice and give us new ways of solving old problems.

I want an environment where my team can discuss AI use openly and often. This allows us to explore innovative applications while remaining vigilant, together. With more eyes and minds on the issue, we have a greater chance of flagging risks and keeping each other in check about where and how AI should — and should not — play a role.

Additionally, the more I use the technology myself, the better I am at spotting telltale signs of AI-generated filings. Knowing the rhythms of the model helps me recognize the off-notes: the subtly wrong sentence, the fabricated authority, and the analysis that feels too generic or too on-the-nose.

Although deepfakes are much harder to detect, I hope that increased familiarity with AI tools will, at a minimum, help me navigate those challenges when they arise.

How I Use AI, Personally and Professionally

In my personal life, I use AI all the time. I've used ChatGPT to plan travel, refine writing, create menus for family gatherings, organize holiday events, troubleshoot appliances and even come up with a prank or two for my teenagers. These everyday uses give me a low-stakes environment to understand the technology's strengths and limitations.

Professionally, I use AI most often for presentations. Over the last two years, I've prepared dozens of talks for judges, lawyers, students, bar associations and symposiums nationwide.

I routinely use generative AI to refine slides, summarize studies, test explanations and adjust tone for different audiences. It has become an invaluable tool for digesting dense material and presenting it in a way that is clear and engaging.

In my chambers, AI plays a more limited but still meaningful role. For example, I use it to prepare for hearings, particularly when a document custodian, digital forensics specialist or other technical expert will testify. AI helps surface technical details I might otherwise miss, helping me prepare and ask more productive questions.

Prompts matter here. I tell the model to act like a particular type of expert and I provide it with anonymized, high-level context that helps it provide tailored assistance. I also take care not to ask the AI to answer questions the expert is supposed to answer.

I simply want a primer that helps me better understand the testimony and argument at the hearing.

I also use AI to refresh my grasp of procedural rules or legal frameworks before hearings.

Distinguishing between high- and low-risk tasks is essential. High-risk tasks involve areas where my own knowledge is thin; there, the model could easily lead me astray. Low-risk tasks involve areas I know well, where misleading information is easier to spot.

I also avoid asking AI to independently retrieve rules or identify standards, because its "memory" in this respect can be unreliable.

Instead, I upload specific rule text or direct the system to focus on a defined standard. I've even uploaded our local rules to a project folder, so I don't have to repeat the process each time.

Equally important is what AI does not do in my chambers. AI does not evaluate evidence, draft judicial opinions or recommend outcomes. Those responsibilities remain firmly human.

I also prohibit the uploading of confidential and sensitive information into open systems. My policy also emphasizes the importance of judgment and human decision-making. Judicial officers are entrusted with applying independent judgment, and that judgment should not be outsourced.

Additionally, AI systems are hindered by biases that we can't always detect. Thus, while they can be powerful tools for surfacing perspectives, they should not shape outcomes.

I also bookend AI use with my own analysis and traditional legal research methods. This mirrors the way I interact with my law clerks. Before reviewing a clerk's bench memo, I study the issues myself and form a preliminary view.

I then use the clerk's memo and my discussions with them to test my and their assumptions and perspectives. I also verify information as necessary before landing on a decision.

In other words, their bench memo and our discussions are sandwiched between my own analysis and verification. AI is often in the same place: the middle, not at either end.

Why Engagement Matters to Me

Some of the judges I've spoken with are hesitant to use AI and express concern over confidentiality and accuracy. I share those concerns. But AI use is increasing exponentially, and it touches our courts whether we want it to or not.

Abstaining from AI will not insulate me; it will only make me less prepared to understand how it's shaping litigation, advocacy, the courts and our work.

AI is also reshaping society at large, and judges occupy a privileged position in this ecosystem: forming many of the guardrails that keep AI contained and safe. That role requires that we remain engaged and informed.

Looking Ahead

Tools like ChatGPT, Gemini, and the AI assistants built into Westlaw and Lexis are only the beginning. Research and writing with AI barely scratch the surface of what these systems can do.

I've used code generation platforms to prototype simple applications, document analysis tools to digest dense documents, and AI-driven image generators to create infographics. I explore emerging tools because the pace of change demands it.

Two years ago, in a room full of lawyers, only a handful used AI. Today, when asked if they use AI regularly, nearly every hand goes up.

The world is changing faster than we realize. Our responsibility as judicial officers is to keep up, stay alert and ensure that as AI evolves, we evolve with it. Put simply: We won't be able to contain AI if we don't understand it, and we won't understand it if we never use it.



Maritza Dominguez Braswell is a U.S. magistrate judge at the U.S. District Court for the District of Colorado. She also serves as a co-chair of the District of Colorado's AI Committee, and is a founding member of the Judicial AI Consortium.

Law360 is owned by LexisNexis Legal & Professional, a RELX company.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

For a reprint of this article, please contact reprints@law360.com.