Deepfake Evidence Battles May Exacerbate Justice Inequities

By Rebecca Delfino | July 18, 2025, 4:52 PM EDT

Editor's note: Law360 welcomes opinionated commentary and debate in our Expert Analysis "Perspectives" section, which covers access to justice issues. To submit op-eds or rebuttals, or to speak to an editor about submissions, please email expertanalysis@law360.com.

In May, the Maricopa County Superior Court witnessed a moment that may mark an inflection point in the legal system's relationship with artificial intelligence.

At the sentencing hearing for Gabriel Paul Horcasitas, who was convicted of killing U.S. Army veteran Christopher Pelkey in a road rage incident, Pelkey's family played a nearly four-minute video in which Pelkey appeared to speak directly to his killer.

The video, created using generative AI tools, animated a still image of Pelkey and used his voice profile to deliver a posthumous victim impact statement.

Although the video included a disclaimer that the likeness was AI-generated, the emotional power of the presentation was undeniable. The courtroom fell silent. Some wept. The judge watched intently. The defense did not object, but later claimed the video contributed to a harsher sentence.[1]

This extraordinary moment offers a preview of a new frontier in legal proceedings: the era of AI-generated audiovisual evidence.

Although AI offers new expressive tools, it also risks exacerbating long-standing disparities in access to justice. This is especially so with regard to deepfakes — audio and video manipulated using machine learning to appear authentic — which are rapidly entering courtrooms. As AI-generated evidence becomes more common in litigation, parties without financial resources may be unable to challenge or verify it.[2]

To ensure justice is available to everyone — not only to those who can afford it — we must reexamine how the costs of litigating deepfake evidence are allocated.[3]

The Deepfake Dilemma and a New Inequality in the Courts

As the Pelkey case illustrates, deepfakes in legal proceedings are no longer theoretical.[4] From criminal prosecutions to civil lawsuits, litigants may increasingly offer deepfake content as evidence or object to audiovisual material by claiming it is a deepfake.[5]

The problem from an access-to-justice perspective? Proving whether a video is a deepfake requires expensive digital forensic analysis. The cost of engaging an expert can run into tens of thousands of dollars. For indigent defendants or civil litigants of limited means, these costs are simply out of reach.[6]

Take the 2021 case of Raffaela Spone, the Pennsylvania mother — dubbed the "deepfake cheer mom" — accused of creating AI-manipulated videos to harass her daughter's rivals.[7] She denied the charges, but lacked the resources to fund a forensic examination of the evidence.

Her case attracted media attention, and only then did a private firm volunteer pro bono services to analyze the digital files. Without that intervention, she might have been convicted based on unvetted evidence.[8]

Deepfake claims are also raised by lawyers invoking the "deepfake defense," an assertion that seemingly genuine evidence has been digitally falsified.[9] For example, before defendant Guy Reffitt was convicted in the U.S. District Court for the District of Columbia of five counts stemming from the Jan. 6, 2021, insurrection at the U.S. Capitol,[10] his defense attorney claimed that incriminating video evidence was a deepfake.[11]

In Huang v. Tesla Inc., a wrongful death suit in the Santa Clara County Superior Court,[12] defense attorneys in 2023 also invoked the possibility that recorded statements by Elon Musk were deepfakes.[13]

These examples demonstrate how easily growing public awareness of deepfakes could be exploited to evade accountability.

As digital audiovisual evidence becomes more complex and central to litigation, the cost of authenticating it threatens to place justice even further out of reach. Litigants of limited means may be forced to abandon valid claims, settle unfairly or forgo defenses altogether — risking wrongful convictions or the inability to use authentic but challenged evidence.[14]

Access to justice in the U.S. is already profoundly unequal, and deepfake technology may deepen that divide. Detection tools require specialized expertise and remain inaccessible to most litigants, creating a two-tiered reality: those who can afford digital verification, and those left guessing.

Little Relief Offered by Existing Legal Frameworks 

Under the so-called American Rule, each party in litigation bears its own costs and attorney fees unless a contract, statute or court rule provides otherwise.[15] Although some fee-shifting statutes allow prevailing parties to recover attorney fees or costs, few authorize reimbursement of expert witness expenses, especially those related to digital forensics.[16]

The U.S. Supreme Court's 1987 decision in Crawford Fitting Co. v. J.T. Gibbons Inc. limited the recovery of expert witness fees to nominal attendance and travel expenses, unless Congress has explicitly authorized more.[17]

Even when a court appoints an expert under Rule 706 of the Federal Rules of Evidence, the rule contemplates only neutral assistance to the court — not party-retained experts necessary for evidentiary disputes.[18] Furthermore, the cost of court-appointed experts is typically split between the parties, and financial hardship is rarely a basis for avoiding payment.[19]

Contingent fee agreements and third-party litigation funding also fail to bridge this gap. Most such arrangements cover legal representation, not out-of-pocket expenses such as forensic testing.[20]

Public legal aid organizations may lack the necessary resources or technical expertise to support these cases, and there are currently no government programs designed to subsidize forensic expert services in civil litigation.[21]

This means that, under current law, courts have limited flexibility to address the financial barriers that litigants face in mounting or rebutting deepfake evidence.[22] The result is an asymmetry: a courtroom where the authenticity of evidence may go unchallenged not because it is reliable, but because one side cannot afford to test it.

A Newsworthy Warning: The Pelkey AI Video

The emotional power of deepfakes further complicates matters. Visual media activates cognitive and emotional responses more intensely than text or oral testimony.[23]

The courtroom debut of the Pelkey AI video highlights these dynamics. The video was deeply moving, and even though the victim's family was transparent about its artificial origin, criminal defense attorneys and public defenders raised serious concerns.[24]

What if the video had asserted a specific sentencing recommendation or recounted contested events? AI avatars might say things the real person never would, making it hard to separate sentiment from substantiated fact. Indeed, the AI-generated message, which appeared to express forgiveness and personal sentiments, may have influenced sentencing, even though courts cannot verify the deceased's actual views.[25]

Digital forensic experts warn that current detection tools are rapidly becoming obsolete. As generative AI tools improve, distinguishing between real and fake content becomes nearly impossible without expensive analysis. And in an emotionally charged setting, such as a sentencing hearing, these tools may be decisive.[26]

While the Pelkey video was not challenged on authenticity grounds, the incident foreshadows a troubling future: one where deepfake evidence is admitted into court without meaningful adversarial testing, because the party seeking to challenge it cannot afford to do so.[27]

A New Cost-Allocation Proposal

To preserve access to justice in the era of deepfakes, we must rethink how the legal system allocates the costs of litigating complex digital evidence. A new framework is warranted: The party that introduces a deepfake, or alleges that evidence is a deepfake, should presumptively bear the initial costs associated with the development and presentation of the evidence, including expenses related to digital forensics, image authentication and any related evidentiary hearings.[28]

However, this presumption should be rebuttable. If the proponent of the deepfake evidence — or the party asserting the deepfake defense — can demonstrate both financial hardship and a good faith basis for their claim, then the court should have discretion to shift those costs to the opposing party or appoint a neutral expert at the court's expense.[29]

This principle — the "deepfake proponent pays" — is rooted in existing procedural and equitable doctrines. It mirrors the logic behind Rule 706, which permits court-appointed expert witnesses.[30] It also reflects practices in some family law contexts, such as Section 2030 of the California Family Code, where courts are empowered to allocate legal costs to ensure a fair proceeding.[31]

The approach aligns with the equitable principles embedded in procedural rules and fee-shifting statutes, which are designed to protect access to justice.[32] Critically, this principle also encourages litigants to exercise caution before asserting that an audio or video is fake. It deters frivolous deepfake defenses. And it ensures that the party with the best information and control over the evidence bears the responsibility to establish its veracity.

Why It Matters

Courts have long relied on the adversarial process and evidentiary rules to discover the truth. But when only one party can afford to engage in that process meaningfully, the legitimacy of outcomes is undermined.[33]

As noted, the current cost-allocation regime threatens to create a two-tier system of truth.[34] Even more troubling, the ease with which parties can assert deepfake claims may embolden bad faith tactics. If lawyers can weaponize deepfake claims — or real deepfakes — without fear of meaningful scrutiny, false narratives may flourish.[35]

The result: wrongful convictions, frivolous lawsuits and a justice system awash in synthetic uncertainty. Reallocating costs is no panacea, but it gives courts a tool to promote fairness without requiring universal public funding or pro bono support.[36]

Implementation and Reform

To operationalize this framework, modest reforms could be implemented through rulemaking, legislation or judicial interpretation. First, Congress should amend Title 28 of the U.S. Code, Sections 1920 and 1821, to permit recovery of expert witness fees related to AI evidence or digital forensic analysis.[37]

Congress should also revisit the rule from the Crawford decision to allow for more flexible recovery of expert costs, especially in cases involving novel or high-stakes technological disputes.[38]

In addition, states could enact provisions modeled on California's family law fee-shifting statute to give judges more discretion in allocating costs based on fairness and financial need.

Second, federal courts should more readily use Rule 706 to appoint neutral experts in cases involving AI-manipulated evidence, and state courts should interpret their state's procedural rules to authorize need-based cost-shifting in evidentiary matters.

At a minimum, all courts should encourage early disclosure of any AI-generated evidence, especially deepfakes, and consider holding pretrial hearings to determine authenticity.

Finally, judges should be educated on the capabilities and limitations of generative AI. Bench cards and evidentiary checklists for AI-generated content could improve transparency and consistency.

Bar associations and judicial education bodies can help by providing training and best practices for handling deepfake evidence, and bar regulators and courts should continue to hold lawyers to ethical standards when they present such evidence.[39]

The Stakes for Legal Integrity

The rise of deepfakes in litigation is not a future problem. It is a current one, already reshaping how evidence is presented, perceived and contested. As the Pelkey sentencing hearing demonstrated, AI-generated videos can move judges and juries. But they can also distort proceedings if not properly scrutinized.

The legitimacy of our judicial system depends on fair process and truthful evidence. Deepfakes threaten both. If judges and jurors cannot trust what they see and hear — or if litigants cannot afford to test what they're shown — the fact-finding function of our courts is imperiled.

In a legal system premised on fairness and due process, access to justice is not only about having a lawyer. It's about being able to prove the truth. Access to truth cannot be reserved for only those who can afford it. If we fail to address the economic realities of deepfake litigation, we risk creating a justice system that is both emotionally persuasive and procedurally hollow.

To prevent that, we must act now — before AI closes the courthouse doors to all but the wealthy.



Rebecca A. Delfino is an associate professor of law, and a former associate dean for clinical programs and experiential learning, at Loyola Law School, Los Angeles.

"Perspectives" is a regular feature written by guest authors on access to justice issues. To pitch article ideas, email expertanalysis@law360.com.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] Id.

[2] Rebecca A. Delfino, Pay-to-Play: Access to Justice in the Era of AI and Deepfakes, 55 SETON HALL L. REV. 789, 791–92 (2025).

[3] Id. at 838–40.

[4] Although the Pelkey video was AI-generated and qualifies as a deepfake, Mr. Pelkey's family created it for expressive purposes, not to deceive the court. Unlike malicious deepfakes used to fabricate evidence or impersonate others, this example showcases the emotional and narrative power of AI-generated content, rather than its weaponization. Nonetheless, its impact underscores how even well-intentioned deepfakes can influence legal outcomes.

[5] Delfino, supra note 2, at 795–96; see, e.g., Matter of Gabriel H., 229 A.D.3d 1048, 215 N.Y.S.3d 613 (4th Dep't 2024) (rejecting defendant's claim that a video showing him abusing the victims was a deepfake).

[6] Delfino, supra note 2, at 800–01.

[7] Kim Bellware, Cheer Mom Used Deepfake Nudes and Threats to Harass Daughter's Teammates, Police Say, WASH. POST (March 13, 2021, 8:16 PM), https://www.washingtonpost.com/nation/2021/03/13/cheer-mom-deepfake-teammates/.

[8] Delfino, supra note 2, at 791; Drew Harwell, Remember the 'Deepfake Cheerleader Mom'? Prosecutors Now Admit They Can't Prove Fake-Video Claims, WASH. POST (May 14, 2021, 9:19 PM), https://www.washingtonpost.com/technology/2021/05/14/deepfake-cheer-mom-claims-dropped.

[9] Rebecca A. Delfino, The Deepfake Defense—Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers, 84 OHIO ST. L.J. 1068, 1072–76 (2024).

[10] Reffitt was subsequently pardoned by President Donald Trump on Jan. 20, 2025.

[11] Delfino, supra note 9, at 1068–69.

[12] The case was ultimately settled on the eve of trial.

[13] Shannon Bond, People Are Trying to Claim Real Videos Are Deepfakes. The Courts Are Not Amused, NPR (May 8, 2023, 5:01 AM), https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused.

[14] Delfino, supra note 2, at 802.

[15] Alyeska Pipeline Serv. Co. v. Wilderness Soc'y, 421 U.S. 240, 247 (1975).

[16] Delfino, supra note 2, at 812–13.

[17] Crawford Fitting Co. v. J.T. Gibbons, Inc., 482 U.S. 437, 445 (1987).

[18] Fed. R. Evid. 706.

[19] Delfino, supra note 2, at 822–23.

[20] Delfino, supra note 2, at 825–26.

[21] Delfino, supra note 2, at 827–28.

[22] Delfino, supra note 2, at 838.

[23] Rebecca A. Delfino, Deepfakes on Trial 2.0: A Revised Proposal for a New Federal Rule of Evidence to Mitigate Deepfake Deceptions in Court, Loyola L. Sch., L.A., Legal Studies Research Paper No. 2025-10 (2025), https://ssrn.com/abstract=5188767.

[24] Martinson, "AI Video Pushes Boundaries of Victim Impact Statements."

[25] Id.

[26] Delfino, supra note 2, at 800–01; Delfino, supra note 23; Martinson, supra note 24.

[27] Delfino, supra note 2, at 808.

[28] Id. at 838–39.

[29] Id.

[30] Fed. R. Evid. 706.

[31] Cal. Fam. Code § 2030 (Deering 2024).

[32] Delfino, supra note 2, at 836–37.

[33] Id. at 807.

[34] Id. at 802.

[35] Id. at 810.

[36] Id. at 840.

[37] Id. at 841.

[38] Id. at 842–43.

[39] Id.