Analysis

In The World Of Deepfake Porn, Tech Moves Faster Than Law

(October 31, 2025, 5:21 PM EDT) -- In October 2023, when New Jersey student Francesca Mani discovered her classmates had used an app to generate nude deepfakes of her and other girls, she and her mother confronted her high school and found no relevant law, no clear punishment for perpetrators, and little recourse for victims.


New Jersey Gov. Phil Murphy shakes hands with student advocate Francesca Mani after signing the state's deepfake ban into law in April. Mani, whose case inspired the legislation, joined the governor at the State House ceremony.

What followed helped spark legislation in New Jersey that pairs criminal penalties with civil remedies, part of a national reckoning over artificial intelligence's misuse.

The case out of Westfield High School in northern New Jersey shows how quickly AI harms can outpace existing rules, forcing schools, prosecutors and lawmakers to improvise. It's a collision between fast-moving technology and slow-moving institutions: schools and courts never built to keep pace with online platforms, leaving victims unprotected in the chaotic hours after an image spreads.

At stake is more than school discipline or social media policy: it's a measure of how a society built on proof and responsibility responds when anyone can fabricate a convincing fake picture or video in seconds.

What Is a "Deepfake"?

Deepfakes are extremely realistic AI-generated or digitally altered images, audio or video that make it appear someone said or did something they never actually did. The term surfaced in 2017 on Reddit, when users posted fake pornography of celebrities, according to media reports.

Today, anyone can use "nudify" tools to take an ordinary school portrait and use it to fabricate sexually explicit images in seconds. While the same underlying technology can be used for legitimate research and media that isn't sexual in nature, the use of these tools to create nonconsensual sexual imagery has become one of the most urgent ethical and legal issues of the AI age.

Mental health professional Alessandra Kellermann, a Westfield resident, reached out to the Manis in the immediate wake of the deepfakes and attended meetings that Francesca's mother, Dorota Mani, hosted in her home with concerned community members and families of the other victims.

Kellermann, who counseled Francesca in the early months after the incident, urged abandoning the term "deepfake."

"Calling it 'fake' tells teens it didn't really happen," she said. "Clinically, we call this image-based sexual abuse because the impacts mirror sexual abuse: more often than not triggering anxiety, depression, PTSD, even suicidality." She also cautioned against demonizing AI itself.

As founder of Homefront Hugs, a nonprofit supporting military families and veterans, Kellermann has seen AI used for good, from early cancer detection to aiding in the creation of prosthetic technologies that restore speech and mobility.

"There's enormous good in AI," she said. "The key is drawing clear lines around consent and misuse, so the technology is used to heal and empower, not to exploit."

Limited Options for Victims

The incident at Westfield High School exploded into public view Oct. 20, 2023, when Principal Mary Asfendis sent a disturbing email to parents.

The email alerted parents to "a very serious incident" in which students had "used Artificial Intelligence to create pornographic images from original photos." Asfendis wrote that administrators were investigating, police had been notified and counseling was being offered to affected students.

Asfendis added that the school would contact families once the investigation concluded. The images circulated on Snapchat, though investigators never confirmed how many were made or shared.

"I was frustrated, angry, and honestly hurt, and I realized that if I wanted to see real change, it had to start with me, even if I had to start alone," Francesca, a 14-year-old freshman when the deepfakes circulated, recalled in a written statement her mother released to Law360.

Neighbor and attorney Bill Palatucci of McCarter & English LLP was one of the first people Dorota Mani contacted in October 2023, shortly after Francesca discovered the pornographic media made with her image.

"She called me in a panic, not knowing whether this was a police matter [or] a school matter," he said. Palatucci would play a key role in the public debate that followed.

Kellermann said, "Our town was quietly divided. Some people told me this was just 'boys being boys.' That reaction minimizes real harm. These are image-based sexual abuses, not pranks."

All the victims were girls, but it's unclear how many were affected — the information is protected under Title IX rules, the federal civil rights law that prohibits sex-based discrimination in education programs or activities receiving federal funding.

When families learned about the deepfakes, they had few legal options. Section 230 of the 1996 federal Communications Decency Act shields tech companies from liability for user-generated content, making it difficult to sue the app developer.

Within the school system, Westfield's policies governed the response. Under Policy 5751 and Regulation 5751, the district must promptly investigate any report of sexual harassment, offer "supportive measures" such as counseling or schedule changes, and ensure an impartial process. Under Westfield's Title IX and Code of Conduct policies, students found responsible for sexual harassment could face disciplinary sanctions, up to suspension or expulsion, determined by administrators in consultation with the Title IX coordinator, according to district regulations.

The school system's Anti-Bullying Policy 5512 likewise covers electronic acts that substantially disrupt school operations; discipline can range from counseling to suspension depending on severity.

In practice, school administrators were left to navigate overlapping frameworks — Title IX, sexual harassment policy and antibullying rules — for a new kind of digital misconduct.

Francesca put it bluntly: "When I brought it to my [school] administration, the incident was brushed off as 'widespread misinformation,' and the response was limited to counseling sessions. It felt like a slap in the face."

Scott + Scott Attorneys at Law LLP partner Sindhu Daniel, who represented Francesca and several other girls in Title IX complaints against the school, recalls the challenges families faced.

"At the time, we had very limited legal recourse available to us," Daniel said.

"Section 230 made it nearly impossible to hold the apps or the platforms where these apps could be purchased and downloaded accountable, and existing laws at the time simply weren't designed to address this type of harm."

Daniel filed Title IX complaints individually on behalf of the victims, including for Francesca, on April 4, 2024. She added that some parents opted to file a federal case against the alleged creators of the deepfakes before U.S. District Judge Esther Salas, though it predates the state's law on deepfakes and is being litigated anonymously.

Dorota Mani said the school found one student responsible but gave only minimal punishment.

"Some administrators told us he was suspended for one day, others said two," she recalled. "Yet he still played sports and represented the school."

Daniel added that the lack of accountability compounded the trauma: "These girls had to see the student who harmed them in the hallway every single day. It's like forcing a victim of assault to share a classroom with their attacker."

The Mani family ultimately chose not to bring a civil lawsuit.

Westfield Public Schools declined to comment.

Local coverage at the time noted that the district declined to disclose any disciplinary action, citing student privacy rules. The Associated Press reported that Westfield officials "did not confirm any punishments," while The Guardian wrote that the school "has not publicly stated how many students were affected or what consequences were imposed."

A Tale of Two Schools

Dorota Mani said the Westfield case drew national parallels almost immediately. Around the same time, she noted, administrators in the Beverly Hills Unified School District in California faced an almost identical scandal involving AI-generated sexual images of students. But their superintendent "stood up," she said, and expelled the students responsible despite pushback from parents.

"It's literally apples to apples," Dorota Mani said. "You had two affluent, educated towns — ours just swept it under the carpet, and Beverly Hills did what was right to support its girls."

Westfield, a town of about 30,500 residents, has a median household income of $212,700, more than double the statewide median, according to the U.S. Census Bureau.

In letters sent to parents, administrators from the Beverly Vista Middle School said students had created and shared AI-generated nude images of classmates, and the school pledged "the most severe disciplinary actions allowable under California Education Code," including possible expulsion.

California's Education Code gives administrators clearer authority to suspend or expel students for sexual harassment, even when the conduct occurs online. New Jersey's rules, by contrast, emphasize investigation and supportive measures under Title IX procedures, leaving discipline largely to local discretion.

"This behavior is unacceptable and does not reflect the values of our school community," Principal Kelly Skon and Superintendent Michael Bregy wrote at the time, adding that the Beverly Hills district was working closely with police.

The district also announced new digital citizenship lessons and warned that "any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions."

Charles Gelinas, who serves as a Westfield school board member and as co-chair of New Jersey Gov. Phil Murphy's Commission on the Effects of Social Media Usage on Adolescents, said administrators in Westfield "acted as quickly as they could with the tools they had," but acknowledged schools and police "aren't equipped to be cyber-sleuths."

A Family Turns Pain Into Advocacy

Palatucci, the attorney and neighbor whom Dorota Mani had called soon after the discovery of the deepfake images, connected the family with several New Jersey lawmakers, including state Sens. Kristin Corrado, R-Passaic, and Jon Bramnick, R-Union, and U.S. Rep. Tom Kean Jr., R-N.J., catalyzing the push that ultimately produced New Jersey's deepfake ban. The Manis' advocacy on the matter sparked national media coverage.

The Manis hosted meetings from their home in the later months of 2023 with the lawmakers, local officials, police and families of other victims in an effort to have their plight heard.


Francesca Mani visits Capitol Hill in Washington, D.C., in early 2024 to advocate for federal protections against nonconsensual AI-generated imagery. (Courtesy of Dorota Mani)

"Bills rarely move that fast," Palatucci said. "But because of Francesca's involvement, it got done."

Palatucci, a father of three daughters, said the Manis' response set a national example.

"They're a great illustration of taking a negative situation and turning it completely on its head," he said.

New Jersey's Deepfake Law

Signed by Murphy on April 2, New Jersey's deepfake statute, A.B. 3540/S.B. 2544, creates civil and criminal liability. Producing or using a deepfake to commit another unlawful act, such as harassment or election interference, is a third-degree crime punishable by up to five years in prison and $30,000 in fines. Knowingly or recklessly distributing deceptive media is a fourth-degree crime carrying up to 18 months. Sentences for third-degree violations must run consecutively to any underlying offense.

The law also establishes a private right of action, allowing victims to sue for actual and punitive damages, attorney fees, and other costs, a civil remedy that lawmakers said was designed to ensure "aggrieved victims ... may seek appropriate recompense," according to the legislation.

Palatucci said that while he applauds the new law, he wishes it were stronger.

"The governor's conditional veto added a high bar for proving intent," he said. "But I think students like the ones who did this to Francesca knew exactly what they were doing."

Murphy conditionally vetoed the original version of the bill, warning in a March 17 statement from his office that its broad language could "chill protected speech" and requiring lawmakers to narrow it to intentional and harmful uses of synthetic media.

Lawmakers accepted the governor's recommendations and amended the statute before enactment, narrowing its scope and clarifying that liability requires proving a person knowingly used or distributed deceptive media for unlawful purposes.

As of October, a review of New Jersey court filings and public reporting shows no record of criminal or civil cases brought under the state's new deepfake statute. That absence appears to reflect the law's recent enactment and the high bar for proving intent; attorneys expect the first test cases to emerge once prosecutors or plaintiffs identify clear instances of knowing misuse.

Deepfakes on the National Stage

High-profile figures have also been targeted. In February 2024, U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., came across a pornographic deepfake of herself while scrolling the X platform as she rode in a car with aides. She later described the image as "triggering" and said it resurfaced past trauma, even though she knew it was fabricated, according to media reports at the time.

Sen. Amy Klobuchar, D-Minn., later revealed she too had been victimized, helping build congressional support for the Take It Down Act, a bipartisan law passed in May that criminalizes the nonconsensual publication of intimate images, including AI-generated ones, and requires platforms to remove such content within 48 hours of notice.

Ocasio-Cortez also backs the proposed Defiance Act, which would broaden the legal avenues for victims of nonconsensual deepfake imagery to sue those who create or distribute it. The bill enjoys bipartisan support but has stalled. While victims can already bring claims under privacy or defamation law, the measure aims to streamline those cases and fill gaps left by inconsistent state statutes.

The Defiance Act, short for the Disrupt Explicit Forged Images and Non-Consensual Edits Act, originally passed the Senate by voice vote in July 2024 but died in the House. It was reintroduced in May by Sen. Dick Durbin, D-Ill., and referred to the Senate Judiciary Committee, where it remains pending.

Even global celebrities have been victimized. In 2024, Houston-based rapper Megan Pete, better known as Megan Thee Stallion, sued blogger Milagro "Gramz" Cooper in Florida federal court, alleging that he harassed her and distributed a pornographic deepfake to defame her. The rapper alleges that Cooper acted on behalf of her ex-partner, the rapper Tory Lanez, who was convicted in 2022 of shooting her in the foot after an argument.

A federal judge later issued a gag order restricting Cooper's online commentary to prevent further harm. The rapper also sought a restraining order alleging Lanez was coordinating online attacks from prison, recruiting third-party influencers to discredit her. The case has drawn national attention as one of the first celebrity-driven defamation suits explicitly citing AI-generated sexual imagery.

State Laws Leading the Way

The debate over synthetic media has entered national politics. In September, President Donald Trump circulated AI-altered videos depicting political opponents and protesters in degrading and racially caricatured ways. The posts drew condemnation from civil rights groups and some lawmakers, who said they illustrated how AI tools can amplify harassment and hate speech.

As Congress remains divided on how to regulate such material, states have begun moving faster than Washington, D.C., to enact their own laws.

In addition to New Jersey, states such as California, Utah, Michigan, Minnesota, Virginia and Texas have expanded or adopted laws governing deepfakes, particularly in the realm of elections and sexual imagery. As of October, at least 26 states now restrict or require labeling for AI-generated content in political advertising, typically focusing on disclosure and timing, according to the National Conference of State Legislatures.

Several of these laws have already faced constitutional challenges. In Minnesota, X Corp. sued in April to block the state's deepfake election law, arguing it violated the First and 14th Amendments.

In California, a federal judge struck down A.B. 2655 in August, siding with content creators and platforms that argued the law chilled protected political speech and finding it preempted by Section 230. As of October, there is no publicly reported appellate filing in the case, leaving the statute effectively blocked for now. Meta, Google and Epic Games warned that the law would force broad age-verification systems and chill online speech. 

Together, these measures form a patchwork that political campaigns, advertisers and platforms must navigate carefully. Noncompliance can lead to takedown demands, civil or criminal penalties or reputational harm, and courts are still determining how far states can go in policing synthetic media without infringing on free-speech rights.

Legal Hurdles and Future Cases

In Houston, attorney Tony Buzbee represents an anonymous TikTok influencer whose likeness was allegedly used without consent to create deepfake nudes. A prosecutor has already filed felony online-impersonation and misdemeanor distribution charges against the man accused of making the fake images, Jorge Abrego.

Legal counsel for Abrego could not be identified by the time of publication.

"There will definitely be a civil action, but I don't want to interfere with the DA," Buzbee said. "This case will set precedent. I hope it discourages this egregious conduct, raises awareness, and educates law enforcement about what is going on."

Lawyers also note that many app developers operate overseas, complicating enforcement. Gelinas, the Westfield school board member, likened enforcement to "a game of whack-a-mole."

"These app developers can vanish overnight, reconstitute overseas, and meanwhile the content is already viral. Section 230 shields platforms, app creators disappear, and victims are left without recourse."

He pointed to international models like the U.K.'s Children's Code and California's Age-Appropriate Design Code Act, which emphasize privacy-by-default design, though U.S. enforcement has been slowed by constitutional challenges.

Under the U.K.'s Children's Code, major apps such as YouTube, TikTok, Instagram and Snapchat introduced tighter privacy defaults for minors: disabling targeted ads, turning off location tracking and limiting nighttime notifications, according to guidance from the U.K. Information Commissioner's Office.

While the code does not directly regulate deepfakes, Gelinas said its "safety-by-design" approach offers a model for how platforms could reduce the risk of synthetic sexual imagery reaching or targeting minors.

California's Age-Appropriate Design Code Act, temporarily blocked from enforcement on First Amendment grounds, sought to impose similar design obligations on platforms "likely to be accessed by children," including social media, gaming and streaming apps. 

While states draft new legal codes targeting AI technology, examples of misuse continue to grow online. The National Center for Missing & Exploited Children reports a sharp rise in AI-related exploitation: from 4,700 reports in 2023 to nearly 67,000 in 2024.

"Kids have done absolutely nothing wrong, but their images are being pulled from yearbooks, social media, and made into sexualized deepfakes," said Yiota Souras, NCMEC's general counsel. "The sense of powerlessness and mental trauma can be profound."

John Shehan leads NCMEC's CyberTipline teams, which sift through tips from the public in search of credible reports of child sexual abuse material. Shehan said the organization's investigative team members now spend substantial time on AI exploitation: "We're forced to spend time and effort on material that may not even be real, while other kids in genuine danger need our attention."

What Comes Next

Gelinas emphasized that laws alone won't solve the problem.

"This is like the war on drugs," he said. "You can chase the suppliers forever, but what really matters is education, prevention and rehabilitation. Schools are the place to start; we need to teach kids what this technology can do, and why misusing it isn't just a prank but a real harm."

He said the Westfield case showed both the promise and limits of local institutions.

"The school acted quickly and the police responded, but none of us were equipped for what this was," he said. "We need new watchdogs, new expertise, and stronger bridges between policymakers, educators and platforms."

Kellermann, the Westfield mental health professional, had a similar view.

"Takedowns don't undo screenshots. Survivors live with the fear it resurfaces," she said. "That's why prevention, clear rules, education and early intervention matter as much as penalties."

In a statement released by her mother, Francesca said young people should remember that they are worthy of respect, and that if necessary, they should stand up for themselves. And she warned young people that misusing AI can bring serious consequences not just for victims but for perpetrators, too.

"The NJ AI law is a step forward, not just for protecting our images, but for teaching all of us that misuse of AI is illegal. Use AI wisely, as a tool to create, learn, and grow, not as a way to harm others or yourself."

--Editing by Orlando Lorenzo and Marygrace Anderson.

Law360 reporter Corey Rothauser continues to follow news about deepfake law. If you have a case or situation you would like to highlight, contact him at corey.rothauser@law360.com.

For a reprint of this article, please contact reprints@law360.com.