Interview

Fox Rothschild AI Chief Talks 'Terrifying' Deepfakes, Biased AI

Mark McCreary, the chief artificial intelligence and information security officer at Fox Rothschild, leads his firm's internal AI strategy and provides counsel to other law firms trying to bushwhack their path through the often murky AI legal landscape, rife with hallucinated case law citations and disturbingly real deepfakes.


Mark McCreary

In an interview with Law360, McCreary discusses his disagreement with the Fifth Circuit's proposed requirement for attorneys to disclose whether they used generative AI in a filing, why he thinks "the deepfake problem is much larger than anybody realizes," why employers should think twice before issuing a blanket ban on employee use of tools like ChatGPT, and the concerns he has about society's willingness to tolerate the dangers of generative AI.

"We tolerate a lot of bullshit with it. We tolerate that it's biased. We tolerate that it may make stuff up. And I have a lot of concern about how much we're tolerating," McCreary said.

This interview has been edited for length and clarity.

What challenges are you seeing clients navigating regarding AI policy?

We have a lot of clients questioning how they're going to control their workforce. They're really struggling with what their workforce is able to get their hands on. And even if you block ChatGPT, you then have the risk that the employee is going to take that company information to her personal device, which isn't controlled by the company, and start using it there.

So, that's possibly even a worse scenario because now you've got a copy of company data, law firm data, or whatever, sitting somewhere else.

And then, they're really struggling with, 'How do we invest our dollars into good products that are still going to be around in six months, are useful and are not smoke and mirrors?'

How are you seeing law firms navigating the adoption of AI tools?

Tentatively. I haven't seen anybody rush into it.

In my experience with law firms over my entire career, [the industry-wide discourse on AI] is the most collaborative thing I've been involved with. We all want to sit around and talk about our thoughts on it: 'How do we do this? How do we tackle this?' Nobody's viewing this as, 'I'm going to keep my cards close to the vest because I want to have a leg up on the law firm across the street.'


Some of the smaller firms – like very small solo practitioners, people who have enough going on that they can't really focus on the dos and don'ts or the rights and wrongs when it comes to AI – they're the ones making the mistakes. They're letting ChatGPT fake cases. They're taking what comes out without fact-checking it. I think that's probably more rampant than a lot of people realize.

But you don't see that from big firms. I'm sure there are black sheep running around here and there, but it's really the smaller firms. They just don't have the experts to worry about these things, so they go out and do it as lawyers – which is a recipe for disaster.

There are several [instances] where they've hallucinated cases. The one in New York, Mata v. Avianca, is the first and probably the most famous scenario [where a brief prepared by generative AI cited nonexistent case law attributed to real judges]. The second, and also very famous, scenario would be Michael Cohen, of Trump fame, who gave cases to his defense team that turned out to be fake and that he had generated from ChatGPT. That actually happened.

There's a lot more happening out there, because people don't appreciate just how much these AI tools – and I love these tools – make stuff up, how much they hallucinate.

What's your firm's AI and information security strategy?

This time last year, when everything started to pop with ChatGPT, we talked about what our options were. I came to the conclusion that I'm in a much better situation, as a firm, if I have a policy that says what the risks are, what you can do, what you can't do, what you need to be aware of.

That approach is so much better than coming out and just blocking ChatGPT, just telling lawyers: You can't do this. You can't use AI.

Because what's going to happen is – because you didn't train them on what the risks are and what they can and can't do and why it's bad and good – they are going to take data, they're going to go on their personal devices, and they're going to make mistakes.

I trust that approach, and it's worked well for us for the past year. We've had no issues.

What hopes do you have for AI regulation in the coming years?

There's not a lot of options out there. If you're going to have a federal regulatory approach to it, it's going to be focused on the publishers. They're going to say to – let's just again pick on ChatGPT – 'OpenAI, if you're going to have tools that use generative AI solutions in them, we need you to be able to tell us how the input happens, what data went into it, and really explain the technology.'

Unfortunately, where we [the legal industry] started out – with some of the courts [including the Fifth Circuit] issuing orders saying if you use AI in any filing you have to disclose that – well, that's silly in my opinion. Why do I have to disclose that, as long as I am still lawyering what comes out of it?

I think we're going to get away from that disclosure-to-the-court kind of approach, and instead it's going to be ethical guidelines: remove bias, make sure you're still reviewing everything, check the accuracy, talk to your clients.

Why do you think the Fifth Circuit issued that AI standing order?

I think their hearts, or their minds, were in the right place.

This is happening quickly. They want to get in front of it – which I applaud. I think it was a reaction to [the fake AI-generated opinions in] Mata v. Avianca in New York. I also think it was partly [the Fifth Circuit judges] not really understanding the technology.

I guarantee there are more lawyers out there copying and pasting links from different online resources that may not be reliable than there are people using ChatGPT and doing the same thing. But they're [the Fifth Circuit] not worried about Googling. Suddenly, they're just worried about ChatGPT. I think it was an overreaction, at least in the approach that was taken.

I think the correct approach is what we've seen in Florida, New Jersey and some other states that have published their guidelines recently, which is really, 'Hey, lawyers, we're not saying don't do it, but we're saying you have always had ethical obligations to make sure that your work product is accurate, ethical and free of bias, and that you're responsible for what ultimately becomes your work product. Don't let AI replace those responsibilities.'

I think that's the message they're trying to get across. I think that's the right answer.

How are you feeling about the future of AI?

I'm excited for it. I think it can be really positive for us. I really do, but I have real concerns. Absolutely.

I think the deepfake problem is much larger than anybody realizes, and I don't see a good solution for that.

As much as we talk about it, as much as we realize the concerns, we tolerate a lot of bullshit with it. We tolerate that it's biased. We tolerate that it may make stuff up, and I have a lot of concern about how much we're tolerating. I don't know how you legislate around that. You probably can't even make it illegal for it to lie to you, because that's part of the technology. But we are tolerating that happening to us, and, you know, deepfakes might be a good example of how far it can go.

It's terrifying, frankly, how easy it is. If you have a video of yourself giving a 10-minute speech on YouTube, this technology can make you say anything in the world, and it's convincing.

There's a lot of cultural and financial divide in this country. AI is only going to make that worse, and that's probably true on a country level too. The countries that can get away with using AI more efficiently are going to continue to surpass those countries that cannot.

People who can use AI to benefit themselves will.

I think we're kind of responsible for figuring out how to make that less of an impact, and nobody's talking about that.

--Editing by Peter Rozovsky.


