Episode 7: The Lawyer in Your Computer

From courtroom apps to analyzing legal texts, UVA Law professor Michael Livermore explains how technology is reshaping legal processes and yielding new insights.

Show Notes: The Lawyer in Your Computer

Michael Livermore

Michael Livermore is a professor of law at the University of Virginia, teaching administrative law, environmental law, and regulatory law and policy. He is a pioneer in the application of computational tools in legal studies, enabling a new methodological approach to evaluating legal texts. To that end, Livermore regularly engages in interdisciplinary projects with researchers in fields ranging from economics to computer science and neurology. He is also a leading expert on the use of cost-benefit analysis to evaluate environmental regulation, co-authoring “Retaking Rationality: How Cost-Benefit Analysis Can Better Protect the Environment and Our Health” and co-editing “The Globalization of Cost-Benefit Analysis in Environmental Policy.”

Before joining the faculty, Livermore was the founding executive director of the Institute for Policy Integrity at New York University School of Law, his alma mater. Following law school, Livermore was a fellow at NYU’s Center on Environmental and Land Use Law, then clerked for Judge Harry T. Edwards of the U.S. Court of Appeals for the D.C. Circuit. In one of his latest articles, “A Quantitative Analysis of Writing Style on the Supreme Court,” Livermore analyzes the text of past U.S. Supreme Court opinions to discern patterns in the institution’s behavior and their implications.

Listening to the Show

Transcript

[MUSIC PLAYING]

RISA GOLUBOFF: Hello. And welcome to Common Law, a podcast from the University of Virginia School of Law. I'm Risa Goluboff, the dean.

LESLIE KENDRICK: And I'm Leslie Kendrick, the vice dean.

RISA GOLUBOFF: So you might have heard in our last episode, on this season about the future of law, how autonomous vehicles are changing the face of traffic laws, and how liability gets imposed. And autonomous vehicles are a pretty visible example of new technology changing the law. Today, we're going to talk about a less visible example. And that is how digital technology is changing, actually, the way lawyers work, and the way people access legal information.

LESLIE KENDRICK: That's right. We're going to be talking about big data and the cutting edge tools that allow computers to crunch massive amounts of information like never before. And that has implications in all sorts of areas, including legal practice and legal theory.

RISA GOLUBOFF: Absolutely. And so here to help us talk about this today is Michael Livermore. He's a professor here at UVA, who teaches administrative and environmental law.

LESLIE KENDRICK: Mike has also gotten interested in the power of computers to unlock human data, to reveal insights on law. One important part of that area of expertise is an online workshop that he leads on the computational analysis of law. Welcome, Mike.

MICHAEL LIVERMORE: Well, thanks very much. It's a pleasure to be here.

LESLIE KENDRICK: Yeah, we're so excited to have you.

RISA GOLUBOFF: It's great to have you. Why don't we start there, on that online workshop? The computational analysis of law-- what is that?

MICHAEL LIVERMORE: Yeah, that sounds super boring, right?

RISA GOLUBOFF: I think it sounds fascinating.

LESLIE KENDRICK: I think it sounds interesting. But I don't know what it is.

MICHAEL LIVERMORE: Right. So what we're doing in the workshop is research that uses advanced computational tools to study the law as a phenomenon. And what I mean by that is the law as text, right? So lawyers know this. I think everybody knows this-- that the law always shows up as text. That's what law is, you know?

Think back to the very first laws ever in the history of the world. It's chiseled on stone, some text, right? And what's happened in the last 15, 20 years is that the tools that we have to analyze text have really become much more sophisticated. Think about it: you talk to your phone, and Siri can understand what you're saying, right? That's just a simple manifestation.

But especially with text, we can get all kinds of interesting statistics. We can take a body of text that we know nothing about and automatically extract its content into different categories. So there's all kinds of cool tools.

And what we do in this workshop is basically apply those tools to the law to see what we can learn about legal development, how law changes, what the law looks like in basically every domain imaginable. So it's super eclectic in terms of the topic matter. But we're using these new techniques to try to learn things.

LESLIE KENDRICK: Can you give us some examples of some of the topics? Maybe that will give us a way into how this technology actually works and helps us understand the law.

MICHAEL LIVERMORE: Sure. So I'll talk about my papers because I'm a law professor. So I have to do that.

LESLIE KENDRICK: Yeah. Yeah.

MICHAEL LIVERMORE: So in one paper and, actually, a few different papers, we look at the Supreme Court. It's interesting, I mean, obviously, it's something lawyers care about. It's the US Supreme Court. We have digital copies of every Supreme Court opinion. And so we've looked at a couple of different things.

So in one paper, we look at the evolution of writing style on the Court, right? We just ask, how has the Court's writing style changed over time? Are there stylistic epochs, where style is relatively consistent within a group of people over a temporal period?

One of the interesting things we do in that paper is something that's referred to as sentiment analysis. So what you do in sentiment analysis is you try to extract out the sentiment, how the writer is feeling, from text. And you can do this with a fairly well-established technique. And we find, over time, that the Supreme Court is gradually getting grumpier, and grumpier, and grumpier.

LESLIE KENDRICK: Really?

RISA GOLUBOFF: They're not supposed to feel, those judges, right? There shouldn't be in that--

MICHAEL LIVERMORE: No emotion, right? Yeah.

RISA GOLUBOFF: In the theory of the objectivity of judging, there shouldn't be emotion in there.

MICHAEL LIVERMORE: Maybe it's because they're suppressing their feelings more than they used to, right? So that comes across in a negative way, right?

RISA GOLUBOFF: So it's coming out, right?

LESLIE KENDRICK: You can find it with your tools.

MICHAEL LIVERMORE: And then another question that we ask in that paper is whether clerks have an influence over the writing style on the Court, right? So clerks are a relatively modern institution, really. In the mid-20th century, you start to see the clerk as we understand it.

RISA GOLUBOFF: So recent law school grads who clerk for a judge, usually for a year, and help them write their opinions.

MICHAEL LIVERMORE: Right, do serious, substantive work. And so in the earlier phases, there were, of course, employees of the Court, and what might even be called clerks, but they would be their drivers, or carry their briefcases, or whatever. And gradually, the role took on a more substantive cast. And the mid-20th century is where there's a bit of a phase change.

And so we look to see whether the Court's style is different now than it was before, in a way that the time trends don't account for. And what we find is, actually, there is more year-to-year inconsistency. So each justice's writing style is more inconsistent, which is consistent with the story that you're turning over your clerks every year.

But we also find that the Court's writing style as a whole, not any individual justice's, is more consistent. So, again, in a way, that's consistent with the clerk story, where the clerks are all trying to achieve some--

LESLIE KENDRICK: Some voice.

MICHAEL LIVERMORE: Exactly-- Supreme Court voice. But what's lost is the individual justice's voice.
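To make that concrete, here is a minimal sketch of one way to quantify year-to-year stylistic consistency: represent each year's opinions as a word-frequency vector and compare adjacent years with cosine similarity. This is an illustration of the general idea, not the method used in Livermore's paper, and the opinions_by_year input is hypothetical.

```python
# A minimal sketch, not the paper's actual method: represent each year's
# opinions as a word-frequency vector and compare adjacent years with
# cosine similarity. opinions_by_year is a hypothetical {year: text} dict.
import re
from collections import Counter
from math import sqrt

def term_frequencies(text):
    """Count how often each word appears (a term frequency vector)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(n * b[w] for w, n in a.items() if w in b)
    norm_a = sqrt(sum(n * n for n in a.values()))
    norm_b = sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def year_to_year_consistency(opinions_by_year):
    """Similarity of each year's style vector to the following year's."""
    years = sorted(opinions_by_year)
    vecs = {y: term_frequencies(opinions_by_year[y]) for y in years}
    return {(y1, y2): cosine_similarity(vecs[y1], vecs[y2])
            for y1, y2 in zip(years, years[1:])}
```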

RISA GOLUBOFF: So does that vary across justice though, how much variation there is year to year? I mean, I clerked for Justice Breyer. And Leslie clerked for--

LESLIE KENDRICK: I clerked for--

TOGETHER: Justice Souter.

RISA GOLUBOFF: So I think we'd both be curious. Are there justices who really do have more of their own voice?

MICHAEL LIVERMORE: You know, there probably are. There probably is-- that phenomenon, I think a lot of people report that. And different justices, just, the scuttlebutt is some write their own opinions. Others, their clerks write their opinions for them.

We have a data problem to really be able to say that with enough-- basically, we need a lot of data to be able to say anything about any of this stuff because there's so much variation. And so we haven't been able to really get a bead on that at the justice level.

LESLIE KENDRICK: Can you tell us more about the technology here? So it sounds like, in all of these studies, there is some sort of tool that's being used to process and basically read scads and scads of legal text, and to identify patterns in them, and to help us understand, in some more bird's-eye view way, the patterns that you would see across just huge amounts of legal text. Is that the basic idea?

MICHAEL LIVERMORE: Yep. That's the basic idea.

LESLIE KENDRICK: How different is that from say, Westlaw, or Lexis, or tools that have existed for a long time that lawyers are familiar with?

MICHAEL LIVERMORE: Right. I mean, in a sense, it's fairly similar. The data's the same, right? So at some level, there was a big important phase change that occurred when all of this stuff that we're talking about went from being in an analog format, in books that you read with your eyes, to a digital format that could be processed. So that was--

RISA GOLUBOFF: And then searched, right?

LESLIE KENDRICK: And searched.

MICHAEL LIVERMORE: And then searched.

RISA GOLUBOFF: Right. So in the old days, you had to have indexes, and headnotes, and all kinds of analytical tools that allowed you to find the cases you wanted to find.

MICHAEL LIVERMORE: To even find them in the first place, right? And so that was a huge shift that occurred. I guess in the '70s and '80s is when that happened. And so that was step one. And then, basically, what's happened over the past several decades is computer scientists, and mathematicians, and data science people, and people that do natural language processing, and linguists, and so on, have come up with different ways of trying to think about what to do with this data, basically.

The sentiment analysis that I described is, literally, just there's a dictionary of positive words and negative words, you know? So the word nice is a positive word. And the word grumpy is a negative word. And you, then, just take a text. You reduce it-- to get a little technical, what's referred to as a term frequency vector. So what that is-- it's actually a very simple thing. It's just a list of every word that appears in the document and the number of times that it appears in the document.

And then from those lists of words, you can say, well, which ones are the positive words? Which ones are the negative words? And what percentage are positive words? And which percentage are negative words? There's nothing to that math. It's just counting.
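As a rough illustration of that counting, here is a minimal sketch. The word lists are tiny illustrative stand-ins, my own assumption, for the much larger published sentiment lexicons that researchers actually use.

```python
# A minimal sketch of the dictionary method described above. The word lists
# are tiny illustrative stand-ins for real published sentiment lexicons.
import re
from collections import Counter

POSITIVE = {"nice", "fair", "agree", "reasonable"}
NEGATIVE = {"grumpy", "unfair", "reject", "erroneous"}

def sentiment_score(text):
    """Net share of positive minus negative words in the term frequency vector."""
    tf = Counter(re.findall(r"[a-z]+", text.lower()))  # the term frequency vector
    total = sum(tf.values())
    pos = sum(n for w, n in tf.items() if w in POSITIVE)
    neg = sum(n for w, n in tf.items() if w in NEGATIVE)
    return (pos - neg) / total if total else 0.0

print(sentiment_score("We reject this unfair and erroneous claim."))  # negative
```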

But it illuminates what's going on in a text, though not as well as reading it would. If you read it, you would have a much better sense of what the tone of the thing was. But you can't read 100,000 documents, right? So it's a rougher measure. But you can apply it at such scale that you can learn things.

And so, basically, all of these techniques are, in one way or another, variants on this. It's counting. It's some statistics-- some of it more sophisticated than others, some of it quite computationally intensive and very sophisticated. But they're versions of taking lists of words, and their frequencies, and so on, and analyzing them in some way to tell us something about the document, that we might be able to learn from reading it. But we just can't do it at that scale.

RISA GOLUBOFF: So these new techniques, including artificial intelligence, AI, that I want to get to in a minute, they're based on the existence of big data combined with the technology that allows you to see patterns and understand what's going on in the big data in ways that we didn't used to be able to.

MICHAEL LIVERMORE: Right. That's the project.

RISA GOLUBOFF: OK. Great. So tell us, for lawyers, what are the challenges and opportunities of that project? What's changing in the world as a result of those transformations?

MICHAEL LIVERMORE: Right. So there's actually a huge world of these things. If we think of real, in-practice stuff, one is discovery, right? For folks listening, discovery is the phase of litigation where, if two litigants are battling over something, one side, essentially the plaintiff, can make requests of the defendant for various documents, and so on, and so forth.

And in really big litigation, one of the ways that defendants can comply is to just send lots, and lots, and lots of paper over to the plaintiff's side. And the plaintiff, then, has to review all this. And so--

RISA GOLUBOFF: So it's time intensive. It's personnel intensive. And it's expensive.

MICHAEL LIVERMORE: Exactly. And it used to be that, on this big litigation, you would just send 20 associates over to a doc review room, or something like that, to look through this stuff. So a couple of things have happened. One is documents have become digital, right? So now, it's no longer a warehouse holding all these documents.

Now, that has actually made discovery, in a sense, harder because there are more documents that are retained. The number of things that are retained is just massive because that includes all emails, and all the attachments to every email, and so on. So the amount of stuff that's potentially subject to discovery requests is just humongous.

So as a consequence of the digitization, though, you can also apply these computational tools. Just simple stuff, like keyword searches, is super common. But in the last few years, there's been a move toward what's referred to as predictive analytics in the context of discovery. So the way that works is you might make some initial requests, and so on, and go back and forth.

But what you can also do, on top of that, is, essentially, take a random sample, or some narrowed sample, pull out the documents that are responsive. And then use those-- what you've hand-coded as a training set that you can, then, use to train a machine learning algorithm that can go through the rest of the documents, OK? So now, instead of 20 people on your discovery team, you can get that down to 5. So it's a huge cost savings.
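As an illustration of that predictive-coding workflow, and not any particular vendor's product, here is a minimal sketch using standard scikit-learn tools: hand-coded documents train a classifier, which then scores the unreviewed collection. The documents and labels are hypothetical.

```python
# A minimal sketch of predictive coding in discovery: hand-coded documents
# become a training set for a classifier that scores the rest of the
# collection. Documents and labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviewed_docs = ["email about the disputed contract", "lunch plans for friday"]
labels = [1, 0]  # 1 = responsive, 0 = not responsive (coded by human reviewers)
unreviewed_docs = ["draft amendment to the contract", "office holiday party"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(reviewed_docs)

model = LogisticRegression()
model.fit(X_train, labels)

# Score the unreviewed documents; high-probability ones go to human review.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in zip(unreviewed_docs, scores):
    print(f"{score:.2f}  {doc}")
```

In a workflow like this, the high-scoring documents would go back to human reviewers, whose decisions become new training data, and the loop repeats.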

So that's one example, right? But, I mean, that's an important one though. Discovery is a lot of work. It's a lot of what especially junior lawyers do when they first get in, if they're in a litigation practice. And so this is a very important change in legal practice that has come about. And there's an AI machine learning element to it.

What I think is on the horizon is a project that, again, I've been working on-- lots of other people work on this in different ways-- is law search. So another thing that lawyers do is search for the law, right? That's part of the expertise.

That's part of what we train students to do at a law school, right? You come into law school, you don't necessarily know how to find relevant law. You leave law school, hopefully, you do. That's the idea.

RISA GOLUBOFF: That's what we hope.

MICHAEL LIVERMORE: And then-- and you get better at it as you proceed in practice.

RISA GOLUBOFF: So a case comes up, and you've got to figure out, what are the relevant precedents? What are the authorities?

MICHAEL LIVERMORE: Exactly. What are the statutory authorities, the regulatory-- I mean, a matter comes up, right? And you're going to advise a client or you're going to litigate it. And the client-- it could be anything.

It could be, I have a dispute with somebody. Or it could be somebody who says, I want to open a factory. What are the relevant laws that govern what I can and can't do? What permits do I need to get, and so on? And so how does a lawyer in practice do that? Well, some of it's his or her personal experience with similar matters. And then they can get on Lexis or Westlaw and do that whole thing.

That's getting more sophisticated. So like I said, we started with, as Risa noted, you go to the library. You look up in the index, or whatever. Now, on Westlaw, you can put in your Boolean terms-and-connectors search, and a little bit of natural language processing.

But on the horizon is more like artificial intelligence tools. So IBM's Watson, which was the famous-- beat whatever guy-- what is Ken, something or other, at Jeopardy. Does anyone remember his name?

LESLIE KENDRICK: Ken Jennings.

MICHAEL LIVERMORE: Yeah, Ken Jennings. I'm not a Jeopardy fan myself. But in any case, Watson was the AI application that beat the human world champion at Jeopardy, very difficult. And IBM is now deploying that same software in lots of different applications. And law is one of them.

There are a few companies and some law firms that are working with IBM's Watson to improve legal search. So basically, you go up and you ask the equivalent of a Jeopardy question, right? You can phrase it in terms of an actual question though, I think.

RISA GOLUBOFF: And you get an answer.

MICHAEL LIVERMORE: Right. And you get a declarative-- you just get an answer, like, this is the law. This is the relevant law. And there's another-- I've been working with some folks at Dartmouth and some folks at the engineering school here on an approach where you could, someday, if we get this sufficiently refined, input, essentially, a legal memo without any law in it. It just describes the matter in legal terms, essentially. And you input it into the machine, and the machine gives back to you all the relevant law.

LESLIE KENDRICK: Wow.

MICHAEL LIVERMORE: And you can imagine that being an iterative process. So you write down some initial thoughts. The machine gives you back some possibly relevant authorities. You add the ones that are relevant. You refine your thinking. It goes back to the machine, sends you more relevant authority.

Possibly then, if you're at a law firm, linking you up with internal information because the law firm has seen similar matters. And so, essentially, you end up cowriting a legal memorandum or a brief with the machine. And it's almost impossible to back out what was the contribution of the person and what was the contribution of the machine.
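One simple way to picture that kind of law search, as a sketch under assumptions rather than the actual system Livermore describes, is similarity retrieval: represent the memo and the candidate authorities as TF-IDF vectors and return the closest matches. The corpus and memo below are hypothetical.

```python
# A minimal sketch of law search as similarity retrieval, not the actual
# Dartmouth/UVA system: embed the memo and candidate authorities as TF-IDF
# vectors and return the nearest matches. Corpus and query are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

authorities = [
    "Clean Air Act permitting requirements for new stationary sources",
    "State zoning ordinance governing industrial land use",
    "Employment discrimination under Title VII",
]

def relevant_law(memo, top_k=2):
    """Return the top_k authorities most similar to the memo text."""
    vectorizer = TfidfVectorizer().fit(authorities + [memo])
    corpus_vecs = vectorizer.transform(authorities)
    memo_vec = vectorizer.transform([memo])
    sims = cosine_similarity(memo_vec, corpus_vecs)[0]
    return sorted(zip(sims, authorities), reverse=True)[:top_k]

# Iterative use: draft the memo, retrieve, refine the draft, retrieve again.
print(relevant_law("Client wants to open a factory; what permits apply?"))
```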

RISA GOLUBOFF: That's fascinating.

LESLIE KENDRICK: That sounds incredible.

RISA GOLUBOFF: So one thing that I think about when you describe that is I think there's a lot of fear-- and this has come up in a number of our episodes-- that new technologies will eliminate jobs, will eliminate the need for people to do things. But what you're describing isn't necessarily the elimination of lawyers doing work.

It's a new way of doing work. It might require, maybe, fewer lawyers in certain instances. But, I mean, especially that, right? That doesn't happen without the person participating in that iterative process.

MICHAEL LIVERMORE: Right. So there are two general views on what's going to happen as AI penetrates the economy. There's the massive-dislocation view, up to the end of the world. And then there's what more mainstream economists tend to think, which is that we have dealt with technological change many, many, many times. Yes, there are typically some dislocations. But over time, there's enhanced prosperity, and actually more labor force participation, right?

Vox, the news website, has a fun video series. And there was this one article that a reporter had dug up from the '30s, or something. And it was talking about the introduction of the concrete mixer and how this was going to just be horrendous.

And the way it described the concrete mixer, it was just hilarious. It was like, this machine that can mix concrete like it was dough. And it's going to replace hundreds of concrete-mixing people, which, sure, it did, actually. I have mixed concrete by hand, by the way. And if you can have a machine do it, have a machine do it.

RISA GOLUBOFF: That's better.

MICHAEL LIVERMORE: It's definitely better, right? And so, yeah, but there were fewer jobs. There's a whole argument that, now, this is different because, now, it's taking over human cognition. And that's the last bastion, or whatever. We'll see. I strongly suspect in our lifetime, it's mostly going to be positive in terms of employment.

As you mentioned, a lot of this stuff that we're talking about still requires a lot of human intervention, right? What machines are good at is very specific narrow task kind of work that you define the contours of, that a human being defines the contours of. And that is likely to be the case for the foreseeable future because if you think of the most impressive advances in artificial intelligence, beating a human being at Go, right? The domain is just super specified, right? You know what the rules are. You know what the flow of information is.

And so what I think we're learning is that human beings can define a task to have certain characteristics: a narrow domain, knowable rules, and lots of data, either data you can get your hands on or data you can generate yourself, because these game-playing systems generate it through self-play. So you don't need to have the data out there. The system can play itself. But, again, you have to know the rules.

And when you can do that, and you sic an AI on it, the AI is going to beat any human being, ultimately, right? But it still requires that first process of defining the scope, right? AI is good at operating in these constrained, narrow environments.

Human beings are different, because our neurological architecture evolved over hundreds of millions of years operating in the real world. Early mammals didn't have a constrained environment. They had to operate in the actual physical, super data-dense, real world. And over time, the animals that succeeded in that context, that had the neural architecture that worked better in that context, survived. And the other ones didn't.

So machines don't have anything like that deep evolutionary history in their neural architecture. And so it's going to be a long time, I suspect, until we start to get them. And they can beat us on computing power, energy consumption. You can pass a lot more energy through these things.

But the software part is, actually, I think, ultimately going to be a bottleneck. Not that it isn't ultimately going to get surmounted. But I don't think it's going to happen anytime soon, certainly not in our lives or the lives of our students even.

LESLIE KENDRICK: It sounds like, partly, humans still have to be there to know what questions to ask, to ask the right questions.

MICHAEL LIVERMORE: Right.

RISA GOLUBOFF: I will say, for the historical work I've done, my archives were not digitized when I used them. And many of them now are. And I think, wow, what a time saver.

LESLIE KENDRICK: Yeah, that would be better.

RISA GOLUBOFF: --to do that. I mean, I love going to the archives and looking at-- I think there is something to actually looking and holding the pieces of paper that people had held before you and had created. But boy, it would really save a lot of time to be able to gather data digitally.

MICHAEL LIVERMORE: And travel costs. Yeah.

RISA GOLUBOFF: Yeah. It's a whole new world. So I want to change tack a little bit and ask, in terms of the uses of technology in the law, it's not just in the practice of law by lawyers in their everyday lives. But it's also in how non-lawyers are interacting with law and how people are resolving disputes that may not even require lawyers anymore. And so I wonder if you could talk a little bit about those dispute resolution mechanisms and how technology is changing those.

MICHAEL LIVERMORE: Yeah, so this is a fascinating new area, online dispute resolution; I've learned just a little bit about it. Look at some of the first early applications, like eBay, or something like that, or Alibaba in China, which runs an online dispute resolution platform. Those platforms actually deal with enormous numbers of disputes, just massive. In fact, on a pure numbers basis, more than the US court system.

But they're small dollar. This is our dispute over a $20 item that we would never have taken into court. It would just, obviously, be too costly, and so on. And what these platforms have developed are different ways of essentially doing dispute resolution.

Like, I'm not happy with this thing. The seller is like, I sent it to you. It's as described. And you dispute back and forth. And on eBay, something like 90% or 99% of the disputes are just resolved by the machine. You can always appeal up to a person. But you can also just do it automatically.

In Alibaba, actually, the system is more human-oriented, even though it's still massive. You're still dealing with a huge number. But there are juries that they impanel. And they give little chits to the people that participate. And it's a fascinating, actually, more participatory mechanism that they've developed in China than the one that we use in the US, which is mostly automated.

But the company, essentially the core unit of eBay where they developed this software, spun off into a company called Modria, which was then acquired by a firm called Tyler Tech. And what Tyler does is the back-end software for a lot of courts. I think it's their case management software, and stuff.

And so the idea is that Tyler is going to integrate Modria's platform into their back end, basically. So when you go to small claims court or even divorce, family court, you'll have the option of doing a lot of the work online, so you don't have to go somewhere. You don't have to fill out forms. There's actually-- the cost savings are enormous, just the time savings.

And so the idea is that you can create a platform where folks can do these interactions, say, like a divorce and custody agreement. So a lot of times, a lot of the questions are not controversial. Both of the parents know what they want. They just need some boxes to check, know what their options are, boom, and it's done.

And it can be done in a convenient way. They don't necessarily-- these people might not want to be in the same room together, right? And this creates an option to do this remotely. And so that is something that's happening.

There are other apps out there. Matterhorn is one that's developed, in part, by one of our colleagues at the University of Michigan, JJ Prescott. That's essentially an app that courts use to process low-dollar fines, and so on. If you take these and extend them just a little bit, they can start to help deal with a problem like nonappearance.

So nonappearance is a huge issue in our court system. And people have lots of reasons why they don't go in and appear for a court date, right? They can't get the time off of work. Babysitters don't show up. And a lot of people don't like courthouses. They have negative associations with those places, right? So there are psychological reasons why they might not show up.

But then, as you guys well know, that can turn into a very vicious cycle. So you don't show up. You get a penalty associated with that. There's an additional fine, which you can't pay. So you don't show up to your subsequent date. And, eventually, this can spiral into a warrant and some jail time, and then you lose your job altogether. And what happens with the kids?

It's just not a good setup. And a lot of times, people don't show up to court, in part, because they feel like they can't pay the fine, or whatever. But there are options for them. If they had actually gone, they would find a form they can fill out saying they're unable to pay. And they can get a break, or whatever.

So the idea is to take all of this mess and put it onto an app. A person gets a text. And the text says something like, here's your-- you have your court appearance. And it can even say, if you can't pay, there are options available to you. And then you click through on your phone.

You put in a little information. You can, maybe, make an application for a waiver. You fill in the form. You're not employed. Or you show your pay stub. It shows what your wages are, or whatever. And all of this can be done on the front end, and eliminate, or at least severely curtail, this vicious cycle associated with nonappearance.

So for me, that's a very exciting application for this technology. Just think about the cost of going to a courthouse. It's insane, especially if you don't live near one. In Charlottesville, it's like, oh, you take your car. But say you're in Chicago, or something like that. And again--

RISA GOLUBOFF: It's only open during business hours, right? It's limited.

MICHAEL LIVERMORE: It's during work hours, right?

RISA GOLUBOFF: It's always during work hours. Exactly.

LESLIE KENDRICK: If you don't have a car in Charlottesville, it's really, really hard.

MICHAEL LIVERMORE: Right. If you don't have a car and you live in a rural area, you've got to get in. That's right. And so you've got to-- maybe a family member has to take time off of work to take you. I mean, it can be very difficult and just enormously costly to people, in terms of their well-being and their employment prospects.

And so just the ability to take that onto an online platform where people can do it off-hours, tuck in the kids, do the stuff you need to deal with online. And you can imagine that becoming lots of things, check-ins with probation officers, all kinds of ways that the state is touching people. And right now, it's done in a physical location. And it doesn't need to be.

RISA GOLUBOFF: I'm waiting for the DMV.

MICHAEL LIVERMORE: The DMV would be a good one.

LESLIE KENDRICK: That would be good. That would be good.

RISA GOLUBOFF: That would save everyone, everyone who drives many, many hours.

MICHAEL LIVERMORE: Right. I don't know if I want an online driving test.

RISA GOLUBOFF: Not that part. Not that part.

MICHAEL LIVERMORE: I still want that analog version of that.

RISA GOLUBOFF: Those lines at the DMV, they're just--

MICHAEL LIVERMORE: Yeah, that's right. That's right.

RISA GOLUBOFF: There's no need.

LESLIE KENDRICK: So in many ways, these technologies make-- they make justice more accessible. They make all sorts of parts of our legal system more accessible to people. Do you worry that there are downsides because all of these also create new intermediaries, both between regular people and the law, and between lawyers and the law, really, because, now, there are programmers out there. There are other people who control these different tools and who design these different tools. Are there potential risks or downsides with that?

MICHAEL LIVERMORE: Right. So I think there are. So one is that people might find some of these tools alienating, so as opposed to a real human being that they interact with. The second issue is that, as you reduce the cost of, essentially, administering the law and disputing through legal channels, you might actually facilitate the penetration of law into our lives, right? And I don't know if that's a good or a bad thing. And we could argue about that. But it's a potentially interesting consequence, right?

RISA GOLUBOFF: So more people using law more often. The state using law more often to regulate and intervene in people's lives.

MICHAEL LIVERMORE: So like, your neighbor, instead of arguing about the fence, it would be way too costly to go to court, or something like that. But if there's a little online dispute resolution platform that you could use, maybe you use that instead. And is that a good or a bad thing? It's an interesting question.

And then third, Leslie, on your point, is the allocation of power. Whose power is it? And are those people accountable? Are they the right people? Are we even aware? Is it transparent in any meaningful way, right? So yeah, so I would say that there are these complications, right?

In terms of the alienation, again, I think it's pros and cons. I think it's a little bit about your personality type. I'm more than happy to interact with people online rather than in person. But that might not describe everybody, right? Or, again, it depends on the stakes. It's one thing if eBay says, you lose. So you have to pay up your $30 for something that came in the mail.

It's another thing to say, six years in prison, and the computer says that, right? That's potentially problematic. And there are arguments about why you would want a human in the loop. People talk about a human in the loop. But I'm not sure that that even fully resolves the situation.

And of course, you might worry about-- there's another one too, which is the discrimination bias. That's a fourth one. It's that if you use this data, say, you use data on recidivism rates to make predictions that you, then, use for sentencing purposes.

Well, if the recidivism rate data is biased in some way, say, against racial minorities, then you're just going to perpetuate that through the machine learning algorithm. And that's obvious at some level. I think the real problem there is when it happens subtly, and you don't know it. Essentially, you don't know what's happening, right?

LESLIE KENDRICK: So many challenges coming up, it's really something. How did you get interested in this?

MICHAEL LIVERMORE: Yeah, no, this is a great question because, at some level, I have no idea, right? I don't have the answer to that.

RISA GOLUBOFF: We can't write an algorithm--

MICHAEL LIVERMORE: Right. Exactly. Right. Exactly.

RISA GOLUBOFF: --that tells us the answer to that?

MICHAEL LIVERMORE: Yeah. So a core part of this is that you have to find the right collaborators because I'm not a programmer. Like I say, I'm just a country lawyer, you know? I don't really do much other than the law. And so--

RISA GOLUBOFF: I'm not sure that's really a fair description, Mike Livermore.

LESLIE KENDRICK: Yeah, Mike. I don't know.

MICHAEL LIVERMORE: But certainly, none of this work I could have done on my own. And so one of my main collaborators is this guy, Dan Rockmore, who's at Dartmouth. And he's in the computer science and math department there. And the way we linked up was, basically, I was procrastinating at work. And I was just reading blogs, or whatever. I didn't feel like doing whatever I was supposed to be doing.

And I came across a blog post about a Proceedings of the National Academy of Sciences paper. And the paper was a computational text analysis of the corpus of Western literature, basically. It was a style analysis of the whole of Western literature that had been digitized and was in the public domain.

So I thought that paper was fascinating. And it immediately occurred to me, well, we could do this in law. We could just do this with the Supreme Court. So I emailed Dan. I didn't know him. He was just the corresponding author on the paper.

And he got right back to me. And he was super enthusiastic about it. And we've, literally, been working together for 10 years now, just based on that email. We have many, many projects. We have a book we're editing together. And I've worked with his graduate students, who have now gone on, and they have their own faculty positions.

And it's really just expanded out to lots of groups of folks. But that was the origin. And then we always would-- whenever a project would be winding down, we just think, well, what's next? What data can we get our hands on? And what are the tools? What are some interesting questions? And then we go from there.

So that's really what's kept me going. So it's certainly Dan, but also all kinds of other collaborators that I've been working with. It's just such an open frontier. You almost just need to just walk in any direction, and you start to find some interesting things.

RISA GOLUBOFF: My understanding is that philosophers and ethicists are also part of this conversation, for the reason that we have to think about the values that are going into the questions and into the programming. And that seems like a key piece of what we're doing too.

MICHAEL LIVERMORE: Yeah, absolutely. I mean, I think that that's a fascinating intersection too, between analytic philosophy, moral philosophy, and computer science, data analytics, and so forth. Then there are other things, I think, that are interesting at the intersection as well. Like, what is machine knowledge?

What is machine learning? And how does it relate to human learning, so basically epistemology, right? So that's super interesting. And these days, whenever I read any moral philosophy, especially the stuff that I'm interested in, which is more about welfarism-- I do cost-benefit analysis as part of my research-- I can't get it out of my mind: is this how we would program a computer to think about this?

Or I read some stuff in basic moral reasoning. Like, someone comes through our legal theory workshop, which we have a couple of times a month, pretty sophisticated philosophers. And often, it'll be about deep questions and how to engage in moral reasoning at all.

And I think, again, are we mooting, at some level, how to program this into a computer, by thinking very carefully about each little step? And to me, that's actually almost a useful way of thinking about a lot of questions in moral philosophy: like, huh, would this be a useful way--

RISA GOLUBOFF: It requires enormous clarity, right? You have to have enormous clarity to be able to make that next step.

MICHAEL LIVERMORE: Right. And would this be a useful-- would this be what we would want the machine-- is this how we want the machine to reason? Or would we want it to reason in this other way? Do we want it to reason more in deontological terms or in welfarist terms? And I think that's an interesting question.

LESLIE KENDRICK: It's a new categorical imperative. What would you want your machines to do?

MICHAEL LIVERMORE: That's right. That's right. That's right. Is it universalizable beyond human beings to other forms of intelligence? Yeah.

LESLIE KENDRICK: Right. Yeah.

RISA GOLUBOFF: Thanks so much for talking with us today, Mike.

MICHAEL LIVERMORE: Yeah. It was a real pleasure.

LESLIE KENDRICK: Thank you so much.

[MUSIC PLAYING]

RISA GOLUBOFF: So Leslie, I was struck here, as I was when we were talking about autonomous vehicles, that the nature of the language that we use to talk about these technological changes is really shifting. So in autonomous vehicles we moved from driverless cars to autonomous vehicles. And here, we moved from artificial intelligence to machine learning.

And in both driverless cars and artificial intelligence, there are people there. And then you take them away. This is intelligence that should be human, and now is artificial. Cars that should have drivers, but no longer do. And now, we talk just about the machines. And we kind of erase the people all together. So you end up with autonomous vehicles and machine learning.

LESLIE KENDRICK: That's really interesting. So our own language is erasing our role and moving these concepts further away from the human role that they used to require.

RISA GOLUBOFF: Yeah, and on the one hand, that probably makes sense, right? It's so anthropocentric to think we're at the center of how we should think about what machines do. And so now, we're learning that the sun doesn't revolve around Earth. And the machines don't revolve around people. But I wonder what it does to how we would regulate it and how we think about what machines actually are doing and what roles they're playing.

LESLIE KENDRICK: Yeah, I think you could think of that as being accurate. You could think of it as being an opening for realizing we really are creating things that are going to make decisions on their own. So we have to think very hard about what they're going to do. Or you could think of it as an abdication. You could think of it as it's the machines. It's not us, and signaling some lack of responsibility.

RISA GOLUBOFF: Yeah, which we know, given our conversation about philosophy and moral reasoning, right? We're not abdicating our responsibility. We still have enormous responsibility in setting up those programs, and how they're going to work, and then how we're going to regulate them, and what the law is going to do to them going forward.

LESLIE KENDRICK: I feel like that's where this whole field is going. It really does push you to ask these big questions about what is it that we actually think is the right thing to do in given situations, so that we can program machines to be making those choices without us having to control every single step, or to make every single decision.

RISA GOLUBOFF: And I think it's really easy to get caught in the weeds and the possibilities of what the technology can do and say, oh, look, it makes discovery easier. Oh, look, it makes resolving small legal disputes easier, without, then, thinking about, how does it do that? And what are the priors that are entered into there?

I mean, I think these are things that are happening all over the place, right? We have the capacity to alter genes. But should we, right? So it's the normative questions that, I think, the law asks and philosophy asks that we have to keep sight of, and not allow ourselves to get swept away by the headiness of what the technology can do without pausing to think, how do we want it to do that? And how should it interact with people and society? And what kind of questions are required to be asked before we can do it?

LESLIE KENDRICK: So I really loved that part of the conversation. And it made me think about what legal education will look like in the future, to respond to all of these technologies coming into law. What is it that law students are going to have to learn? And it seems like one thing that people will have to learn is to be adept at understanding technology, and being flexible, and unafraid in encountering new technologies, given the pace of change that they might see over the course of a career.

RISA GOLUBOFF: I think that's exactly right. I think that's our job. I think we know that. We talk about that all the time, right? What is the future, not only of law, but of legal education? And our guests come in, and they talk about specific areas of the law. And then it's on us to figure out what that means for our students and what it means for our curriculum.

LESLIE KENDRICK: And I think there are some-- there are worries that people could have in looking at how some of these processes and programs could change the way law practice looks. But I don't know that that's all a bad thing because it seems to me that if we're not going to abdicate to just machine learning, there's still always going to be a place for the critical thinking skills, what I think of as the deep skills that people come out of law school with: to understand the tools around them, to understand how they're supposed to work, to understand how they're supposed to further legal practice and justice.

RISA GOLUBOFF: I think that's right. And I think, in some ways, the conversation we had with Mike just heightens that and says, we have to be even more precise in the analytical reasoning we're teaching our students, and even more precise in how we think about problem solving because we're setting up systems that, then, will operate without us. But in the setting up of the systems, we need thinkers who have the kind of educational background that we provide.

[MUSIC PLAYING]

That's it for this episode of Common Law. If you like the show, don't forget to review us on Apple Podcasts, or wherever you get your audio fix.

LESLIE KENDRICK: You'll find more about big data and the law in our show notes, plus tons of other data from past episodes on our website, commonlawpodcast.com.

RISA GOLUBOFF: The human intelligence behind our show includes Tyler Ambrose, Robert Armengol, Tony Field, and Mary Wood. I'm Risa Goluboff.

LESLIE KENDRICK: And I'm Leslie Kendrick. We'll be back in a couple of weeks with an episode that builds on today's conversation about AI to explore matters of national security, surveillance, and international law. Hope you'll join us then.