S6 E6 Throwback: A Future With Autonomous Vehicles
How To Listen
Show Notes: A Future With Autonomous Vehicles (Throwback)
Kenneth Abraham
Kenneth Abraham is one of the nation’s leading scholars and teachers in the fields of torts and insurance law. He is a fellow of the American Academy of Arts and Sciences and a life member of the American Law Institute. For 20 years he served on the ALI Council — the board of lawyers, judges, and academics that governs the institute. He is also an adviser to the ALI’s “Restatement (Third) of Torts” and was the senior adviser to the “Restatement of the Law of Liability Insurance.” He has served on a number of other boards and commissions concerned with tort law and insurance reform.
In 2024, Abraham won the William L. Prosser Award from the Association of American Law Schools — the highest award AALS gives in the field of torts — for “outstanding contributions in scholarship, teaching, and service.” Abraham was a recipient of the All-University of Virginia Outstanding Teacher Award; the Distinguished Faculty Achievement Certificate from the State Council of Higher Education for Virginia for “outstanding achievement in teaching, research and public service”; and the American Bar Association’s Robert B. McKay Law Professor Award, given for “outstanding contributions to tort and insurance law.” He was the first law professor to be elected an honorary fellow of the American College of Coverage Counsel. He has also been a visiting professor at Harvard Law School.
Abraham is the author of six books and more than 90 law review articles. His first book, “Distributing Risk: Insurance, Legal Theory, and Public Policy” (1986), brought modern legal theory to the study of insurance law. His torts treatise, “The Forms and Functions of Tort Law” (6th ed. 2022), has become a basic text for first-year law students across the country. And his casebook, “Insurance Law and Regulation” (7th ed. 2020), has been used as the principal text in courses on insurance law in more than 100 American law schools.
Abraham has been a consulting counsel and an expert witness in a variety of major insurance coverage cases, involving commercial general liability, directors and officers liability, environmental cleanup liability, toxic tort and products liability, and property insurance claims. He has also served as an arbitrator for the Dalkon Shield Claimants Trust, resolving over 100 claims, in both the United States and Europe, by women seeking damages for injuries caused by the Dalkon Shield intrauterine device. His 2018 article, “Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era,” co-authored with Robert Rabin, lays out their proposal for handling liability issues as cars driven by people share the road with autonomous vehicles.
Michael Raschid
Michael Raschid ’86 is the co-founder and a member of Wave Law. Raschid has worked on many acquisitions, investment transactions, joint venture and collaboration transactions, and technology licensing and commercialization matters for companies ranging in size from startups to the largest multinational corporations in the world. He previously served as chief legal officer and vice president of operations of Perrone Robotics in Crozet, Virginia. A former student of Abraham’s, Raschid served as an attorney for IBM before becoming a partner at the Silicon Valley firm Wilson Sonsini Goodrich & Rosati. He later joined Arnold & Porter as a partner, managing intellectual property, information technology and technology transactions. At Perrone Robotics since 2016, Raschid helped develop several autonomous vehicle projects, one of which is the new driverless shuttle known as TONY (TO Navigate You).
Listening to the Show
Transcript
RISA GOLUBOFF: Welcome to Common Law, I’m Risa Goluboff, dean of the University of Virginia School of Law. We are taking a break for a couple of weeks, and now that Tesla has launched its full self-driving software out of beta, we thought it would be fun to look back at a season one episode. In that episode, we explored the future of autonomous vehicles with UVA Law professor Ken Abraham, an insurance law expert, and his former student, Mike Raschid, who was then working at Perrone Robotics in nearby Crozet, Virginia. Five years ago when we were putting together this episode, Mike was nice enough to let us try out an autonomous vehicle that his company was working on, which was quite an experience for both me and my colleague Leslie Kendrick, who was then a host on the show. At the time, Leslie was vice dean, and now, she is the incoming dean of the Law School. So it's going to be fun to revisit the way we were five years ago, see the predictions the experts made, and think about what has changed and what remains the same. Without further ado, enjoy “The Future of Autonomous Vehicles.”
RISA GOLUBOFF: Hello, and welcome to Common Law, a podcast from the University of Virginia School of Law. I'm Risa Goluboff, dean of the law school.
LESLIE KENDRICK: And I'm Leslie Kendrick, the vice dean.
RISA GOLUBOFF: A few weeks back, when the weather was still cold and gray here in central Virginia, Leslie and I hopped in my car and took a drive out to the little town of Crozet, just west of Charlottesville.
So we just arrived in Crozet. And we were told to drive straight past the no trespassing sign. So there it is.
LESLIE KENDRICK: There we go [LAUGHS]
Our destination for the day's field trip was the headquarters of a company called Perrone Robotics.
I would not have even guessed this was back here.
RISA GOLUBOFF: There's a bunch of warehouses, second no trespassing sign.
We were there to visit Mike Raschid, the chief legal officer and vice president of operations for Perrone and an alum of the law school.
All right. We're here.
He'd invited us there to have a look at some prototypes the company has been developing for the autonomous vehicles of the future.
MICHAEL RASCHID: So let me show you around our workshop, and--
LESLIE KENDRICK: For the past 15 years, Paul Perrone, who founded the company, has been developing software for a series of progressively more sophisticated autonomous vehicles. The first one Mike showed us was a big metallic capsule-looking thing from the early 2000s.
MICHAEL RASCHID: So let me take you back here.
RISA GOLUBOFF: It looks historic, let me just say.
[LAUGHTER]
I mean, I know it's not that long ago. But it looks--
MICHAEL RASCHID: It's certainly unusual.
RISA GOLUBOFF: But it looks a little like something from, like, WALL-E, you know? Like, it looks-- don't take this the wrong way, but it looks a little MacGyver, right? It looks like there's a robot in there. But it also looks, like, put together. You can see the rivets.
MICHAEL RASCHID: Very much so, very much so.
RISA GOLUBOFF: We also got to see the cars that came next. They looked a lot more like, well, cars. Though they did have a whole lot of high-tech sensors and instruments sprouting from their sides and their roofs.
MICHAEL RASCHID: Those are presumably radar units from way back when. And you could see they're very--
RISA GOLUBOFF: And then came the main event of the day. Mike invited us to take a spin in one of the newest additions to the autonomous fleet.
MICHAEL RASCHID: So, yeah. Let's go out. And I'll ask Ralph to give you a ride in the--
RISA GOLUBOFF: That sounds great.
MICHAEL RASCHID: --level 4 autonomous vehicle that we have.
RISA GOLUBOFF: We're ready!
MICHAEL RASCHID: Excellent.
LESLIE KENDRICK: The car was a Range Rover with an electric motor, its trunk packed absolutely to the gills with wires and flashing lights and digital devices running Perrone's latest software. It's what's known in the business as a level 4 vehicle--
RISA GOLUBOFF: --which means, as you'll learn later in the show, that it can function in normal traffic, completely autonomously. But at any moment, a human driver has the ability to take over.
LESLIE KENDRICK: That human's name, in our case, was Ralph.
MICHAEL RASCHID: So Ralph, meet the dean of the law school, Risa--
RISA GOLUBOFF: Hi, Risa.
MICHAEL RASCHID: --Goluboff.
RALPH: Nice to meet you.
RISA GOLUBOFF: Nice to meet you.
MICHAEL RASCHID: And the vice dean too, Leslie Kendrick.
RALPH: Nice to meet you, Leslie.
LESLIE KENDRICK: So nice to meet you.
RISA GOLUBOFF: Thanks for having us--
RALPH: Oh, sure.
RISA GOLUBOFF: --and taking time to drive us, or not drive us, as the case may be [LAUGHS].
First it was my turn. We buckled in. Ralph typed some things into the laptop next to him that was controlling the car.
RALPH: It's on autonomous mode right now.
RISA GOLUBOFF: Yep. Hands aren't touching. Feet aren't touching.
RALPH: I'll press a button again. And we're just going to start going.
RISA GOLUBOFF: So off we went. We went around this test track at a steady speed of about 15 miles an hour. And sure enough, the car stopped by itself at stop signs. It turned by itself around the curve.
So it puts one in mind of like a poltergeist or a ghost, that there must be some invisible being turning the steering wheel. Maybe you don't--
LESLIE KENDRICK: Risa and I took turns riding in the autonomous vehicle. And I was up next. Seems that Risa and I had similar reactions to the uncanniness of the whole experience.
RALPH: Just like a person does.
LESLIE KENDRICK: It looks like a ghost is driving the car.
RALPH: Yep. I'm just going to--
LESLIE KENDRICK: At one point, one of Ralph's colleagues showed us what happens when a person walks out in front of a level 4 car.
Oh, here's a pedestrian. Oh my goodness. Oh. And it stops. Run.
We went through a little tunnel, switched over into human mode to pass another car, and then we were back at the garage.
That is totally incredible.
RALPH: I have the best job in the company. That's what I say.
LESLIE KENDRICK: Right. I mean, you see everything before it's real, right?
RALPH: Yeah. I get to experience it before everybody else does.
[MUSIC PLAYING]
RISA GOLUBOFF: So as you may know from listening to past episodes of Common Law, this season is all about the future of law. And as you can probably imagine, the future of driving, or autonomous driving that we got a glimpse of that day, could have a profound impact on our society in so many ways.
LESLIE KENDRICK: On today's show, we're going to zero in on one area that will most certainly have to adapt to the new world of autonomous vehicles. And that's torts and liability law. In a world of autonomous vehicles, who is responsible in an accident?
Later in the show, we'll be speaking with torts expert and UVA Law professor, Kenneth Abraham, about a new proposal he has for revamping the law of the road, as it were. But first, we're going to hear a little more from Mike Raschid, chief legal counsel at Perrone Robotics. We brought him into the studio to tell us more about his work at the company.
[MUSIC PLAYING]
Tell us a little bit about the software platform that Perrone Robotics has.
MICHAEL RASCHID: Oh, sure, yeah. So essentially, in plain English, we can characterize it as sort of the brains of an autonomous car. And if you dig down a little bit deeper, we tend to think of it as a hardware-agnostic software platform that will sense, plan, and act-- let your car sense what's out there, plan your movements, and then actually act and send those signals to the actuators.
And the reason we say hardware-agnostic is that in the industry, it's interesting-- the software is really the driver of these autonomous cars. The hardware is available from any number of manufacturers that make the radars, the LIDARs, the GPS sensors that are on top of the roofs that give the car, the vehicle, information about where it is, how fast it's going, how it's maneuvering. Those are all pretty much third-party hardware products.
The fusion of all the various inputs that they provide-- so that's actually called sensor fusion-- is something our software does. And then it'll sort of say, OK, here's where the car is. I need to take a left. But I'm going at 18.6 miles an hour. I better slow down.
So the software then acts on that. And then, of course, if it sees another vehicle, it'll stop and so forth. And so that's basically what the platform is. It's sort of the brains of an autonomous car.
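To make that sense-plan-act loop concrete, here is a minimal Python sketch. It is purely illustrative: the sensor and actuator interfaces are invented for the example, and this is not Perrone's actual platform code.

```python
# A minimal sketch of a sense-plan-act loop. Hypothetical interfaces,
# invented for illustration -- not Perrone Robotics' platform.

from dataclasses import dataclass

@dataclass
class VehicleState:
    position: tuple      # (x, y) in meters, from fused sensor data
    speed: float         # meters per second

def sense(gps_fix: dict, radar: dict) -> VehicleState:
    """Sensor fusion: combine third-party sensor inputs into one state estimate.
    Real platforms fuse LIDAR, radar, and GPS probabilistically; here we just
    take position from GPS and speed from radar."""
    return VehicleState(position=(gps_fix["x"], gps_fix["y"]), speed=radar["speed"])

def plan(state: VehicleState, speed_limit: float, obstacle_ahead: bool) -> dict:
    """Decide the next maneuver from the fused state."""
    if obstacle_ahead:                     # e.g., a pedestrian steps out
        return {"throttle": 0.0, "brake": 1.0}
    if state.speed > speed_limit:          # going 18.6 mph into a turn? slow down
        return {"throttle": 0.0, "brake": 0.3}
    return {"throttle": 0.2, "brake": 0.0}

def act(command: dict) -> None:
    """Send the planned command to the (hypothetical) drive-by-wire actuators."""
    print("actuate:", command)

# One tick of the loop the platform would run many times per second:
state = sense(gps_fix={"x": 0.0, "y": 0.0}, radar={"speed": 8.3})
act(plan(state, speed_limit=6.7, obstacle_ahead=False))  # 6.7 m/s is about 15 mph
```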
LESLIE KENDRICK: So how autonomous is it at this point? Can you walk us through what those levels are and what they mean?
MICHAEL RASCHID: Sure. So actually, Paul, our founder, CEO, was actually on the SAE committee that sort of came up with these levels of autonomy.
LESLIE KENDRICK: The SAE?
MICHAEL RASCHID: I'll explain all of those, yeah. SAE is the Society of Automotive Engineers-- it actually had its origins not just in automotive but also aerospace. SAE International is actually one of our collaboration partners. And they've defined these levels-- as has NHTSA, the National Highway Traffic Safety Administration, which has adopted the same levels of autonomy.
So right now, most of the vehicles are running at about level 2 autonomy, meaning they have Advanced Driver Assistance Systems, called ADAS. So things like-- some of the vehicles, like Cadillacs, have these-- the cruise control, that's called dynamic cruise control. So it, of course, senses how far ahead the car in front of you is and how far back you need to be, and the like. Those are sort of level 2ish.
The Tesla Autopilot is more in the level 3 area, where you can actually let go of the steering wheel on a highway and feel like the vehicle's doing what it needs to do. But I think-- and, again, I don't own a Tesla. I'm going to assume that the manufacturer has told all the drivers you've got to pay attention.
LESLIE KENDRICK: Yeah.
MICHAEL RASCHID: So--
LESLIE KENDRICK: It's in the manual.
MICHAEL RASCHID: --that's level 3ish. And there are many manufacturers coming out with-- I think Audi came out with a level 3 car this year. But really, there's a big gap between 3 and 4. And we're focused on level 4, which is the car can really run autonomously. But it still has a steering wheel. And it still has a driver at the helm.
It could, if you wanted it to, run without a driver, which is what level 4 really is. And level 5, the best way to explain it in plain English is a vehicle that doesn't literally have a steering wheel or even brakes that a human being could push. And you could make a level 5 car with those things. But think of level 5 as totally, fully autonomous without a human driver present.
RISA GOLUBOFF: So not even a--
MICHAEL RASCHID: [INAUDIBLE]
RISA GOLUBOFF: --not even a human driver present. But no station, no driver's seat, right? No place for there to be people present.
MICHAEL RASCHID: Sure. They might be in the passenger seat.
RISA GOLUBOFF: But it would just be--
MICHAEL RASCHID: Yeah, exactly.
RISA GOLUBOFF: It could just be a passenger's seat.
MICHAEL RASCHID: Instead of a driver, it would be just a passenger, perhaps facing inward, and not away.
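For reference, the levels Mike walks through can be summarized in a few lines of code. This is a paraphrase of his plain-English descriptions, not the authoritative SAE J3016 definitions.

```python
# The SAE driving-automation levels as described in this conversation
# (paraphrased; consult SAE J3016 for the authoritative definitions).

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0   # human driver does everything
    ASSISTANCE = 1      # a single assist feature (steering OR speed)
    PARTIAL = 2         # ADAS: assisted steering and speed; driver supervises
    CONDITIONAL = 3     # hands off in some conditions, but keep paying attention
    HIGH = 4            # can run with no driver, though a wheel is still there
    FULL = 5            # fully autonomous; no steering wheel or pedals needed

def driver_must_be_ready(level: SAELevel) -> bool:
    """Below level 4, a human must be ready to take over at any moment."""
    return level < SAELevel.HIGH

assert driver_must_be_ready(SAELevel.CONDITIONAL)   # level 3: stay alert
assert not driver_must_be_ready(SAELevel.FULL)      # level 5: no human driver at all
```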
LESLIE KENDRICK: What do you think is the aim of the industry? Is level 5 the goal or the ultimate goal?
MICHAEL RASCHID: Level 5 is absolutely the ultimate goal of most of the people in the industry. But everyone acknowledges that it is really far off. Even when you talk about level 4 autonomy, depending on which car manufacturer-- and we just call them OEMs for short, original equipment manufacturers-- depending on which OEM you speak to, they will say, OK, we've got level 4 cars coming out in a mass production phase in about three to five years.
Level 5-- I think people would be very unrealistic if they predicted that level 5 cars would be out there anytime before about 10 to 15 years from now, because that requires a lot of engineering and a lot of infrastructure improvements to really have that much confidence in a vehicle that you would just let it drive around by itself in cities and on highways without any human intervention whatsoever.
RISA GOLUBOFF: What are the legal differences between the different levels? How do the liability questions or the risk questions or the regulatory questions change as you go up the levels?
MICHAEL RASCHID: So really, the biggest change that will occur will be at levels 4 and 5, because even at level 3, you've got a driver in the driver's seat. I think while the human drivers are on board, the same legal issues arise in terms of tort liability, in terms of who was negligent and not negligent.
The critical questions will come when you really have it running without any human intervention-- let's say a level 4 car without a driver sitting in the driver's seat, where the person is in the back dealing with whatever. And then the question arises, who's going to be liable? And frankly, the law hasn't caught up with all that stuff.
And so the critical issues for us will be, will artificial intelligence, for instance, ever be recognized as a legal person? And then when you talk about manufacturers in the automotive context, it's a pretty fluid concept now because, who is really the manufacturer of a vehicle that has third-party components that are cobbled together to come up with the hardware on an autonomous car?
And then I could see a situation where there are three car manufacturers that decided that the software development costs ought to be shared and the learning ought to be shared. And they've come up with the software and the artificial intelligence that basically is the brains of the car. Who is really the manufacturer in that context if, for instance, those three-- let's say there are three German companies that came up with the software. Let's say they licensed it to the big Japanese OEMs. Who then is the manufacturer? How do you follow the trail of who did what?
So I think it likely will come back to the basics of-- again, I go back to law and economics-- who is the cheapest cost avoider? Do you impose liability on somebody who actually programmed the software? And is that the person who should be liable? Or is it the manufacturer who vetted the artificial intelligence from several sources and decided to incorporate that in his or her-- I mean, or its-- vehicle?
And so there are all those issues that have yet to be determined and sorted out. Lots of interesting issues have been raised. But I think few have actually been answered thus far.
[MUSIC PLAYING]
LESLIE KENDRICK: As Mike says, lots of interesting issues and few answers so far. But it turns out there are at least some possible answers to these questions that are beginning to trickle out of the legal academy.
In a recent Virginia Law Review article, UVA Law professor, Kenneth Abraham, and his co-author, Stanford's Robert Rabin, outline a proposal for a new legal regime that would deal with accidents involving autonomous vehicles.
RISA GOLUBOFF: Ken is one of the nation's leading experts on torts and insurance law. He literally wrote the book on these subjects for both students and scholars alike. We are so pleased to have him with us today in the studio to talk about the brave new future coming down the pike. Ken, thanks so much for being here.
KENNETH ABRAHAM: Well, it's great to be here. Thank you.
LESLIE KENDRICK: So this-- I mean, obviously, there are huge ramifications for all sorts of different parts of society. There are economic ramifications of this. And there are all sorts of different kinds. But some of the ramifications are legal ramifications. And that's what you and Bob Rabin are really focused on in your article.
KENNETH ABRAHAM: Yes. So if nothing is done, then the existing tort liability and products liability regimes would apply to accidents involving autonomous vehicles and other autonomous vehicles or autonomous vehicles and pedestrians or autonomous vehicles and conventional vehicles.
And so we set out to think about whether some alternative liability approach would make more sense once there was a critical mass of autonomous vehicles on the road.
LESLIE KENDRICK: And you call that manufacturer enterprise responsibility. But to get us to that, I wondered if you could lay out a little bit what is it that the law looks like right now? So now, we have primarily conventional vehicles on the road. There is some level of software that some cars have that make them level 2 cars.
And if you have an accident, you're going to have potential tort claims, that is claims of fault between drivers and potentially products liability claims saying that one problem in the accident related to a product that was involved, the vehicle that was involved. Could you tell us a little bit about those regimes that currently operate?
KENNETH ABRAHAM: That's right. You described it correctly. The thing to understand as a background is that 80% of all automobile accidents are caused by driver error. Some of that error is negligent fault. And some of it isn't. But 80% of all accidents are the result of something the driver did or didn't do.
And for accidents between vehicles in which driver error is the only cause, we have what's called a negligence liability regime-- fault-based liability. The driver that's at fault has to pay the driver who isn't at fault if the driver who isn't at fault is injured.
Now there's in most states what's called a comparative negligence defense. So if both the drivers are at fault, there's a reduction in the amount of damages that the injured driver recovers. But that's the current regime.
It's also true that under current law, if the reason the accident occurs is because of some defect in the vehicle, that the driver or pedestrian who's injured, including the one in the car that had the defect, can sue the manufacturer. So you can sue Ford or Toyota or General Motors. And if you can prove that there was something defective that caused the injury, then you can recover from the manufacturer.
What's going to happen over time is that more and more accidents will be caused by the vehicle rather than by the driver because there'll be fewer and fewer drivers. The car will be driving itself. So hopefully-- this is certainly what gets predicted-- the accident rate is going to go way down.
80% of the accidents right now are caused by drivers. The fewer drivers you have, the fewer accidents you'll have.
RISA GOLUBOFF: And that sounds like good news overall in terms of--
KENNETH ABRAHAM: From the safety standpoint, that's good news. So what we're talking about is, what are we going to do about that reduced number of accidents that are caused by the vehicle? Are we going to apply the current products liability regime that requires that there be something defective about the vehicle before the manufacturer of the vehicle has to pay? Or are we going to have some other approach that sets aside the complicated inquiry into whether there was a defect-- we can talk about that if you want-- and uses some other criterion for determining whether or not the manufacturer is liable?
LESLIE KENDRICK: Let's do talk just a little bit about the current products liability regime. So you said that this is what operates when the product, the vehicle might be implicated in causing the accident. Let's talk through a concrete example. Let's say there's an allegation that the car has faulty brakes. And that's part of what led to the accident. So what does that look like now?
KENNETH ABRAHAM: So right now, if the brakes were manufactured improperly-- they just came off the assembly line and they didn't conform to the manufacturer's own specifications-- that's referred to as a manufacturing defect. And somebody that was injured in an accident because the brakes failed in that way can bring a lawsuit against the manufacturer. That's a tiny percentage of the products liability suits that are brought against manufacturers, because quality control on the assembly line has improved so much over the last 50 years.
Most of the auto products liability suits that are brought involve what are described as defective design. The rear bumper is alleged not to be strong enough to prevent impact-related injuries. Or the brakes are designed in such a way that when moisture gets into them, then they don't hold. Or the steering mechanism doesn't allow the driver to do something that is necessary to do in an emergency.
So the manufacturer isn't liable for all injuries that involve the vehicle, only liable for injuries that involve something defective. And the defective design-- design defect litigation gets pretty complicated because, first of all, you have to determine what went wrong, that is, what caused the accident. And then you have to determine whether that cause is defective or just an inevitable feature of a car.
There are 25,000 different parts in a car.
RISA GOLUBOFF: So they wear out. And you can't sue the manufacturer.
KENNETH ABRAHAM: They wear out.
RISA GOLUBOFF: Or they break.
KENNETH ABRAHAM: Well, they're imperfect too. Cars going 60 miles an hour that collide with something aren't going to be able to prevent all injuries, even with side and front airbags and collapsible chassis and all the rest of it. So there has to be an effort made to separate the injuries that result from something that's inherently necessary in a vehicle from injuries that result from something substandard in the design that could have been avoided.
LESLIE KENDRICK: And this often boils down to a risk-utility test, where you're looking at the benefits and the costs of a particular design. A convertible-- that design is going to have certain benefits. But it's going to have certain risks in terms of crashworthiness. And you might want a variety of different types of vehicles on the road so that people can choose a different risk-benefit profile, rather than everything looking exactly the same and all being at a very high price point.
KENNETH ABRAHAM: That's right. Some designs are obviously defective. But lots of litigation involves designs that entail trade-offs. And there are trade-offs not only between safety and utility in a general sense but safety and price, safety and aesthetics, safety and maneuverability.
And as you say, Leslie, people want different combinations of those factors. So there's a bare minimum of safety that the design defect test requires. But above and beyond that, it becomes complicated. And it's a matter of expertise. Ordinary juries don't have the capacity to redesign a vehicle so that it optimally balances risk and utility. You need experts. The experts disagree. The calculations are very complicated. And that's what design defect litigation involves.
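The risk-utility calculus described here can be reduced, very crudely, to comparing the cost of a safer alternative design against the expected accident costs it would avoid. A toy sketch, with invented numbers:

```python
# A toy version of the risk-utility test: a design choice is suspect if a
# feasible alternative would reduce expected accident costs by more than it
# costs to adopt. All numbers are invented for illustration.

def alternative_design_justified(extra_cost_per_car: float,
                                 injury_probability: float,
                                 cost_per_injury: float) -> bool:
    """True if expected accident savings exceed the alternative's added cost."""
    expected_savings = injury_probability * cost_per_injury
    return expected_savings > extra_cost_per_car

# A sturdier rear bumper: $120 more per car, avoiding a 1-in-10,000 chance of
# a $500,000 injury. Expected savings are $50 per car, so on these made-up
# numbers the cheaper design would survive the risk-utility test.
print(alternative_design_justified(120.0, 1e-4, 500_000.0))   # False
```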
RISA GOLUBOFF: So does that mean that it's pretty hard to win one of those lawsuits as a plaintiff or not?
KENNETH ABRAHAM: Well, you have to invest a lot in proving your case. It's not automatically pretty hard to win. As I said, some designs are obviously defective. But others aren't. And it's a complicated technological enterprise to prove a design defect case.
And as more and more accidents involve an interaction between the hardware-- brakes, steering, bumpers-- and software, then the effort to figure out whether there was a defect, indeed, the effort to figure out what exactly happened and why is going to become more and more complicated.
LESLIE KENDRICK: So given that we're going to have a greater share of the accidents going back to the vehicle in some way, even if in absolute terms we have fewer accidents, we're going to have to think about these products liability rules and how they interface with autonomous vehicles. And you and Bob Rabin suggest that we need to think about that-- that's not necessarily going to map on so easily. What are some of the issues of translating the products liability regime onto autonomous vehicles?
KENNETH ABRAHAM: Well, deciding whether there's a defect in the vehicle is going to become more and more difficult as a matter of principle. Another way to describe the defect problem is to say that it requires an answer to the question, how safe is safe enough?
And when we have autonomous vehicles and software, the question is going to be, well, how safe is safe enough software? How safe does the software have to be, or how safe does the software have to make the vehicle, before the software is adequately designed? Well, so now, all of a sudden, products liability litigation is going to involve esoteric questions about the state of the art of software. How well can you write the code to deal with every single problem?
There is also something called machine learning in which the code is not static but learns from highway situations something about accidents. And so, in theory, we can map the notion of a defect onto software problems. In practice, it's going to become more and more complicated and, in principle, more and more difficult to arrive at a normative conclusion about, how safe is safe enough?
And our point is not that products liability law can't in theory handle this, but to raise the question, whether the game is worth the candle? Maybe we just shouldn't-- there's going to come a point at which it just isn't worth trying to figure out in fine detail exactly what happened and whether that was safe enough.
RISA GOLUBOFF: So what should we do?
KENNETH ABRAHAM: Well, we ought to just say, OK, there's going to be a smaller number of accidents. Trying to figure out whether they resulted from something defective is going to be complicated and expensive and even in principle difficult. Let's just say that the manufacturer of a fully autonomous vehicle is responsible for all injuries that arise out of the operation of the vehicle. Don't worry about defect.
Take defect out of the picture and simply have a test like we have, say, in workers' compensation, where the employer is liable for all injuries that arise out of working. And we don't worry about whether there's something defective in a drill press or defective in the factory floor. We just say, if you're injured on the job, you get some compensation. Well, if you're injured by or in an accident involving an autonomous vehicle, you get compensated by the manufacturer.
LESLIE KENDRICK: So that's a form of what in tort law would be called strict liability.
KENNETH ABRAHAM: Yes.
LESLIE KENDRICK: So your term for it is manufacturer enterprise responsibility. And enterprise liability is a name for a form of strict liability where an enterprise is liable for, responsible for the accidents that arise out of the enterprise. And there's kind of an idea that if this is your enterprise, you're internalizing the costs that it imposes on other people.
But you mentioned workers' comp being another example. That's a strict liability test. But we're not asking about fault. We're not asking about defect. We're just asking, is this something that arose in the course of work? And if so, the employer is going to internalize the costs of it.
KENNETH ABRAHAM: That's right. And workers' compensation has been pretty successful over a period of a century. There are lots of criticisms of workers' compensation. Employers say that it doesn't control health care costs. And employees say the benefit levels aren't high enough.
But importantly, almost nobody wants to go back to the tort system. And that's a lesson for us. The employees would much rather have swift, certain compensation without having to prove fault. And employers want labor peace and satisfaction in the workplace. And so that compromise of over a century has had staying power.
In the case of autonomous vehicles, our idea is let's have the end manufacturer-- not clear whether it would be Ford or Toyota or Google, but whoever manufactures the vehicle-- be the focal point for liability. That's the enterprise that would be liable. Just as now, there will be lots of makers of component parts for autonomous vehicles. And it might be that some completely independent company like Google designs the software. But to keep things simple, have the manufacturer be liable to the people who are injured.
And if the incentives really ought to be on the software designer, then they're quite capable of contracting with each other the way manufacturers and makers of component parts do now to align incentives. In the meantime, there'll be one clear focal point, the name of the manufacturer that's on the vehicle that injured people can make claims from.
And of course, the price of the vehicle will incorporate the risk that there'll be a need for compensation. And that'll give that focal-point enterprise an incentive to make careful decisions about how safe the vehicle ought to be.
RISA GOLUBOFF: So you mentioned in the article, and you mentioned here, that we're in a long transition phase. And our whole season is about the future of law. And we've talked to a lot of people who think the future is now. And I get the sense from you, you don't think the future is quite now, and we're going to be in a kind of in-between state for a long time.
Does the manufacturer-- at what point should that kick in? And does it make a difference whether we're talking about conventional vehicle accidents with autonomous vehicles or whether we're talking about autonomous on autonomous? Or how does that look to you in the future?
KENNETH ABRAHAM: So our idea was to write something that had some detail in it, not because we expect that these details are necessarily going to get enacted into law right away or even eventually enacted into law the way we wrote them, but to have a proposal that had enough meat on the bones so that not just legal scholars but policymakers could look at it and not say as sometimes gets said, well, you know, it's all well and good for you to have a pie in the sky proposal. But what about all the practical features of it? How are you going to do this? How are you going to do that?
So we thought we'd actually describe how we're going to do those things. Even if they're not adopted, they will identify pivot points that people will have to make decisions about. So our proposal is that when 25% of all the vehicles on the road are level 4 or level 5, this approach should go into effect. That'll give everybody time to see what kind of accidents there are, what kind of difficulties there are with products liability applying to this.
If we're wrong about what would be superior about our approach, there'll be time to look at it. But once you have 25% of the vehicles on the road as autonomous vehicles, that ought to give us a considerable time for thought.
Now as you say, Risa, we're going to have a long period when there's a mix of vehicles on the road and, therefore, a mix of accidents. We're still going to have accidents between conventional vehicles. And the tort system as we know it is going to apply to those because they're mostly driver error accidents. And we will increasingly have accidents between two autonomous vehicles. And our system would apply in straightforward fashion to that.
The difficult problem is what to do about accidents between conventional vehicles and--
RISA GOLUBOFF: Autonomous.
KENNETH ABRAHAM: --autonomous vehicles. And one approach would just be to say, well, we're going to apply the tort system to that, which means that if somebody is in an autonomous vehicle and is injured by the fault of a driver of a conventional vehicle, he or she could bring a tort suit.
We think that doesn't make a lot of sense because one of the things that autonomous vehicles ought to do is avoid accidents with negligently-driven conventional vehicles.
LESLIE KENDRICK: Right.
KENNETH ABRAHAM: That's part of the point.
RISA GOLUBOFF: Right, they were supposed to be better.
KENNETH ABRAHAM: Right. So we would have our approach take away the tort suit that the party in an autonomous vehicle would have against conventional vehicles, and have the manufacturer of autonomous vehicles have an incentive to develop the software to the point where it could avoid an increasingly high percentage of collisions with negligently-driven vehicles.
LESLIE KENDRICK: So I have a question about that.
KENNETH ABRAHAM: Yeah.
LESLIE KENDRICK: So the scenario that you just set up is autonomous vehicle and negligently-driven conventional vehicle.
KENNETH ABRAHAM: Right. That's one half of the problem.
LESLIE KENDRICK: Right. So have the autonomous vehicle internalize these costs because part of what it's supposed to be doing is anticipating negligence on the part of other drivers and avoiding those accidents. Same thing I take it, if you have autonomous vehicle and negligent pedestrian, say, who steps out in front of the vehicle, and the vehicle is supposed to see that, it's supposed to slow down and make every reasonable effort to brake, right?
So are you saying that in those types of situations, the only form of liability that would occur under your regime is liability on the autonomous vehicle? I guess another way to ask my question is you say enterprise responsibility for accidents arising out of the operation of an autonomous vehicle. But how do you define what are accidents that arose out of the operation of the autonomous vehicle as opposed to accidents that arose for some other reason, because someone else was doing something wrong?
KENNETH ABRAHAM: Well, mostly it'll be if the vehicle's moving.
LESLIE KENDRICK: So--
KENNETH ABRAHAM: We think--
LESLIE KENDRICK: --if it's moving, it's involved.
KENNETH ABRAHAM: --a moving vehicle is an operating vehicle. And accidents arise out of the operation even if it's somebody else's fault. If the vehicle hadn't been moving, the accident wouldn't have occurred.
We do make exceptions for some rare sorts of accidents that will occur. So, for example, the example we give in the article is if I'm parked at a light or moving two miles an hour and a motorcycle runs into me from behind, that's not an injury arising out of the operation of the vehicle.
The courts are sometimes going to have to draw lines about that. But we don't think that's going to be a very high percentage of the cases. We also think that even when a vehicle's moving, if there's vandalism or terrorism that causes an injury, that would be an exception.
So if somebody drops a cinder block off an overpass onto an autonomous vehicle, you ought to be able to sue the party who drops a cinder block. If somebody hacks into the software, you ought to be able to sue the hacker if you can find him. But that's going to be a tiny percentage of all accidents involving autonomous vehicles.
RISA GOLUBOFF: Do you worry about the moral implications for people and their negligent behavior if they're never liable for it?
LESLIE KENDRICK: There's a moral component to this. And there's a deterrence component to this. Right? So the deterrence tort people would say there's a moral hazard problem here if people who are acting negligently don't have to internalize the costs of being negligent. So it seems like you're saying this is all on the autonomous vehicles. They have to anticipate when people are going to behave negligently. Are you concerned about that?
KENNETH ABRAHAM: Well, that, no. Not really. This is a refrain that we've heard for more than 50 years in connection with auto accidents. Most auto accidents now, and most accidents of the sort you're talking about where a pedestrian steps into the road, are caused by inadvertence. They're the kind of careless mistakes that everybody makes. And while we in tort might label them as negligence, let's face it. Everybody's negligent. There is an old Department of Transportation study that found that the average driver commits an act of negligent driving every two miles.
So the idea that somehow the threat of tort liability is going to deter people from doing that-- it's practically impossible. You're careless from time to time. And in any event, anybody in a vehicle who's got control of things has, with or without compensation, with or without tort liability, an incentive of self-protection to be as careful as it makes sense to be.
Now you can gin up hypotheticals like the pedestrian that isn't looking. But that's another case of inadvertence. The pedestrian isn't going to have any less of an incentive to cross the street carefully if the pedestrian's cause of action is abolished. People are either careful crossing the street or they're not, because they don't want to get struck by cars, not because of what the legal system does.
RISA GOLUBOFF: So the other half is what happens when there's a conventional vehicle, autonomous vehicle accident caused by the autonomous vehicle.
KENNETH ABRAHAM: Right. So that seems to me to be the most difficult problem we face. And there are several alternatives. And we don't come down strongly in favor of one. One is just to leave the tort system in place. Now the autonomous vehicle's not going to cause an accident by virtue of the driver. So if there's any lawsuit by the person in the conventional vehicle, it would have to be against the manufacturer. And you'd have to prove that there was a defect with all the difficulties that I've described to you.
An alternative is simply to give the driver of the conventional vehicle access to the MER, Manufacturer Enterprise Responsibility that the autonomous vehicle has. So if I strike a conventional vehicle, just as somebody who was a passenger in my vehicle gets compensated by my manufacturer, so the driver of the conventional vehicle would be compensated by my manufacturer. So those are the main alternatives.
RISA GOLUBOFF: What makes that second alternative challenging? Why is it different if it's the driver of the conventional vehicle?
KENNETH ABRAHAM: Well, it's challenging in two ways. First of all, the driver of the conventional vehicle might have been negligent. And then the question is, well, you're going to give that driver access to my compensation rights even though he was at fault? And so as a matter of policy, some people would say, well, he's at fault. There's a moral concern there. I don't buy that. But I understand it. That's problem number one.
Problem number two is if that's the way it works, then the cost of autonomous vehicles is going to be higher than it would be otherwise. And the people who buy autonomous vehicles are going to, in effect, be paying for protection not only for themselves and their family members and their passengers, but also for the drivers of conventional vehicles that aren't contributing to the system. They're not paying anything for it.
Well, once a large percentage of vehicles are autonomous vehicles, that'll be less of a problem. In the meantime, I think there might be some kind of transition tax rules that give the buyers of autonomous vehicles a tax break proportional to the probability that they'll be involved in accidents with conventional vehicles. The price of their vehicle actually includes the cost of compensating totally unrelated third parties, and people will say, why the hell should I have to pay for that in the price of my vehicle? Well, let's give you a little tax break for the first 10 years. So in effect, you're made whole for that.
LESLIE KENDRICK: So that gets us to the big picture of the costs and benefits of manufacturer enterprise responsibility because it seems like you've said on the pro side of this, it cuts down on administrative costs involved in lawsuits where someone's been injured and an autonomous vehicle's involved, rather than having to figure out what's a defect and what isn't. Rather than having to identify which party is the responsible party, you have one party and you have one liability regime.
And it seems like you've also said that maybe it's not just administrative costs, but also that manufacturer enterprise responsibility puts the costs of accidents where they ought to be-- that part of the autonomous vehicles' job is to try to anticipate negligent behavior out there on the road and avoid it. And also, for other drivers and pedestrians, what we call fault is often not fault in a kind of moral sense, but a form of inadvertence that is actually very hard to deter through negligence liability or liability of any kind.
So I've heard you say both that administratively this makes sense in terms of administrative costs, and that it might also be where the liability ought to be. But it sounds like there are also costs to it. And you just adverted to one, which is that you might worry that this is imposing too many costs on manufacturers of autonomous vehicles and, particularly in the nascent period of this, costs that would make this technology prohibitively expensive.
Are there other things we should add to this cost benefit analysis? What are the cost and benefits of your regime?
KENNETH ABRAHAM: Well, the point of criticism will be the same kind of generic point of criticism that you get whenever there's a move that limits freewheeling tort liability. And that is that people will be deprived of their right to go to court and prove that somebody did something wrong and get unlimited damages for it. That's a matter of principle I think more than costs and benefits.
In this country, we really value the right to put authority on trial. And that's part of our populism. And what will happen here over time is that instead of lawsuits that pin responsibility on enterprises or individuals, we'll have an administrative system in which you file a claim and you get a check from a faceless and nameless fund that is formed by assessing surcharges on the manufacturers of vehicles.
So in some ways, it's a matter not of costs and benefits in the literal sense, but how much do we want to retain lawsuits? I'm a fan of lawsuits in lots of contexts. I just don't think that in connection with auto liability-- where we're going to have drivers taken out of the picture and fault taken out of the picture and defect being an esoteric concern-- doing away with the lawsuits is going to be much of a sacrifice, especially since, in practice, suits don't go to court. They're settled.
We have, as you know, Leslie, we have the vanishing jury trial in this country. 98% of all auto liability suits get settled. It's true that what happens in the occasional trial helps determine the way in which settlement occurs. But it's settlement, nonetheless. Settlement is an administrative compensation scheme too in practice for most people.
So I think it's undoubtedly some sacrifice not to give people the unlimited right to bring a lawsuit and to put Ford or Toyota or somebody else on trial. But for the vast majority of people, having to go through the tort system is going to be more onerous than our proposal.
RISA GOLUBOFF: It sounds like the trade-off is both on the level of your day in court versus some bureaucratic administrative system. And you don't think there's a huge trade-off there given the 98% settlement rate. But the second trade-off is unlimited damages, unlimited compensation, which you're trading off against more predictability through the bureaucratic system. Right? You're not going to get all the money in the world that you might want.
KENNETH ABRAHAM: Right.
RISA GOLUBOFF: But you'll get some predictable--
KENNETH ABRAHAM: So--
RISA GOLUBOFF: --amount.
KENNETH ABRAHAM: --we haven't talked about the dollar part of compensation in our proposal. And what we propose is that there be a million dollars available for out-of-pocket expenses, like medical expenses and lost wages, and specified amounts payable for pain and suffering associated with particular injuries, such as loss of a limb or loss of an eye or loss of the use of a finger-- and that those put limits on what can be recovered.
Most people don't get anything like that amount of recovery. The occasional-- very occasional-- very seriously injured victim would not get as much under our system as that person might get in the once-in-a-lifetime sort of tort trial. But it's worth noting that in nearly half the states, there are already legislated caps on the amount of pain and suffering damages that people can recover, so that in half the states, they'd be limited anyway.
$1 million is not going to compensate the most seriously injured victims for their lost wages over a lifetime. But we already have very widespread health insurance. So it's hard to see how we need more than a million dollars in additional health insurance for people.
And people should be able to buy insurance against the losses that they might incur in an auto accident that wouldn't be compensated by this. You can buy life insurance now. You can buy disability insurance now. People ought to be allowed to buy excess MER to protect them against the possibility that they'll have $5 million worth of damages rather than $1 million. That's our approach.
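As a rough illustration of how the compensation schedule might work, here is a toy calculation. The $1 million out-of-pocket cap comes from the proposal as Ken describes it; the scheduled pain-and-suffering amounts are invented for the example.

```python
# Toy valuation of a claim under the proposed MER schedule: out-of-pocket
# losses (medical expenses, lost wages) capped at $1 million, plus fixed
# scheduled amounts for specified injuries. Schedule values are invented.

OUT_OF_POCKET_CAP = 1_000_000
PAIN_AND_SUFFERING_SCHEDULE = {
    "loss_of_limb": 250_000,
    "loss_of_eye": 150_000,
    "loss_of_finger_use": 25_000,
}

def mer_award(medical: float, lost_wages: float, scheduled_injuries: list) -> float:
    """Economic losses up to the cap, plus scheduled pain-and-suffering amounts."""
    economic = min(medical + lost_wages, OUT_OF_POCKET_CAP)
    pain_and_suffering = sum(PAIN_AND_SUFFERING_SCHEDULE[i] for i in scheduled_injuries)
    return economic + pain_and_suffering

# $300k medical + $900k lost wages is capped at $1M; add the scheduled
# amount for loss of a limb for a $1.25M award.
print(mer_award(300_000, 900_000, ["loss_of_limb"]))   # 1250000
```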
RISA GOLUBOFF: So you've been talking about other forms of insurance and how people can supplement the MER with other forms of insurance. You're obviously an insurance expert, the most important insurance expert we have I think. But what happens to car insurance in this new world?
KENNETH ABRAHAM: Well, car insurance, if you mean driver's liability insurance, is going to disappear because there won't be drivers. And drivers won't need it. It might be that an owner of a vehicle has to buy a little bit of insurance to protect against certain kinds of liabilities. I don't really know, for example, how level 5 vehicles are going to deal with getting a vehicle onto a hydraulic lift at a gas station. So there might have to be a little bit of insurance. But mostly, conventional auto insurance is going to go away because it's driver's insurance. And we're not going to have drivers.
It might be that the great auto liability insurers find markets for other kinds of things. I'm not sure what it might be. I don't think State Farm is going to disappear. I don't think Geico is going to disappear. But they're certainly going to have to shift from one product line to another because they're not going to be selling much auto liability insurance in the future.
LESLIE KENDRICK: We've been talking about some of the pros and cons of this approach. And I want to suggest a potential con. So we've talked about what, if anything, this does to the incentives of potentially negligent drivers or pedestrians. But let's talk about the incentives for the vehicle manufacturers and software designers.
So one knock on strict liability-- a common criticism that comes up for all forms of strict liability-- is that because the strictly liable party is going to be liable no matter what, it reduces their incentives to actually act with care. Because even if they act with care, even if they act with reasonable care, even with utmost care, if something still goes wrong, they're going to be liable for that.
So let's think about software designers and the choices they have to make with designing software. So there's been a lot of talk about how they're going to have to decide if there's an impending accident, the software has to make decisions about, do you let this car collide with the car in front of it? Do you direct the car to pull over to the side of the road? What if there's a pedestrian on the sidewalk that you're going to hit? How do you make decisions about life trade-offs, hitting the car in front of you, hitting this innocent pedestrian when maybe that might kill them but it might be better for the driver of the car, might be less dangerous for them?
I guess I'm just interested in how that interfaces with the strict liability idea because it sounds like under your regime, the manufacturer is going to be liable no matter what decision the software designers made about that question. And we, as a society, might think there are better and worse answers to the question of how vehicles should behave in those types of situations. But it sounds like they're going to be liable no matter what.
KENNETH ABRAHAM: Well, I don't buy the idea that strict liability reduces incentives.
LESLIE KENDRICK: I don't either, but--
KENNETH ABRAHAM: I know you were just paraphrasing other people. The manufacturer of the car-- anybody who's strictly liable-- always has an incentive to compare the costs of avoiding accidents with the costs of having accidents and paying for them. And so if it's cheaper to have an accident than avoid it, the manufacturer will choose to have the accident.
That's true in negligence as well. The manufacturer who is held liable for negligence is still going to take a look at the universe of accidents it's held liable for and decide which ones it's worth avoiding and which ones it's worth paying for.
So the manufacturer is always going to have an incentive to keep looking at ways of designing the software in a way that reduces the amount of liability that it incurs. And it's always going to have to compare the cost of additional safety in terms of software or software research with the savings it gets on liability.
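Ken's point can be restated as a simple optimization: under strict liability, the manufacturer minimizes the sum of safety spending and expected liability payouts. A sketch, with an invented liability curve:

```python
# The incentive under strict liability, as a toy optimization: pick the
# safety investment that minimizes total cost = safety spending + expected
# liability. The liability curve below is invented for illustration.

def total_cost(safety_spend_m: float) -> float:
    """Expected liability falls with safety spending, with diminishing returns."""
    baseline_liability_m = 400.0                         # $M/year with no investment
    expected_liability_m = baseline_liability_m / (1.0 + safety_spend_m / 50.0)
    return safety_spend_m + expected_liability_m

# Search a grid of spending levels for the cost minimizer. Past that point,
# in Ken's phrase, it is cheaper to have the accident than to prevent it.
best_spend = min(range(0, 501, 10), key=total_cost)
print(best_spend, round(total_cost(best_spend), 1))      # 90 232.9
```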
So this system in that respect is going to, I think, do a better job rather than a worse job as far as this choice, which in philosophy circles is referred to as the trolley problem, echoing an article in which the question was, well, does this trolley driver swerve to avoid one person and take a risk of hitting two?
In the absence of any regulatory directives, the system would give the manufacturer an incentive to design the software in a way that minimized its liability. It's going to be very hard, I think, in the foreseeable future for a manufacturer to design software if it takes into account the costs of the kinds of people it's going to hit in an instant. You're not going to be able to know when you swerved to avoid one person what income the person you're going to hit is going to lose. So I think that's in some ways a red herring.
But it's also possible for the federal regulators to direct manufacturers to design their software to make particular kinds of choices about discrete problems that are going to arise. If the software is always having the vehicle turn off the road and run onto the sidewalk where there are a lot of pedestrians, then there can be a directive to design the software so that it doesn't make that choice.
LESLIE KENDRICK: So I do think some of this is pretty realistic, particularly when you come to manufacturers having an incentive to protect buyers and buyers' passengers, protecting the people in the vehicle. They might privilege the people in the vehicle over people who are outside the vehicle. And we as a society might think that's not societally optimal.
But part of what I hear you saying is we don't necessarily have to rely on the liability regime to address that. We could have regulations that address those types of problems if they arise.
KENNETH ABRAHAM: We could. But it's not obvious to me that they're going to privilege the people in the vehicle. If you have one person in the vehicle and 42 pedestrians nearby, it doesn't seem likely to me that the software is going to be designed so that the manufacturer pays $50 million as opposed to $1 million because it protected the owner of the vehicle. Quite the contrary.
LESLIE KENDRICK: So part of what you're saying there is your regime would have some incentive effects on how they design software.
KENNETH ABRAHAM: It should. It's only if we don't like the sort of cold dollar choices that the manufacturer makes that we'd need some regulation to say, well, don't be so good at comparing costs and benefits. In the following kinds of situations, we don't want you to compare costs and benefits. We want you to write the software so that it makes a particular decision.
RISA GOLUBOFF: Thanks so much for coming Ken Abraham. It's been a pleasure to talk to you.
KENNETH ABRAHAM: My pleasure too.
[MUSIC PLAYING]
RISA GOLUBOFF: So fascinating, Leslie. I mean, I really enjoyed our field trip. But I also just really enjoyed the conversation and thinking about where is this going to take us and what's going to happen to law and regulation.
LESLIE KENDRICK: It's super-interesting. And it's an interesting kind of symbiotic relationship between law and technology that we see here.
RISA GOLUBOFF: Absolutely. And I think we've seen it in a number of our episodes. And there are themes that keep coming up. So once technology changes, what happens to the people who used to do the jobs that are now being automated, whether that's blockchain or autonomous vehicles? But one of the things that strikes me, and something I've thought a lot about in my own scholarship, is what's the relationship between technology and law or more broadly, between society and law?
And I feel like one of the lessons that keeps coming out of these episodes is the technology changes and then the law has to catch up. But one of the things I think a lot about in my scholarship is there are all these legal regimes that structure how the technology changes in the first place and what technology can change and what directions it goes in. And so it's much more cyclical and dynamic than it might seem given that we're starting in these episodes with the technology and then moving to the law.
LESLIE KENDRICK: That seems right, that these technologies develop within a particular regulatory and legal regime. And that constrains them in various ways. But then they bring up new questions where the road runs out, so to speak. And the law and the regulatory regimes haven't dealt with this question yet. And then the law side has to build new road for the whole project to keep going and is building that kind of in response to questions that have come up.
Which is one place where the title of our podcast, Common Law, really comes into play here because torts is a kind of classic, common law subject that always was the product of judges hearing cases and deciding on what the law was based on the problems that were brought to the court. And that's exactly the type of dynamic that we're seeing here.
You've got all of the case law that's come before that informs how-- we hope informs how people behave. And then you've got new questions that come and get adjudicated.
RISA GOLUBOFF: And aren't we lucky that we have Ken Abraham who knows the old regime--
LESLIE KENDRICK: Right!
RISA GOLUBOFF: --and then is thinking about the hard questions for the new regime. I have to just comment, pun intended or not intended, on the road runs out [LAUGHS].
LESLIE KENDRICK: Totally intended. Nothing but road metaphors in my head.
RISA GOLUBOFF: I love the road metaphors. They're great. Yeah, you know. We're a nation of the open road. So--
LESLIE KENDRICK: We are.
RISA GOLUBOFF: --people are attached to their cars. It'll be interesting to see whether having autonomous vehicles makes people less attached and you do have the sort of autonomous vehicles wandering about and people getting in them, a future of Uber kind of thing, or whether people remain as attached, especially in this country, to their particular car even if they're no longer the drivers.
LESLIE KENDRICK: Yeah. I think that is really interesting. And I've heard that there are people who are interested in organizing to preserve-- to fight to preserve-- their kind of individual right to drive. And they feel, number one, that it is a right that they have, and number two, that it's under threat from the development of autonomous vehicles.
RISA GOLUBOFF: So it's interesting, right? Because from the tort liability perspective and a safety perspective, the elimination of the human driver looks like progress. We'll have fewer accidents-- remember Ken's very striking statistics about what causes accidents. We'll have better road safety. But that overlooks the personal interests that people have in being a driver and in being able to drive.
And maybe there is such a right. I mean, we don't know whether there's a right until people start to articulate it. And then we can all discuss it and think about whether it looks like other rights we have, whether it's something we would protect, how would we protect it. Is it a constitutional right, a non-constitutional right? Who decides that? Does it end up in court? Does it end up in the court of popular opinion? And I think all those questions remain ahead of us.
LESLIE KENDRICK: Yeah. No. I think that's right. And speaking as both a driver and a potential victim of drivers, the idea that people have a right to do something less safely than could be accomplished through technology strikes me as on first glance somewhat implausible. I could imagine airline pilots asserting that the real way to fly was to do a whole bunch of stuff manually that now gets done through technology. And as a passenger on an aircraft, I am so glad that all of that stuff is automated and think it's really good.
RISA GOLUBOFF: The recent Boeing crashes would suggest otherwise. Fair enough, right?
LESLIE KENDRICK: So I think there are always going to end up being additional checks that have to happen to ensure the technology is working appropriately. And I think we're going to see that in the autonomous vehicle context too, which is why some of these liability questions really matter.
But I think a lot of the time, we actually think-- within the torts context, we think about driving coming along with a set of duties, not claims that you can make to be able to drive, but duties that you undertake when you do drive, duties to watch the road appropriately and to take all necessary safety precautions. And the accident rate suggests that we're not very good at doing that.
So if we're thinking about it in terms of rights, I think we should also think about our capability of fulfilling all the duties that come along with it as well.
RISA GOLUBOFF: Absolutely.
[MUSIC PLAYING]
That's all for this episode of the show. We hope you enjoyed it. And if you did, please do us the favor of reviewing it on Apple Podcasts or wherever you listen. You'll find more resources about today's topic in our show notes, as well as photos and a video from our visit to the autonomous vehicle test track that you heard at the beginning of the show.
LESLIE KENDRICK: Definitely check out the video.
[LAUGHTER]
At commonlawpodcast.com, you can subscribe to our show. And all of our past episodes are posted there also.
RISA GOLUBOFF: Common Law is not yet a level 5 podcast, meaning that there are producers still with us here in the driver's seat. They include Tyler Ambrose, Tony Field, and Mary Wood.
LESLIE KENDRICK: We'll be back with you in two weeks with another episode of Common Law. Until then, drive safely and thanks for listening.
[MUSIC PLAYING]