
S3 E2: The Bias Baked Into Algorithms

UVA Law professor Deborah Hellman discusses her work on how algorithms can compound injustice, and the evolution of her theory on discrimination.

Show Notes: The Bias Baked Into Algorithms

Deborah Hellman

Deborah Hellman joined the Law School in 2012 after serving on the faculty of the University of Maryland School of Law since 1994. She is the director of UVA Law’s Center for Law & Philosophy.

There are two main strands to Hellman’s work. The first focuses on equal protection law and its philosophical justification. She is the author of “When Is Discrimination Wrong?” (Harvard University Press, 2008), co-editor of “The Philosophical Foundations of Discrimination Law” (Oxford University Press, 2013) and the author of several articles related to equal protection. The second strand focuses on the relationship between money and legal rights, and includes articles on campaign finance law, bribery and corruption, each of which explores and challenges the normative foundations of current doctrine. Her article “A Theory of Bribery” won the 2019 Fred Berger Memorial Prize (for philosophy of law) from the American Philosophical Association.

In 2020 she won the Association of American Law Schools Section on Jurisprudence Article Award for “Measuring Algorithmic Fairness,” which was published in the Virginia Law Review. Her other article on algorithms, “Big Data and Compounding Injustice,” is forthcoming in the Journal of Moral Philosophy. 

Hellman also writes about the obligations of professional roles, especially in the context of clinical medical research. She teaches constitutional law, legal theory and contracts, as well as advanced classes and seminars on questions related to these fields.

She was a fellow at the Woodrow Wilson International Center for Scholars (2005-06) and the Eugene P. Beard Faculty Fellow in Ethics at the Edmond J. Safra Center for Ethics at Harvard University (2004-05). She was awarded a National Endowment for the Humanities Fellowship for University Teachers in 1999.


Transcript

BRING MUSIC IN

Risa Goluboff: You can’t see them. You can’t shake their hand. You may not even know they’re there. But they’re making more and more decisions on your behalf. 

Leslie Kendrick: Will you get the loan you applied for? Will you get that apartment you covet? Will you get that job interview? 

Risa Goluboff: Will you get pulled out of line for extra screening at the airport? If you’re arrested, will you be refused bail or parole?  

Leslie Kendrick: Algorithms may determine the answers to all of these questions.

Risa Goluboff: So when it comes to ensuring equity, will algorithms help or hurt … and what role can the law play? That’s what we’re discussing on this episode of Common Law.         

FADE MUSIC OUT

THEME MUSIC IN

Risa Goluboff: Welcome back to Common Law, a podcast from the University of Virginia School of Law. I'm Risa Goluboff, the Dean.

Leslie Kendrick: And I'm Leslie Kendrick, the Vice Dean. In this season of Common Law, we're diving into the issue of law and equity.

Risa Goluboff: We started the season with a really thought-provoking conversation with Professor Randall Kennedy of Harvard Law School, who spoke to us so eloquently about racial justice and the idea of promised lands. 

Randall Kennedy: In America, the law has been on the side of oppression, that’s true. On the other hand, the law has also been on the side of liberation. And that’s why I tell law students they have an important role to play in all this.

Risa Goluboff: If you missed that episode, we hope you’ll go back and listen. In this episode, we’re going to talk about equity and discrimination. 

THEME MUSIC OUT

Leslie Kendrick: As we talk about what makes society equitable this season, it also helps to consider when it is NOT. So today we'll be speaking to an expert on discrimination and the law. Back in 2008, Deborah Hellman came out with her seminal book on discrimination, intriguingly titled When Is Discrimination Wrong?

Risa Goluboff: Debbie's now our colleague and a professor here at UVA, where she teaches classes on constitutional law and discrimination theory, among others. But these days she's advanced her ideas by thinking about the real world ways that discrimination can be baked into the algorithms that power so much of our technology. Debbie, thanks so much for joining us today.

Deborah Hellman: Thanks so much for having me. What a pleasure.

Leslie Kendrick: So, Debbie, before we get to your recent article on algorithms, I want to ask about your earlier work on discrimination, because rumor has it that your opinion has changed since you wrote your book When Is Discrimination Wrong? I know at the time, back in 2008, there wasn't actually a whole lot of literature on discrimination and what makes it wrong. So, first of all, why were you interested in answering that question?  

Deborah Hellman: Well, precisely for the reason that there wasn't much literature on the topic. I was writing and teaching about constitutional law, and in particular equal protection, and I thought equal protection law is kind of the law's theory of when discrimination is wrong. But nobody was really thinking about the moral question that underlies it, and I wanted to get into that more deeply.

Risa Goluboff: Yeah, on the one hand it seems like kind of an odd question to ask ‘when is discrimination wrong,’ because ‘discrimination’ on its face sounds like it’s ALWAYS wrong. But you say it’s not so simple. How come?

Deborah Hellman: So I think we can all identify instances where laws and policies distinguish between people on the basis of traits, where we all think, well, that's perfectly okay. And other cases where we think that's clearly impermissible. And then there are a range of cases that people disagree about. So the perfectly permissible might be, you have to be 16 to drive. Nobody thinks that's wrongful discrimination. 

Risa Goluboff: Sure.

Deborah Hellman: Then there are cases we think are clearly impermissible: think about race discrimination in the Jim Crow South. So I wanted a theory that accorded with our moral intuitions and that had something helpful to say about the cases that we disagree about. 

MUSIC FADES IN

Deborah Hellman: So my view about that is when the law or policy distinguishes in a way that's demeaning, then it constitutes wrongful discrimination.

Risa Goluboff: So how do we know when something is demeaning? I mean, that sounds like a pretty subjective term. So how would you define that? 

Deborah Hellman: Being demeaning has two components. The first is what I call an expressive component. Does it express that the other person is of lesser moral status? So I don't think that when we tell five-year-olds that they can't drive, that it expresses that they're lesser human beings. It just expresses that they probably don't have the skill and maturity to drive safely. So one part is: what is expressed by the differentiation. And the second is whether the entity has the degree of power to change the social status of the person affected by the law or policy.

FADE MUSIC OUT

Risa Goluboff: So when we're talking about equal protection, or in Leslie's area, free speech, we often talk about intent and effects. That seems like a slightly different kind of language and set of concepts than you're talking about, but does there need to be intent for the expressiveness, or does it need to be subjectively felt?

Deborah Hellman: Yeah. Great question. So my short answer is: it's not what you intended, nor the effect, but what you're doing in differentiating. 

BRING MUSIC IN

Deborah Hellman: So when Rosa Parks is asked to get to the back of the bus, her subjective feeling is not related to whether I'm going to conclude it's discrimination. It's discrimination because objectively the social meaning of the action of asking her to get to the back of the bus is one that expresses denigration and the state or the bus driver has the capacity to lower her social status. It doesn't turn on how she subjectively experiences it.

Leslie Kendrick: So Debbie, your book came out over a decade ago, and your work on discrimination theory and these kinds of questions has really inspired a whole field. I'm curious: over that time, has your thinking changed at all, and how has it informed the way you think about these questions now?

FADE MUSIC OUT

Deborah Hellman: It's definitely affected my thinking about these questions.  When I was trying to think about what makes discrimination wrong, I was just assuming implicitly without really examining this, that there would be kind of one answer — it's wrong because it, whatever, and I came up with this idea of demeaning. But now I've changed my view about that. I think actually a pluralist account is probably a better account, and so I would supplement my view with the idea of compounding injustice, that is: discrimination is wrong when it takes a situation which is already unjust and makes it worse.

Leslie Kendrick: Okay, so what you’re saying is that it's wrongful discrimination if it's both demeaning AND it compounds injustice. So can you give us an example of what compounding injustice would mean?

BRING MUSIC IN

Deborah Hellman: Sure. So imagine that a life insurer is trying to decide what rates to charge. Younger people get lower rates, older people get higher rates, because they're more likely to die during the policy period and therefore to collect from the insurance pool. And on that basis, the insurer charges higher rates to victims of domestic violence. Because if you're being beaten up, you're more likely to be killed and therefore your heirs are gonna collect the life insurance. I would describe that as compounding a prior injustice. 

BRING MUSIC UP, THEN UNDER

Deborah Hellman: And it has two elements. One is that the insurer is using the victim's status as a victim of injustice as its reason for action. And secondly, it's taking that injustice and making it worse. 

BRING MUSIC UP, THEN UNDER

Deborah Hellman: So that to me is an example of compounding prior injustice.

FADE MUSIC OUT

Risa Goluboff: So that makes me think about this distinction that we've been talking about on the show between equity and equality. I think you're going to push back on this, but I'm a lumper, not a splitter, as we've talked about before, so I'm trying to lump what you just described into our equity/equality analysis. So is it fair to say that charging higher insurance rates there would be equality, right? Treating likes alike: they are like other people at higher risk of dying, and that would lead to charging them higher premiums. But equity would say you don't want to compound injustice, and so you don't want to treat them like others, because the way they got to that place is bad, and we don't want to continue that. I'm curious if that comports with what you think or if you would describe it differently.

Deborah Hellman: I agree with you, but there is another meaning of equality, which could be, rather than treating people equally, treating them AS equals, as if they matter equally. And I think that's what you're getting at, maybe, with equity, but in your way of conceiving of it, I would agree.

Risa Goluboff: Could you just say a little more of what you mean by treating people as equals?

Deborah Hellman: What equality demands to me is to treat people AS people of equal moral worth which sometimes requires treating them differently. And that is, I think what's happening in the battered women example. 

BRING MUSIC IN

Deborah Hellman: Let me just say also that some of the listeners may be wondering, well, what about the insurance company? It can't go out of business or why does it have to bear this loss? And I want to emphasize that I think the fact that an action will compound injustice provides a reason not to do it, but that reason can be outweighed by other reasons, so I don't think it's always dispositive. And the second thing is in that context in particular, I think it provides a good reason to actually adopt a law that forbids insurance companies from charging higher rates to domestic abuse victims. So that one insurance company doesn't lose competitively to those who are less attuned to what equity demands.

Risa Goluboff: This is really interesting and you know, one of the things we talk about is law as a tool for oppression versus law as a tool for equity, and it sounds like one of the things you’re saying is it really can be a tool for equity, which is something, of course, that we love to hear when we’re having this conversation.

Deborah Hellman: Yeah.

FADE MUSIC OUT

Leslie Kendrick: Yeah, awesome. So some of your recent work has gone from the level of theory to applying theory to particular phenomena in the world. And in particular, you’ve talked about algorithms and how we're now getting kind of automated discrimination in lots of different fields, and how that might compound injustice. So if you could tell us a little bit about this, and please recognize we're beginners and maybe a lot of our listeners are beginners when it comes to some of the technology that might be involved.

Deborah Hellman: Right. Algorithms are used in a lot of contexts. Of course, algorithms were used without fancy machine learning and artificial intelligence before those were possible; people had algorithms in their heads, which just meant a decision rule of some kind. But when we speak about algorithms today, we probably have in mind some fancy machine learning tool being used. And those are used in any context in which there are complex decisions and limited time. 

Risa Goluboff: Right, and as we mentioned at the start of the show, algorithms can help determine all sorts of crucial things like what you pay on your mortgage, what medications you get prescribed, whether you get pulled out of line by TSA at the airport for additional screening …   

Leslie Kendrick: Yeah, it’s clear that they’re everywhere, but I have to say Debbie, I wouldn’t necessarily have ever expected you to be an algorithm scholar, so why have they become so interesting to you?

Deborah Hellman: They're fabulously interesting because we're treating people differently on the basis of different kinds of traits. I also think they're fabulously interesting because, as Risa mentioned, constitutional anti-discrimination law focuses a lot on intent, and what's interesting about the way algorithms work is that they either lack that human intent or are at least very attenuated from it. 

BRING MUSIC IN HERE

Deborah Hellman: Just for your listeners who are less familiar, let's take an employer. You're interested in hiring employees and you want to sort through all those applicants, so you think, well, I want to look at who's been a successful employee in the past and tell the algorithm, “Here, these are the people who are successful employees.” And so the machine learning tool learns what the traits of great employees are and then uses those traits to develop an algorithm to select new employees. 

BRING MUSIC UP, THEN UNDER

Deborah Hellman: And that kind of a process raises two important ways that injustice can infect the machine learning process.

FADE MUSIC OUT

Risa Goluboff: Tell us about those two ways. So what are they?

Deborah Hellman: So I label them accuracy affecting injustice and non-accuracy affecting injustice. So let me take the accuracy affecting injustice first. Suppose that the people who are labeled “good employees” are disproportionately men and the reason they're disproportionately men is that the managers who've been writing performance reviews are biased against women and so they give them less good evaluations, even when they're equally good. So if that were to be the case, then the machine learning tool is going to kind of “learn” that men are better employees than women. And what the algorithm is doing is it's just carrying that bias forward. “Bias in, bias out,” people would say; it's just automating the bias.

Leslie Kendrick: Okay, so let me just see if I’m following you. So, in this instance, the algorithm itself is built on BAD information. And so you could say it's compounding injustice because women who were already being treated unfairly by their bosses now won’t get hired or maybe won’t get promoted because that bad information is baked into the algorithm. 

Deborah Hellman: Yes.

Leslie Kendrick: So that's what you’re calling “accuracy affecting injustice,” right, because it’s the ACCURACY of the information that’s in question. 

Deborah Hellman: Right.
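To make the “bias in, bias out” point concrete, here is a minimal sketch, not from the episode, of how a hiring model trained on manager evaluations that are skewed against women carries that skew forward. Every variable and number below is invented purely for illustration.

```python
# Minimal, hypothetical sketch of accuracy affecting injustice:
# the training labels themselves encode the managers' bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# True ability is identically distributed for men and women.
is_woman = rng.integers(0, 2, n)           # 0 = man, 1 = woman
ability = rng.normal(0.0, 1.0, n)

# Past evaluations: equally able women get docked a point, so the
# "good employee" labels fed to the model are already biased.
evaluation = ability - 1.0 * is_woman + rng.normal(0.0, 0.5, n)
good_employee = (evaluation > 0).astype(int)

# Train on the biased labels, with gender available as a feature.
X = np.column_stack([ability, is_woman])
model = LogisticRegression().fit(X, good_employee)

# Two applicants with identical ability, differing only in gender.
applicants = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(applicants)[:, 1]
print(f"P(predicted good | man)   = {probs[0]:.2f}")
print(f"P(predicted good | woman) = {probs[1]:.2f}")  # lower: the bias is carried forward
```

Even though ability is identical across groups by construction, the model scores the woman lower, because the labels it learned from already reflect the biased evaluations.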

Leslie Kendrick: Now you said there was another way -- “NON-accuracy affecting injustice." So what does that look like?

Deborah Hellman: So imagine that injustice in the world has real effects. Let's say racial minorities have disproportionately gone to schools that didn't have their fair share of resources, such that they in fact get a less good education. Then it may be the case that racial minorities are disproportionately among the people who are less good employees. So the algorithm is accurately picking up this fact; it's not that the algorithm has an error in it. That's why I call it non-accuracy affecting injustice. But we're still compounding that prior injustice by picking out the people who are stronger, when they're stronger because they have not been subject to injustice, or the other people are weaker because they have been subject to injustice. 
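Here is a companion sketch, again not from the episode and with invented numbers, of non-accuracy affecting injustice: the labels are honest and the model makes no error, but the feature it relies on is itself the product of an upstream injustice, so selecting on it compounds that injustice.

```python
# Minimal, hypothetical sketch of non-accuracy affecting injustice:
# the prediction is "accurate," but the predictive feature reflects
# an unjust distribution of school resources. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10000

group_b = rng.integers(0, 2, n)                             # 1 = group given under-resourced schools
school_resources = rng.normal(1.0 - 0.8 * group_b, 0.5, n)  # the upstream injustice
skill = 0.7 * school_resources + rng.normal(0.0, 0.5, n)    # resources really do build skill

# A perfectly "accurate" rule: hire the top quarter by skill.
threshold = np.quantile(skill, 0.75)
hired = skill > threshold

print(f"Selection rate, group A: {hired[group_b == 0].mean():.1%}")
print(f"Selection rate, group B: {hired[group_b == 1].mean():.1%}")  # lower, with no 'error' anywhere
```

The rule tracks skill faithfully; the disparity comes entirely from the unjust allocation of resources upstream, which the selection then entrenches.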

Risa Goluboff: So let me just point out: you know, this kind of bias has always existed, even before algorithms. I mean, we’ve always had to worry that women, or African-Americans or people with disabilities, or any other minority group might be treated less equitably -- less equally -- than everyone else. That’s not DISTINCTIVE to algorithms. 

Deborah Hellman: But what is distinctive is any tool that allows us to do that more, in more contexts, with greater scope is going to compound more injustice, which is one worry I have about algorithms.

BRING MUSIC IN HERE

Leslie Kendrick: So they have the possibility of doing this on a larger and grander scale. That's part of what they do: make decisions and provide data efficiently. 

Deborah Hellman: Yeah.

Leslie Kendrick: Another concern might be that people are going to think of the algorithm as not being biased, so people might think of it as being more reliable and not having all of the problems that individual decision making has, and therefore they might place more trust in it, when in fact, it still has all these same problems. 

Deborah Hellman: Yeah. I think that is a problem that we think of it as more neutral and so maybe we're less skeptical. Whereas we are primed to recognize that humans can be biased. And, so we might be less challenging of the algorithm. And so it's important to recognize that the algorithm CAN be AS biased as the human. But I also think it does have some potential to disrupt some of those biases too, that is the machine itself is going to automate prior biases, but it's not going to introduce its own kind of bad intentions, I guess.

FADE MUSIC OUT

Risa Goluboff: I’m not sure any of us think of our congressional representatives as the most tech-savvy people, but your article references Congresswoman Alexandria Ocasio-Cortez, who seems to understand pretty well the potential problems that algorithms may present. At an MLK event in 2019, she spoke in front of an auditorium full of people -- do you remember those? -- so let’s listen to that.

Rep. Alexandria Ocasio-Cortez: "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They're just automated. And automated assumptions, it's like you're auto -- if you don't fix the bias, then you're automating the bias, and that gets even more dangerous ...[applause] ... and that gets more dangerous because then you take away the individual and you just say, oh, this was a computer glitch instead of actually saying this was a bias that was coded in whether it was subconscious or conscious or not."

Risa Goluboff: So to what extent would the argument be that behind every algorithm is a person who's programmed it? Is it the existence of some person early on in the process of programming the algorithm that would then subject it to law, or is that gonna fall away because of machine learning? How do you find legal liability or how does the law then regulate these algorithms?

Deborah Hellman: Well, I think it matters whether we're talking about constitutional law or statutory law, since there's more of a space for disparate impact liability under various statutory regimes. In contexts where we have disparate impact liability, we're just looking at whether the use of the tool, just like a standardized test, produces a disparate impact on a legally protected group. If it does, then you have the possibility of legal liability in those domains. 
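As a rough illustration of how disparate impact gets operationalized under statutory regimes, analysts often begin with a simple comparison of selection rates, such as the EEOC's four-fifths guideline for employment selection procedures. The sketch below uses invented counts and is only that first screen, not the full legal analysis.

```python
# Hypothetical four-fifths-rule screen for disparate impact.
# The selection counts are invented for illustration only.
def impact_ratio(selected_a: int, total_a: int, selected_b: int, total_b: int) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = impact_ratio(selected_a=120, total_a=200,   # group A selected at 60%
                     selected_b=45,  total_b=100)   # group B selected at 45%
print(f"Impact ratio: {ratio:.2f}")
print("Flags possible disparate impact" if ratio < 0.8 else "Passes the four-fifths screen")
```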

Risa Goluboff: Okay, that’s at the statutory level, at the level of laws that Congress has passed that people can bring claims under. But you had mentioned that it would be different at the constitutional level, and a lot of the time, constitutional rights create narrower claims than legislative, statutory rights do, so can you talk a little bit about the differences? 

Deborah Hellman: In the constitutional domain, disparate impact liability is not going to be enough (unless you adopted the tool specifically to produce that effect, but that's going to be very few cases, if any), so we're going to be looking for intent. And I think you're right that it's going to be hard to attach liability there. But on the flip side, what worries me actually is that our current constitutional regime sees the use of particular classifications as giving rise to heightened review. So if I'm the state employer, and I allow the algorithm to take account of race in some way, it seems that I might get in constitutional trouble. And that's problematic, I think, because taking account of race in that way is not clearly legally impermissible in my view.

Leslie Kendrick: Okay, can you give us an example of an instance where including that racial classification might actually help ensure equitable treatment? 

Deborah Hellman: Imagine that we're trying to figure out, let's say, who's going to recidivate. And it turns out housing instability is predictive of recidivism. But then when I get into the data in a little more depth, I see that housing instability is highly predictive of recidivism for, let's say, white inmates, but not so predictive for minorities. And if that's right, I might want the algorithm to take account of race so that it considers housing instability when it's looking at a white defendant, but doesn't consider it when it's looking at a racial minority. And I'm worried that that kind of correction for the nuances in the data is going to be constitutionally troubling or problematic, given our current constitutional law.
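Here is a minimal sketch, not from the episode, of the kind of group-conditional correction being described: a risk model allowed to use race through an interaction term, so that housing instability counts only for the group for which it is actually predictive. The data are simulated and the pattern is assumed purely for illustration.

```python
# Hypothetical sketch: letting a risk model take account of race so that
# a feature counts only where it is predictive. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 8000

minority = rng.integers(0, 2, n)
instability = rng.integers(0, 2, n)

# In this invented world, housing instability raises recidivism risk
# only for non-minority defendants (the hypothesized pattern).
logit = -1.0 + 1.5 * instability * (1 - minority)
recidivated = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Race-blind model: one coefficient on instability for everyone.
blind = LogisticRegression().fit(instability.reshape(-1, 1), recidivated)

# Race-aware model: an interaction lets instability matter per group.
X_aware = np.column_stack([instability, minority, instability * minority])
aware = LogisticRegression().fit(X_aware, recidivated)

print("blind,  unstable housing       :", round(blind.predict_proba([[1]])[0, 1], 2))
print("aware,  unstable, minority     :", round(aware.predict_proba([[1, 1, 1]])[0, 1], 2))
print("aware,  unstable, non-minority :", round(aware.predict_proba([[1, 0, 0]])[0, 1], 2))
```

In this simulation the race-blind model overstates the risk of minority defendants with unstable housing, because it averages a signal that, by construction, exists only for the other group; the race-aware model keeps the two apart.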

Risa Goluboff: So let me take it out of the algorithmic example for a second, and then I want to come back to it. If you think about affirmative action, that's exactly what it is doing to some extent, right? It's taking account of race in a way that's intended not to compound injustice, to create diversity, and the court has been pretty stingy about whether governments or universities can take account of race in that circumstance. And often when you're talking about disparate impact, the remedy is to think about race, because if you don't think about it, you will be compounding injustice. Listeners can't see, but you're nodding, so you're agreeing with me, I think. So I would love for you to say more about why you think it would be okay as a legal matter to take account of race in those circumstances.

Deborah Hellman: I'm worried that the court and other people will see it as just like affirmative action, which the court has at least explicitly said is to be treated the same as any other race-based classification and subject to strict scrutiny. 

BRING MUSIC IN

Deborah Hellman: And instead, I think the relevant analogy should be to racial suspect descriptions, which the court has NOT found to be problematic. So if somebody attacks you and you say, “It was a black man who was five eleven and blah, blah, blah.” Then the police are entitled to go and look for black men. 

Risa Goluboff: Okay.

Deborah Hellman: The reason it's perfectly permissible is that the state isn't saying being black is predictive of criminality. Rather they're saying eyewitnesses are generally reliable. And so we're going to go on the basis of this eyewitness description, which just happens to include a racial classification.

Leslie Kendrick: Got it.

Deborah Hellman: So now I think about what's happening within the algorithm, in the context of housing instability and recidivism. And within that, race is making an appearance, if you will, but it's making an appearance that's closer to the way it does in suspect descriptions than the way it does in profiling. 

BRING MUSIC UP, THEN UNDER

Deborah Hellman: And so I think it's too strong to say that our legal regime NEVER allows race-based classifications. We actually do allow them, even without strict scrutiny in some contexts. And I'm trying to pull out what are the features of those contexts, like race-based suspect descriptions, where the court does not require strict scrutiny and see which is the closer analogy.

FADE MUSIC OUT

Leslie Kendrick: So you've certainly given us a lot of reasons to be concerned about algorithms. Would you say you're an optimist or a pessimist when it comes to algorithms and discrimination?

Deborah Hellman: Well, I want to say I'm an optimist because I'm with my two good friends who are both so sunny and Risa's always an optimist. But I think I'm going to say I'm in between. 

That is, I think there's a lot of criticism out there, and that's going to help to keep some of the worst problems in line. And I think algorithms have the potential to allow us to see more concretely the ways in which our structures of prior injustice really do get reinforced in the next iteration. So those are the positives. But I also think that the compounding of injustice of both kinds is actually really difficult to undo. And to the extent that we've become enamored of these tools and use them in so many contexts, they have a lot of potential to close off opportunity for a lot of people.

FADE THEME MUSIC IN 

Leslie Kendrick: Well, this has been really fascinating and we can't tell you how much we appreciate your coming to talk with us.

Risa Goluboff: Thank you so much, Debbie, for being here.

Deborah Hellman: Well, it was so fun. Thank you so much for inviting me.

Leslie Kendrick: That was UVA Law Professor Deborah Hellman, author of When Is Discrimination Wrong? The article we spoke about, “Big Data and Compounding Injustice,” will be published in the Journal of Moral Philosophy. 

THEME MUSIC UP, THEN FADE OUT

Risa Goluboff: What an interesting conversation that was for me. I mean, substantively it's so right in my wheelhouse, and yet methodologically, Debbie's a philosopher, so she's thinking in different kinds of conceptual categories than I think in. One of the things we didn't have time to talk about was the fact that I think courts have had trouble with those kinds of questions. You go all the way back to Brown v. Board of Education, 1954, and the NAACP used these doll studies as evidence of the stigmatizing nature of segregation and the feelings of inferiority it created in the children who were going to segregated schools. The court relies on those studies in what becomes its famous footnote 11, and that comes under enormous criticism, partly because it turns out the social science wasn't, you know, the best social science, but also because it raises all these questions about to what extent our constitutional law should depend on social scientific data, and to what extent the court should be relying on that. But I do think her view raises the question of how you know what's demeaning and who gets to decide. And I think the fact that the court went to the doll studies at that moment suggested their own insecurity, or their own feeling of inadequacy, about saying, in the kind of declarative mode that I think Debbie is calling for, that this was demeaning, without turning to the kind of external, empirical evidence they seemed to feel the need to turn to.

Leslie Kendrick: And I see analogs now, which means we're going to have to keep asking these questions. I think about Masterpiece Cakeshop, where there's an amicus brief that says, well, really, what's the problem here? There's economic discrimination that could happen, but if a same-sex couple can go get a cake at another bakery, there's no economic harm, so all we're left with is quote unquote mere dignitary harm. And what is that, really, and how are we going to define it? There are legal problems in defining it. So we're still asking these questions about when something is wrongful discrimination and what makes it wrong. And Debbie is answering by saying we have to look at the social meaning. We have to look at the historical context and the current social significance of the action that we're talking about, and we shouldn't run from that. I see why courts do run from that. But she, from her vantage point, is saying we really can't get away from it. So we should run toward it and try to define it and think really rigorously about it. And I think we have a real need for that now, just as the court did in 1954.

Risa Goluboff: Yeah. And I, I agree with that. And I imagine she would also say it's not a different kind of inquiry than the court, you know, answers all the time. We're often asking the court to answer hard questions of social meaning and definition and my guess is she would say this is not different in kind from the other kinds of inquiries that, you know, courts routinely are undertaking.

Leslie Kendrick: It's funny I have so many more questions. I'd like to have a whole other conversation.

Risa Goluboff: Me too! And I think these are definitely questions that we'll have other opportunities to talk with our guests about over the course of the season. 

THEME MUSIC UP THEN UNDER AGAIN

Leslie Kendrick: If you’d like to learn more about Deborah Hellman’s work on discrimination and algorithms, visit our website, Common Law Podcast Dot Com. You’ll also find all of our previous episodes, links to our Twitter feed and more. We’ll be back in two weeks talking about equity and the family with Naomi Cahn from UVA Law.  

Naomi Cahn: Long live the institution of marriage. It’s just, perhaps, not long live the institution of marriage as the basis for a series of a thousand plus federal and numerous state benefits. 

Risa Goluboff: We’re excited to share that with you. I’m Risa Goluboff.

Leslie Kendrick: And I’m Leslie Kendrick. See you next time!

CREDITS: Do you enjoy Common Law? If so, please leave us a review on Apple Podcasts, Stitcher — or wherever you listen to the show. That helps other listeners find us. Common Law is a production of the University of Virginia School of Law, and is produced by Emily Richardson-Lorente and Mary Wood.