There has been an explosion of concern about the use of computers to make decisions affecting humans, from hiring to lending approvals to setting prison terms. Many have pointed out that using computer programs to inform decisions may propagate biases or otherwise lead to undesirable outcomes; some have called for increased transparency, while others have called for algorithms to be tuned to produce more racially balanced outcomes. The problem is likely to draw increasing attention as computers make ever more important and sophisticated decisions in our daily lives. Drawing on both the computer science and legal literatures on algorithmic fairness, this paper makes four major contributions to the debate. First, it provides a legal response to arguments for incorporating “fairness” into algorithmic decisionmakers by demonstrating that legal rules generally apply as side constraints, not as fairness functions to be optimized. Second, viewing the problem through the lens of discrimination law, the paper demonstrates that the problems posed by computational decisionmakers closely resemble the historical, institutional discrimination that discrimination law evolved to control, answering the claim that the problem is truly novel because it involves computerized decisionmaking. Third, the paper responds to calls for transparency in computational decisionmaking by demonstrating that transparency is unnecessary for accountability and that discrimination law itself provides a model for handling cases of unfair algorithmic discrimination, with or without transparency. Fourth, the paper addresses a problem that has divided the literature: how to correct for discriminatory results produced by algorithms. Rather than treating the correction as a single, binary choice, I offer a third way, one that disaggregates the process of correcting algorithmic decisionmakers into two separate decisions: a decision to reject an old process and a separate decision to adopt a new one. Because those two decisions are subject to different legal requirements, this approach gives firms and agencies added flexibility in avoiding the worst kinds of discriminatory outcomes.

Citation
Thomas B. Nachbar, Algorithmic Fairness, Algorithmic Discrimination, 48 Florida State University Law Review 509–558 (2021).