Because judges are expected to decide cases through the impartial application of existing law, they are often reluctant to admit that they must make law in hard cases. Many judges claim that such hard cases are rare, constituting roughly 10 percent of cases. In stark contrast, economic models of the selection of disputes for litigation predict that easy cases will settle, so that only hard cases would remain in trial and appellate courts. Empirical indicators, such as dissent rates or voting differences between Democratic and Republican appointees, have yielded muddled conclusions about the proportion of easy and hard cases in appellate courts. In fact, none of these crude statistics relates directly to the proportion of easy cases. This article develops a new approach for empirically analyzing the proportion of easy cases. Although the easiness and hardness of cases are subjective, it is possible to estimate feasible combinations of the proportions of hard cases and clear errors. This approach relies only on the basic premise that reasonable judges should not disagree in easy cases. The article then illustrates this approach using two datasets of appeals. An analysis of asylum appeals in the 9th Circuit finds widespread disagreement, implying high proportions of hard cases, clear errors, or both. By contrast, voting data from labor and environmental cases in the D.C. Circuit are consistent with the claim that 90 percent of cases are easy and 1 percent of decisions are clear errors.
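The premise that reasonable judges should not disagree in easy cases can be illustrated with a minimal feasibility check. The sketch below is not the article's estimator; the function names and the simple union-bound model (two judges per case, disagreement in an easy case requires at least one judge to commit a clear error) are my own assumptions for illustration only.

```python
def max_disagreement(hard_frac, clear_error_rate):
    # Upper bound on the two-judge disagreement rate under the premise
    # that reasonable judges do not disagree in easy cases absent error.
    # Hard cases may produce disagreement at any rate (bounded by 1);
    # in an easy case, disagreement requires at least one of the two
    # judges to err (union bound: 2 * clear_error_rate, capped at 1).
    easy_frac = 1.0 - hard_frac
    return hard_frac + easy_frac * min(1.0, 2.0 * clear_error_rate)

def is_feasible(hard_frac, clear_error_rate, observed_disagreement):
    # A (hard_frac, clear_error_rate) combination is feasible only if
    # it can generate at least the observed disagreement rate.
    return max_disagreement(hard_frac, clear_error_rate) >= observed_disagreement
```

For example, under this toy model, a claim that only 10 percent of cases are hard and judges err in 1 percent of votes caps disagreement at 11.8 percent, so it cannot account for an observed 40 percent disagreement rate: `is_feasible(0.10, 0.01, 0.40)` returns `False`.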

Joshua Fischman, How Many Cases Are Easy?, 13 Journal of Legal Analysis 595–656 (2021).