Algorithmic Risk Assessment in the Hands of Humans
We evaluate the impacts of adopting algorithmic risk assessments as an aid to judicial discretion in felony sentencing. We find that judges' decisions are influenced by the risk score, leading to longer sentences for defendants with higher scores and shorter sentences for those with lower scores. However, despite explicit instructions that risk assessment was intended to reduce prison populations, there was no net reduction in incarceration. Nor do we detect any public safety benefits from its use. We document racial disparities both in the risk score and in its application: judges are more likely to follow the leniency recommendations associated with the risk score for White defendants than for Black defendants. However, sentencing in Virginia was already quite racially disparate, and risk assessment use neither exacerbated nor ameliorated these differences. Risk assessment did, however, increase relative sentences for young defendants. We explore several theories of human-machine interaction to better understand our results: conflicting objectives, learning through use, adopters versus nonadopters, and ineffective use of information.