As criminal justice actors increasingly seek to rely on evidence-informed practices, including risk assessment instruments, they often lack adequate information about the evidence that informed the development of those practices or tools. Open science practices, including making scientific research and data accessible and public, have not typically been followed in the development of tools designed for law enforcement, judges, probation officers, and others. This stands in contrast to other government agencies, which often open their processes to public notice and comment. The lack of transparency has become especially pressing in the area of risk assessment, as entire judicial systems have adopted some type of risk assessment scheme. While the types of information used in a risk tool may be made public, the underlying methods, validation data, and studies often are not. Nor are the assumptions behind how a level of risk is categorized as “high” or “low.” We discuss why those concerns are relevant and important to the new risk assessment tool now being used in federal prisons as part of the First Step Act. We conclude that a number of key assumptions and policy choices made in the design of that tool are either not verifiable or inadequately supported, including the choice of risk thresholds and the validation data itself. As a result, the federal risk assessment effort has unfortunately not been the hoped-for model for open risk assessment.
Citation
Brandon L. Garrett & Megan T. Stevenson, Open Risk Assessment, 38 Behavioral Sciences & the Law 279–286 (2020).