Actors in our criminal justice system increasingly rely on computer algorithms to help them predict how dangerous certain people and certain physical locations are. These predictive algorithms have spawned controversies because their operations are often opaque and some algorithms use biased data. Yet these same types of predictive algorithms inevitably will migrate into the national security sphere, as the military tries to predict who and where its enemies are. Because military operations face fewer legal strictures and more limited oversight than criminal justice processes do, the military might expect – and hope – that its use of predictive algorithms will remain both unfettered and unseen.
This article shows why that is a flawed approach, both descriptively and normatively. First, in the post-September 11 era, any military operations associated with detention or targeting will draw intense scrutiny. Anticipating that scrutiny, the military should learn from the legal and policy challenges that criminal justice actors have faced in managing the transparency, reliability, and lawful use of predictive algorithms. Second, the military should clearly identify the laws and policies that govern its use of predictive algorithms. Doing so would avoid exacerbating the “double black box” problem of conducting operations that are already difficult to oversee and contest legally, using algorithms whose predictions are often difficult to explain. Instead, being transparent about how, when, why, and on what legal basis the military uses predictive algorithms will improve the quality of military decision-making and enhance public support for a new generation of national security tools.
Citation
Ashley S. Deeks, Predicting Enemies, 104 Virginia Law Review 1529–1592 (2018).