The Double Black Box: AI Inside the National Security Ecosystem
At the heart of national security decisions lies a paradox: these decisions are among the most consequential a government can make, yet they are often the least transparent and the least democratically accountable. The “black box” of national security decision-making, a product of extensive classification and of the genuine difficulty of overseeing executive action, has expanded in the United States as executive power continues to grow. Over the past two decades, this expansion has significantly eroded the constitutional checks and balances on which we rely to superintend presidential authority. Although Congress at times works hard to serve as a faithful surrogate for the public, thin staffing, limited expertise, and politics complicate those efforts. Meanwhile, the courts largely defer to the executive on these matters. As a result, it is increasingly difficult to confirm that the executive is acting consistently with public law values such as legality, accountability, and the obligation to justify its decisions.
Rapid advances in AI are compounding this trend. Defense and intelligence agencies, including the National Security Agency, the CIA, and the Departments of Defense and Homeland Security, have begun to deploy AI in their decision-making processes and operations. For example, the Department of Defense uses AI-powered computer vision tools to identify threatening activities, and ultimately potential targets, in thousands of hours of drone footage. Cyber operations are increasingly driven by AI, raising the possibility that autonomous U.S. and foreign cyber tools could clash and escalate their exchanges into armed conflict without any affirmative human decision to do so. It will not be long before AI takes a seat in the Situation Room. AI systems, however, are often “black boxes” themselves: neither users nor even programmers can generally inspect an algorithm’s internal processes, which makes it very hard for users to understand why or how the system reached the recommendation it did.