Congress is trying to roll up its sleeves and get to work on artificial intelligence (AI) regulation. In June, Sen. Chuck Schumer (D-N.Y.) launched a framework to regulate AI that offered high-level objectives and a plan to convene nine panels to discuss hard questions, but no specific legislative language. Sen. Michael Bennet (D-Colo.) has advocated for a new federal agency to regulate AI. Rep. Ted Lieu (D-Calif.), along with others, has proposed creating a National Commission on Artificial Intelligence. At a more granular level, Sen. Gary Peters (D-Mich.) has introduced three AI bills focused on the government as a major purchaser and user of AI; they would require agencies to be transparent about their use of AI, to create an appeals process for citizens wronged by automated government decision-making, and to appoint chief AI officers.

But only a few of these proposed provisions implicate national security-related AI, and none creates any kind of framework regulation for such tools. Yet AI systems developed and used by U.S. intelligence and military agencies seem just as likely to create significant risks as publicly available AI does. Those risks will fall largely on the U.S. government itself rather than on consumers, who are the focus of most of the current legislative proposals. If a national security agency deploys an ill-conceived or unsafe AI system, it could derail U.S. military and foreign policy goals, destabilize interstate relations, and invite other states to retaliate in kind. Both the Defense Department and the intelligence community have issued policy documents reflecting their interest in deploying only reliable AI, but history suggests that it is still important to establish a basic statutory framework within which these agencies must operate.