War could unfold so rapidly in the future that nations may need to rely on machines and artificial intelligence to make split-second decisions to repel or carry out attacks.

Professor Ashley Deeks of the University of Virginia School of Law looks at the legal implications of using autonomous systems during war in a new paper, “Delegating War Initiation to Machines.”

In her paper, Deeks urges Congress to create laws ensuring strong oversight to minimize risks associated with the use of AI in the resort to force. Both in the U.S. and abroad, governments should bear in mind the potential challenges from using AI systems in initiating force, she says.

Deeks is a senior fellow with UVA’s Miller Center and directs the Law School’s National Security Law Center. In a Q&A, she answers questions about the realities of delegating war initiation to machines and the ethical and policy implications surrounding such decisions.

What inspired you to explore the topic of delegating war initiation to machines?

I recently wrote a longer article that examined national security delegations by the president, which often are classified but which can raise constitutional concerns. For example, President Eisenhower delegated the authority to launch nuclear weapons to seven military officials, and more recently presidents have delegated to the military the authority to do things such as launch offensive cyber operations without presidential approval.

These delegations raise interesting legal questions: Are they constitutional? May Congress limit them? If not, can it at least mandate that the president report these delegations to Congress? The delegations may raise concerns because they can dilute civilian command over the use of military force; lead to situations in which the president’s agent does not act in a way that reflects the president’s intent; and obscure the actual decisionmaker’s identity in a particular case.

At the end of that article, I briefly consider whether using autonomous systems to conduct self-defense actions may actually be a type of national security delegation — only to a machine, rather than to a military official. This shorter article picks up and explores that idea. For example, a state might decide to allow its nuclear command and control system to make autonomous judgments about when to launch a nuclear weapon in response to a perceived imminent attack. Or a state could allow its cyber systems to respond autonomously to certain attacks on its military installations, setting off pre-placed implants that cause physical damage to the attacker. Any state that considers introducing significant autonomy into systems like this needs to assess whether and how the use of autonomy in war initiation would comport with its domestic laws governing delegations of decisions to use force.

Can you explain what “hyperwar” means and why it might necessitate the use of autonomous systems?

General John Allen and a co-author coined this term. It refers to the idea that attacks in armed conflict will unfold so quickly that machines must do most of the work, leaving humans little ability or opportunity to make decisions during those operations. When a conflict is unfolding that quickly, states will have a strong incentive to deploy autonomous systems, which can make decisions much faster than humans while incorporating far more information than a person can process.

How do you see the role of AI evolving in future military decision-making?

Because of the “hyperwar” phenomenon, which seems near at hand when we consider systems like hypersonic missiles and autonomous fighter jets, states with major militaries seem increasingly likely to introduce AI and autonomy (which AI facilitates) into a wide range of their operations. This includes command and control systems, aerial and sea drones, and cyber operations — and it may eventually include kinetic targeting operations, too.

As a sign of where things are heading, in August 2023 the Defense Department rolled out its Replicator Initiative, announcing that it aims to field thousands of inexpensive, autonomous systems, including aerial and maritime drones, by August 2025. In its current conflict with Russia, Ukraine has used loitering, uncrewed drones to target Russian forces, though humans still appear to oversee those systems. Of course, a range of states and nongovernmental organizations are very concerned about the prospect of lethal autonomous systems and have been advocating for years for international restrictions on such systems, but states haven’t concluded an international agreement on them to date.

What are some international legal challenges associated with allowing machines to make decisions about warfare?

There are two different bodies of international law that a state needs to think about if and when it decides to deploy autonomous systems. First is the “jus ad bellum,” or the rules regulating the resort to force. If a state wanted to use an autonomous system that could act in “anticipatory self-defense,” targeting an imminent threat to its territory, the state would need to make sure that the system would only respond to actual, proximate threats and that it would use no more force than was necessary to suppress the threat.

The second relevant body of law is the “jus in bello,” or the law of armed conflict. Under that set of laws, a state may only target its adversary’s military objectives, and if the operation is expected to harm civilians or civilian objects, the state may only conduct the attack if the harm to civilians will not be excessive in relation to the military advantage gained. That means that the autonomous system would have to be able to comply with those rules before launching an attack.

I think we are still a decent way from systems that can incorporate that body of rules — and some people are skeptical that states could ever develop such systems. But the more static and defensive your autonomous systems are, the more likely you are to be able to ensure that they only target military objectives and do not cause excessive harm to civilians. Another lurking set of questions relates to accountability, and how to determine who to hold responsible when an autonomous system violates the jus ad bellum or the law of armed conflict.

How does the U.S. legal system currently handle the delegation of war powers, and what changes might be needed to accommodate autonomous systems?

In the U.S. system, the president has broad power to decide when to resort to force — both because Congress has delegated certain authorities to him in several Authorizations for Use of Military Force (AUMFs) and because the Justice Department has long taken the position that he may use force abroad whenever it is in the national interest, as long as that force does not rise to the level of war in the constitutional sense.

It is not entirely clear whether and when the president can delegate these constitutional or statutory powers. In 1951, Congress enacted a general statute authorizing U.S. presidents to further delegate any function vested in them by law and requiring them to publish those delegations. Notwithstanding that statute, DOJ has opined that some powers that Congress delegates to the president may be of such gravity that Congress would not have intended the president to further delegate them. Initiating force pursuant to an AUMF might be one of those powers. And though DOJ has opined that the president may not delegate his powers as commander-in-chief, it has said that the “President may make formal or informal arrangements with his civilian and military subordinates, in order to ensure that the chain of command will function swiftly and effectively in time of crisis.”

It is unclear how autonomous systems map onto this somewhat uncertain legal terrain. The first question is whether it makes sense to treat the deployment of autonomous systems as a delegation of decision-making authority at all. I think there is at least a colorable argument that such deployments are a type of delegation. One challenge to using autonomous systems here would be a technological one: how to ensure that the systems, as agents, accurately reflect the intent of the principal — especially because a system trained to reflect the intent of one president might not reflect the intent of a subsequent president.

Another concern would be about transparency. If the executive branch did decide to deploy systems like these, it would be important to ensure that some subset of Congress knew about their use, particularly because these systems very likely would be classified. Congress could even act prophylactically by regulating or prohibiting delegations to autonomous systems. Some senators put forward a bill like this related to nuclear command and control systems, though it didn’t pass. For now, the executive branch has said clearly that it will not introduce autonomy into nuclear command and control, and nuclear states like France and the United Kingdom have followed suit.

What ethical or policy implications should states consider when deciding whether to delegate war initiation to machines?

Delegations to autonomous systems raise policy and technological questions as well as the legal ones just discussed. First, a state might conclude as a policy matter that it is inherently unsafe, immoral or ineffective to delegate resort-to-force decisions to a machine. Second, I worry about a president delegating his power to resort to force to an autonomous system because that system may act in a manner that does not faithfully reflect what the president would have wanted done had he made the decision himself. The stakes of using force are very high, and the public deserves to know which agents to hold accountable for those specific decisions, even if we can still hold the president ultimately accountable as the delegator-in-chief. Autonomous systems may obscure or complicate that process.

Founded in 1819, the University of Virginia School of Law is the second-oldest continuously operating law school in the nation. Consistently ranked among the top law schools, Virginia is a world-renowned training ground for distinguished lawyers and public servants, instilling in them a commitment to leadership, integrity and community service.