What Laws Will Govern Battles in Cyberspace?

Professor Paul Stephan ’77 Explores Questions on Future Conflicts Over Big Data

Paul B. Stephan ’77 is the John C. Jeffries, Jr., Distinguished Professor of Law at UVA. Photo by Jesús Pino

August 27, 2021

As cyberattacks grow more frequent and the use of big data increases the costs of security breaches, a University of Virginia School of Law professor is looking at how conventional laws might apply to the future of warfare.

Paul B. Stephan ’77 offers a new framework for looking at the issue in his paper “Big Data and the Future Law of Armed Conflict in Cyberspace,” which will be published as a chapter in the book “The Law of Armed Conflict in 2040,” forthcoming from Oxford University Press.

The U.N. Charter declares that armed attacks on other states are unlawful unless either the U.N. Security Council gives its approval, the state on whose territory the attack takes place gives its permission, or a state is exercising its “inherent right to self-defense.”

What these rules mean, and particularly how they apply to cyber operations that cause various degrees of harm, has garnered plenty of attention lately from governments as well as legal scholars. Stephan’s paper concentrates on the risks of bringing cyber operations against big data under the law of war; he argues that doing so will increase the chances of allowing targeted states to treat such operations as an armed attack that justifies a lethal response.

Stephan, who recently returned from a stint as special counsel to the general counsel of the U.S. Defense Department, answered questions about his new work.

How are big data and artificial intelligence used by some nations for national security?

First, one must be clear about what we mean by big data and AI. Big data is a resource that, when properly organized, can serve as the basis for deriving the algorithms that constitute AI. The quality of a data set — that is, its extent and organization — determines how effective the algorithms can be. Second, AI assists a range of national security tasks, including intelligence assessment, the research and development of weapons systems, and system targeting and use. It can help us manage complex systems, such as our troops. For example, I see no reason why it could not help us do things like identify places and people where the risk of sexual assault among members of our armed forces is greatest.

How might AI and the accumulation of big data change conventional warfare?

There are two questions implicit here, one about how we do war and the other about when we do war. I focus less on the how part, although there are answers here that people besides me have offered. Big data and AI contribute to autonomous weapons systems, which many people find troubling for understandable reasons, but also have the potential for greater precision and accuracy in the use of violence. Some might say that making violence better will mean more violence, which we should deplore. One might rejoin that if violence is inevitable, we should do as little harm as possible. This is not my debate. There also is a general critique of AI based on big data that because of the garbage-in garbage-out problem, the technology will expand and increase the impact of systematic biases and false assumptions on the part of those who design the systems. Our own Ashley Deeks and Deborah Hellman have done important work along these lines.

What my paper tries to do is shift the focus from AI to the big data sets that are a necessary prerequisite to creating effective AI. Big data sets can be essential resources but also can be abused, especially as tools for destroying privacy. The goals of my paper were to make two arguments: that attacks on big data sets should not be treated the same as attacks on physical things (buildings, bridges and people) for purposes of justifying retaliation through armed force, and that the challenge for the future is thinking about how to manage privacy concerns while building good data sets. U.S. adversaries build away and don’t worry about privacy. That should not justify indifference to privacy on our part, but it should drive us to find ways to build data sets safely but effectively.

How do you think the traditional laws of war might or might not be applied to wars in cyberspace?

They provide analogies, but like all analogies, require refinement and elaboration to be useful. One way of framing the issue is how much do we want to make of the distinction between impairment of well-being and functionality, on the one hand, and of death and destruction in the physical world, on the other hand. Everyone agrees that cyber actions that lead to death and destruction should be treated no differently than physical actions, but is that all there is?

You suggest there are downsides to the law of armed conflict expanding as a result of cyberattacks. Can you explain?

There are people in the academic world who think that we can use the law of war to regulate attacks on big data that occur during ongoing armed conflict without getting to the question of whether, in the absence of a preexisting armed conflict, a cyber action that seriously compromises big data but does not directly lead to death and destruction can lawfully be punished with physical violence. In other words, they believe that the law governing how to fight can extend to cyber operations without also bringing along the law governing when we may fight.

For example, in the struggle between Hamas and Israel, an ongoing armed conflict (whether international or non-international isn’t relevant here), Israel appears to have justified an attack on a building as a lawful (that is to say proportionate, necessary, and with precautions to distinguish between legitimate military targets and civilians) effort to shut down computers that were undertaking cyber invasions and compromising Israeli data without necessarily causing direct death and destruction.

My concern is that arguments about how to conduct armed conflicts can morph into arguments about when to engage in armed conflict, justifying physical attacks on adversaries in the absence of a prior armed conflict. Imagine a hypothetical adversary, but not a country with which we are at war, that does not have significant deterrent capabilities (no nukes, in other words) bringing down a financial market. Could we launch cruise missiles against that adversary? I would like to see the jus ad bellum (the conditions under which states may resort to armed force) function as an obstacle to such retaliation. I worry that treating costly but not deadly cyber operations within armed conflicts as armed attacks subject to legal regulation under the jus in bello — the limits on how to use force once a conflict has started — can lead to treating them as attacks under the jus ad bellum, justifying an armed noncyber response and thus leading to more armed conflicts.

Founded in 1819, the University of Virginia School of Law is the second-oldest continuously operating law school in the nation. Consistently ranked among the top law schools, Virginia is a world-renowned training ground for distinguished lawyers and public servants, instilling in them a commitment to leadership, integrity and community service.

Media Contact

Mary M. Wood
Chief Communications Officer
wood@law.virginia.edu / (434) 924-3786