By Robert H. Latiff
Mr. Latiff is a retired Air Force major general.
July 16, 2018
In 2014, the United States Army Research Laboratory published a report predicting what the battlefield of 2050 would look like. Not surprisingly, it was a scenario largely driven by technology, and the report described a sort of warfare most people associate with video games or science-fiction movies — combined forces of augmented or enhanced humans, robots operating in swarms, laser weapons, intelligence systems and cyberbots fighting in a highly contested information environment using spoofing, hacking, misinformation and other means of electronic warfare.
In one sense, this is nothing new. The way wars are fought has always changed with technology. But humans themselves don’t change so rapidly. As a retired Air Force major general with special interests in both technology and military ethics, I have a specific concern: that as new weapons technologies make soldiering more lethal, our soldiers will find it more difficult than ever to behave ethically and to abide by the long-established conventions regarding the rules of war.
. . .
Can soldiers under the influence of behavior-modifying drugs or electronics be held to account for their actions? If the soldier is using drugs to enhance his cognition or reduce his fear, what is the role of free will? Might a soldier who fears nothing unnecessarily place himself, his unit or innocent bystanders at risk? What about the impact of memory-altering drugs on the soldier’s sense of guilt, which might be important in decisions about unnecessary and superfluous suffering?
These are important questions in war, and they form the basis for many of the tenets of “just war” theory. Gen. Paul Selva, the vice chairman of the Joint Chiefs of Staff, supports “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”
The work of revising and recasting these conventions should be taking place at the highest levels of government. So far, it hasn’t. The White House’s Select Committee on Artificial Intelligence, formed in May, has not even acknowledged the major ethical issues surrounding A.I. that have been very publicly raised by an increasing number of scientists and technology experts like Elon Musk, Bill Gates and Stephen Hawking.
While it is important that leaders openly recognize the critical nature of these issues, the Department of Defense needs to follow up on its 2012 directive on autonomy with guidelines for researchers and commanders. It should require that both researchers and military commanders question — throughout the development process and long before the systems are ready for deployment — how the systems will be used and whether that use might violate any of the laws of armed conflict and international humanitarian law.