and to avoid excessive collateral damage. As a moral matter, many of them do not believe that decisions to intentionally
kill should be delegated to machines, and as a practical matter they believe that these systems may operate in unpredictable ways or be used in irresponsible—or even in the most ruthless—ways.” 92
During the AeroAstro Centennial Symposium at the Massachusetts Institute of Technology (MIT) in October 2014,
Elon Musk declared AI to be “the most serious threat to the survival of the human race.” 93 Worried about a “Terminator
scenario arising from research into artificial intelligence,” 94 Musk warned that with AI “we are summoning the demon.
In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control
the demon. Doesn’t work out.” 95 Echoing Musk’s sentiment, James Hendler, a professor of Computer, Web and Cognitive Sciences at the Rensselaer Polytechnic Institute (RPI), a former member of the US Air Force Scientific Advisory Board and a former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects
Agency (DARPA), warns, “Applied as tools of war, robotics raises the threat of ruthless dictators with unfeeling killing
machines to use against civilian populace. Laws governing the development and proper use of these machines are needed
now, before it is too late.” 96
Take out the Human, Take out Humanity
In October 2013, a number of human rights NGOs and more than 270 computing experts from 37 countries called for
a global ban on autonomous weapons. “Governments need to listen to the experts’ warnings and work with us to tackle
this challenge together before it is too late,” said Professor Noel Sharkey, Chair of the International Committee for Robot Arms Control (ICRAC). “It is urgent that international talks get started now to prevent the further development of
autonomous robot weapons before it is too late.” 97
With some delay, 89 signatory nations to the United Nations Convention on Conventional Weapons (CCW) voted to
convene a group of governmental experts in 2017 to discuss the implications of autonomous weapons choosing targets with little or no human oversight. A treaty ban, however, is unlikely to work, according to American University law professor Kenneth Anderson and Columbia Law School professor Matthew Waxman, “especially in constraining states
or actors most inclined to abuse these weapons—and gives them an advantage of possessing such weapons if other states
are banned even from R&D into the weapon technologies that enable such systems, as well as autonomous defenses to
counter them. Because automation of weapons will increase gradually, step-by-step toward full autonomy, it is also not as
easy to design or enforce such a ban as proponents assume.” 98 Moreover, Waxman and Anderson explain that a global
ban may be counterproductive:
Besides the self-protective advantages to military forces that might use them, it is quite possible that autonomous machine decision-making may, at least in some contexts, reduce risks to civilians by making targeting decisions more precise and firing decisions more controlled. True, believers in artificial intelligence have at times
overpromised before, but we also know for certain that humans are limited in their capacity to make sound
and ethical decisions on the battlefield, as a result of sensory error, fear, anger, fatigue, and so on. 99
By contrast, former US drone program analyst Heather Linebaugh cautions: “What the public needs to understand
is that the video provided by a drone is not usually clear enough to detect someone carrying a weapon, even on a crystal-clear day with limited cloud and perfect light. This makes it incredibly difficult for the best analysts to identify if
someone has weapons for sure. One example comes to mind: ‘The feed is so pixelated, what if it’s a shovel, and not a
weapon?’ I felt this confusion constantly, as did my fellow UAV analysts. We always wonder if we killed the right people,
if we endangered the wrong people, if we destroyed an innocent civilian’s life all because of a bad image or angle.” 100
But this, of course, is inherent in the nature of warfare. Yes, computers are not 100% reliable, but neither are human beings. According to researchers at Stanford University, “Computers are complex and not deterministic. However, people