autonomous robots much like the Remotec Andros Mark V-A1 that killed the Dallas, TX gunman in July 2016. The
proposal is for the robots to be used in supply roles only, but that could obviously change in the long term. Sometime in
the next couple of decades, drones will be given the tools to take on human opponents all by themselves.” 83 Aside from
minimizing American battlefield casualties, defense experts predict that continuing progress in automation, paralleled by
plummeting production costs, may soon make autonomous drone forces cheaper than traditional infantry, let alone
bombers, destroyers, tanks, or artillery. 84
Indeed, the US Army’s thinking is moving in the direction of increased automation. “We should be thinking about
having a robotic vanguard, particularly for maneuver formations,” explains Dr. Bob Sadowski, the Army’s chief
roboticist. “There’s no reason why the first contact with an enemy force should be with a man-platform, because it means that
platform is at the greatest risk.” 85
However, some in the military push back on this idea, suggesting that robots and machines have limitations that could
actually undermine military effectiveness. For instance, former military weapons systems specialist Pierre Sprey explained, “The
soldier is able to cover most of the horizon very quickly, within fractions of a second, sweeping visually to check any kind
of threat that he might have. And you’re doing the same thing sitting in front of a laptop, you know, with a sensor that’s
as relatively poor, relative to the human eye, as a video camera, you know, first of all, you’re going to miss most of the
really important cues. And the ones you pick up, it’s going to take you a lot longer to react, because they’re going to be
more indefinite.” 86
Artificial Intelligence and the Evolution of Killing Machines
In early 2018, the Department of Defense issued a solicitation seeking to “develop a system that can be integrated and
deployed in a class 1 or class 2 Unmanned Aerial System (UAS) to automatically Detect, Recognize, Classify, Identify
(DRCI) and target personnel and ground platforms or other targets of interest.” 87 For some observers, this was a significant
step forward. 88 While current military drones are still controlled by people, this artificial intelligence (AI)-based
technology, according to Peter Lee, Director of Security and Risk and a Reader in Politics and Ethics at the University of
Portsmouth, will be able to
decide who to kill with almost no human involvement. Once complete, these drones will represent the ultimate
militarisation of AI and trigger vast legal and ethical implications for wider society. There is a chance that
warfare will move from fighting to extermination, losing any semblance of humanity in the process. At the
same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI
become valid military targets. 89
The idea of autonomous weapons systems has received intense criticism, especially over the elevated risk of civilian deaths
and questions of liability. Steve Goose, director of the Human Rights Watch arms division, suggests that programmers, manufacturers and operators not
only open themselves up to liability but may also open a Pandora’s box, the consequences of which are unknown.
“Once those weapons exist, there will be no stopping them,” he said. 90 Automated technology relies on self-learning
algorithms, i.e., programs that independently learn from whatever data they collect and, as a result, become better at
the assigned task. Peter Lee warns that “someone will need to decide on an acceptable
stage of development … In militarised machine learning, that means political, military and industry leaders will have to
specify how many civilian deaths will count as acceptable as the technology is refined.” 91
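To make the “self-learning” point concrete, the sketch below is a minimal illustration, not drawn from any system described in this case study: an online classifier that updates its parameters batch by batch as new data arrives, with accuracy on held-out data typically improving as more data is absorbed. The library calls and the synthetic data are assumptions chosen purely for illustration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # Synthetic, purely illustrative data: 2,000 labeled observations.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, y_train = X[:1500], y[:1500]
    X_test, y_test = X[1500:], y[1500:]

    # A linear classifier trained incrementally (online learning).
    clf = SGDClassifier(random_state=0)
    classes = np.unique(y_train)

    # Feed the model data in batches, as a fielded system would collect it;
    # each call to partial_fit refines the model with the new observations.
    for start in range(0, len(X_train), 300):
        clf.partial_fit(X_train[start:start + 300],
                        y_train[start:start + 300],
                        classes=classes)
        print(f"after {start + 300:>4} samples: "
              f"held-out accuracy = {clf.score(X_test, y_test):.2f}")

The point of the sketch is simply that nothing in the loop requires a human judgment once it is running, which is what gives Lee’s question about an “acceptable stage of development” its force.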
Autonomous weapons systems have been heavily criticized by scientists and tech leaders from Stephen Hawking to
Tesla CEO Elon Musk and Apple co-founder Steve Wozniak for dehumanizing warfare and eroding ethical
constraints on it. AI, which drives much of the new technology, in their view, “will never be capable of meeting the
requirements of the laws of war (a.k.a. international humanitarian law) to distinguish between combatants and noncombatants