Military Review English Edition May-June 2014 | Page 16
elsewhere in the world.”5 Such narratives raise
concerns about the lack of human control, and as a
result, they confound the determination of human
responsibility.
However, in robotics and computer science,
autonomy has many different meanings. It tends
to be used metaphorically to emphasize certain
features of a computational system that set it apart
from other systems. Three conceptions of machine
autonomy—as high-end automation, as something
other than automation, or as collaborative autonomy—illustrate that humans do not necessarily lose
control when tasks are delegated to autonomous
systems. Rather, the delegation of tasks to these
systems transforms the character of human control.
[Photo caption: Lt. Gen. Jeffrey Talley (then Brig. Gen.), commander of 926th Engineer Brigade, Multi-National Division, watches a demonstration of robotic route-clearing equipment at 6th Iraqi Army Division headquarters motor pool, Iraq, 5 January 2009.]

Autonomy as high-end automation. In its
report, “The Role of Autonomy in Department of
Defense Systems,” the Defense Science Board Task
Force characterizes autonomy as “a capability (or a
set of capabilities) that enables a particular action
of a system to be automatic or, within programmed
boundaries, ‘self-governing.’”6 Capability, here,
refers to a particular process (or processes) consisting of one or more tasks, such as navigation or flight
control. This definition echoes a more traditional
way of conceptualizing machine autonomy as at
the high end of a continuous scale of increasing
automation. In this way of thinking, automation
involves the mechanization of tasks, where routine
actions are translated into some formalized and
discrete steps such that a machine can perform
them.7 At the high end of the automation scale are
systems in which the automated machine performs
most or all of the steps in a process. At the low end
of the scale are systems in which decision making
and control of the process are left largely to human
operators.