In principle, humans retain control of—and
responsibility for—the behavior of autonomous
machines. However, establishing precisely who is
responsible for a machine’s behavior is challenging. Autonomous machines, no matter how they
are defined, developed, or used, operate as part of
broad sociotechnical systems involving numerous
individuals and organizations.
We advocate developing new responsibility
practices concurrently with new technologies,
rather than before or after those technologies
are developed and adopted
for use. This is necessary because literature in
the field of science and technology studies shows
that the trajectory of a te chnology’s development is unpredictable; how a technology takes
shape depends on complex negotiations among
relevant social groups.4 The technologies eventually adopted and used are not predetermined
by nature or any other factor. No one can predict
with certainty how a developing technology will
turn out or what new technologies will emerge.
In the course of development, a new technology
may change in response to many factors, including changes in funding, historical events such as
wars, changes in the regulatory environment, and
market indicators. The technologies that succeed
(i.e., that are adopted and used) are the outcome of
complex negotiations among many actors, including engineers and scientists, users, manufacturers,
the public, policymakers, politicians, and others.
Negotiations among the actors involved with a
new technology are part of the overall discourse
around that technology from its earliest stages of
development. The discourse about responsibility and autonomous military robots is a case in
point; current discourse provides an opportunity
to observe issues of responsibility being worked
out early in the technology’s development. The
negotiations among researchers, developers,
engineers, philosophers, policymakers, military
authorities, lawyers, journalists, and human rights
activists are taking place in the media and academic journals, at conferences and trade shows,
through drafting new policies and regulations,
negotiating international treaties, and designing
and developing the technologies. This
process contrasts starkly with the all-too-common
idea that issues of responsibility are decided after
a technology is developed or separately from
technological design.
Framing robots as autonomous challenges ordinary notions of responsibility. Autonomy in daily
life and moral philosophy implies acting on one’s
own, controlling oneself, and being responsible
for one’s actions. At the same time, being responsible generally means that individuals have some
kind of influence or control over their actions and
the outcomes of those actions. The idea of the
autonomy of robots suggests that humans are not in
control of the robots. Hence, at first glance, it may
seem that humans should not be held responsible for
autonomous robot behavior. However, this narrative
of future autonomous robots operating on their own,
without human control, is somewhat misleading,
and it draws attention away from important choices
about responsibility—choices made at the level of
design and implementation.
Our analysis of the discourse on autonomous
artificial agents and responsibility shows that
delegating tasks to autonomous technologies is
compatible with holding humans responsible for
the behavior of those technologies. This is so for at
least two reasons. First, machine autonomy
admits numerous interpretations, but all of them
involve various kinds and degrees of human control.
Second, humans decide who is responsible for the
actions of a machine. Their decisions are affected
by, but not entirely determined by, the nature of the
technology. Responsibility for the behavior of
autonomous machines is and must continue to
be determined by ongoing negotiations among
relevant interest groups during the development of
new technologies.
Negotiating Autonomy
Popular accounts of future military robots often
portray these technologies as entities with capabilities that rival or surpass those of humans. We are
told that robots of the future will have the ability to
think, perceive, and even make moral decisions. In
Discover Magazine, for instance, Mark Anderson
writes, “As surely as every modern jetliner runs
primarily on autopilot, tomorrow’s military robots
will increasingly operate on their own initiative.
Before the decade is out, some fighting force may
well succeed in fielding a military robot that can
kill without a joystick operator behind a curtain