Military Review English Edition May-June 2014 | Page 18
opers and designers would delimit the problem
any robotic system is intended to solve. If the
envisioned robotic technologies were based on
artificial intelligence methods now in development, then those artificial intelligence methods
would limit any robotic system’s ability to act
independently. Although programmers and developers would not have to specify all the possible
situations with which the software has to contend,
designers would have to generate a model that
approximates the behavior of particular aspects
of the world and their uncertainties.

Humans exert their influence by defining the conditions for machine behavior.

Learning and
probabilistic algorithms would be able to operate
more flexibly than a preprogrammed deterministic
algorithm because they would allow for variations
and could respond to certain unanticipated contingencies. Nevertheless, this flexibility is a function of the
problem definitions and the world models that the
developers or programmers of the algorithm have
formulated. Therefore, even where machine autonomy is considered more than high-level automation,
the machine’s autonomy does not mean an absence
of human control: humans design, choose, and plan
the strategies the machine employs.
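The contrast drawn above, between a preprogrammed deterministic rule set and a probabilistic policy built on a designer-supplied world model, can be illustrated with a minimal sketch. The policies, scenario names, and the hazard model below are illustrative assumptions, not taken from any fielded system or from the article itself:

```python
# Hypothetical sketch: all names and scenarios here are illustrative
# assumptions, not drawn from any actual robotic system.

# A preprogrammed deterministic policy: every situation the robot may
# face must be enumerated by its developers in advance.
DETERMINISTIC_RULES = {
    "obstacle_ahead": "halt",
    "clear_path": "advance",
}

def deterministic_policy(situation):
    # An unanticipated situation has no matching rule, so the policy
    # simply has no defined response.
    return DETERMINISTIC_RULES.get(situation, "no_rule_defined")

# A probabilistic policy: developers instead supply a world model that
# assigns a hazard likelihood to a noisy sensor reading, so the policy
# can respond to inputs that were never explicitly enumerated.
def hazard_probability(distance_reading):
    # Designer-chosen model: hazard likelihood rises as distance falls.
    # The model's form and its threshold are human decisions -- the
    # human influence the article describes.
    return max(0.0, min(1.0, 1.0 - distance_reading / 10.0))

def probabilistic_policy(distance_reading, threshold=0.5):
    return "halt" if hazard_probability(distance_reading) >= threshold else "advance"

# The deterministic policy cannot handle a situation outside its table...
print(deterministic_policy("partially_blocked_path"))  # no_rule_defined
# ...while the probabilistic policy responds to any distance reading,
# though only within the behavior its designers' model defines.
print(probabilistic_policy(2.0))   # halt    (hazard probability 0.8)
print(probabilistic_policy(9.0))   # advance (hazard probability 0.1)
```

The second policy handles readings its developers never listed, yet its every response is still bounded by the model and threshold they formulated, which is the sense in which flexibility remains a function of the designers' problem definitions.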
Collaborative autonomy. Both conceptions of
machine autonomy described above (autonomy as
high-level automation and autonomy as something
other than automation) focus on what machines can
do without direct human control. However, machine
autonomy does not necessarily mean that humans
will be taken out of the loop. Human operators may
still be involved in the decision-making processes
that autonomous robots execute. As explained in an
earlier edition (published in 2009) of the Roadmap
(Unmanned Systems Integrated Roadmap FY 2009–
2034): “First and foremost, the level of autonomy
should continue to progress from today’s fairly high
level of human contr