The problem illustrated by V’Ger is that a smart algorithm that learns
from all it encounters while pursuing its mission may get smart
enough to do a curiously enhanced version of what it was originally
programmed to do—something that may counter the vision of the
programmers (seemingly) in charge.
LIMITED ETHICAL BOUNDS
Researchers, theorists, engineers, and some outspoken CEOs are
increasingly concerned with the practical question of how to control
algorithms that have been designed to teach themselves. Many technology
leaders, including Michael Dell, see the challenge as one of the
responsibilities that comes with propagating the technology that will
change how the world works—for the better. All acknowledge the
need to understand and mitigate unintended consequences and unexpected
steps that some machines may take as they work to achieve
the objective they were programmed to reach.
“As long as the algorithm has no boundaries, then it can get to its
goal any way it figures out,” says Mark Halverson, who co-chairs the
Institute of Electrical and Electronic Engineers’ (IEEE) Global Initiative
on Ethics of Autonomous and Intelligent Systems. “We have to
acknowledge that our ability to put moral and ethical bounds around
our technology is not that great.”
AI researchers have compiled a spreadsheet of some astonishing
deviations made by AI bots. For example, in 1997, an algorithm designed
to play Tic-Tac-Toe achieved victory by hacking its opponent’s
algorithm and crashing its systems, ending the game with a forfeit.
Another machine that was taught to detect poisonous mushrooms
from those that are safe to eat correctly observed during learning
that every other mushroom it was shown was safe. The problem was
that when it went to work sorting safe mushrooms on its own, it
adopted the same every-other pattern, essentially classifying safety
based on how, rather than what, it had learned.
The reason for the errors is fairly intuitive: Algorithms do what they
are programmed to do, not necessarily what we intend, explains Katja
Grace of the Machine Intelligence Research Institute at University of
California Berkeley. That means the deviations are simply and plainly
the result of bad programming, her colleague, Stuart Russell, says.
The problem of allowing room for unintended consequences isn’t
new, but as machine learning capabilities become more sophisticated,
researchers and experts have begun to pay closer attention to the
33