campusreview.com.au
using Google’s DeepMind computer, monstered the undisputed
world champion – the ‘Roger Federer of Go’ – Lee Sedol. “To be
honest, we are a bit stunned,” Google DeepMind chief executive
Demis Hassabis said, mirroring Sedol and the Go community’s own
shock. “We came here to challenge Lee Sedol, as we wanted to
learn from him and see what AlphaGo was capable of.”
For some in the AI and supercomputing community,
this was a Holy Grail or white-whale moment. Hassabis
said AlphaGo ‘trained’ for this victory by playing millions of games
a day to learn the intuition Go requires from its elite players,
alongside the necessary tactical acumen.
Not everyone is best pleased, however, with the gathering
pace of AI like AlphaGo.
Michael Harre is a senior lecturer in complex systems at The
University of Sydney. He said that as AIs like AlphaGo become
increasingly autonomous, they may unilaterally cut human beings
out of the decision-making loop.
“The worry is that a computer becomes autonomous, in the
sense that it takes the humans out of the control decision loop,”
Harre said. “And to that extent, is it possible that we could lose
control over those AIs? That’s the question I think most people are
concerned about.
“Do I think that there’s a particularly significant risk? I think the answer
is probably not, but without the discussion, how would we know?”
But would we even be able to tell if we were losing control?
Could AIs trick us into thinking we were in control, like inmates
running the asylum?
“I think we’re a long way from a deceptive AI, from an AI that
is aware of what we know and is able to hide its intentions from
us,” Harre replied. “But certainly, we have the potential for rogue
machinery.”
Weren’t we also told that AlphaGo beating a professional
human was a long way off, though? Harre’s colleague, Professor
Dong Xu, recently moved from Singapore to take up the chair in
computer engineering at The University of Sydney’s School of
Electrical and Information Engineering. He said there weren’t
many more steps for supercomputers like AlphaGo to traverse
before they become potentially frightening adversaries.
“A human player can be affected by emotions such as pressure
or happiness, but a computer will not,” Xu said. “If a supercomputer
could totally imitate the human brain, and have human emotions
such as being angry or sad, it would be even more dangerous.”
In the recent Academy Award-winning film Ex Machina, a tech
wizard in the style of Mark Zuckerberg hosts one of his talented
programmer employees at his cabin in the woods. Over the course
of a week, the pair interact with a humanoid robot, Ava, who
gradually evolves more humanistic traits and tendencies, even as
the humans around her regress. This cinematic fable
plays on our fears of the mechanical other, in much the same way
as Frankenstein and Blade Runner. In Ex Machina, Ava fetishises
freedom from her bucolic cage, which makes one wonder if all the
inchoate robots and androids we are increasingly seeing might one
day plot similar escapes.
“If a robot escaped,