CLINICAL INNOVATORS
within normal bounds). I’m hoping it’s possible
to do a lot better and to reduce mortality,
morbidity, and the enormous costs associated
with intensive care.
You were asked to speak at the United Nations
this past spring to address some of the
dangerous applications of AI. Could you tell us
about that?
The UN is very concerned about lethal autonomous
weapons—robots that decide where to go and
whom to kill. In April, I was asked to explain to
the UN meeting in Geneva the basic concepts of
AI and autonomy, and how they will be applied to
autonomous weapons in the future. After considering
this question for a while, I concluded that an arms
race in this area would lead to cheap, mass-produced
weapons of incredible agility and lethality that
would leave humans largely defenseless. I helped
write an open letter promoting a treaty to ban fully
autonomous weapons; it has been signed by about
3,000 AI and robotics researchers.
Is superhuman AI a reachable goal?
I see no reason to suppose that progress in AI
will come to a halt. The benefits of progress
are potentially enormous, and so the rate of
investment in research is likely to increase.
Almost certainly the kinds of AI systems we
build will not have much in common with human
intelligence, any more than Google has much
in common with a human librarian, so there
will be no meaningful notion of “machine IQ”
and no obvious crossover point to superhuman
AI. However, it seems likely that machines will
exceed human capabilities in more and more
spheres of activity, and that gradually those
spheres will become more integrated, so that
it will make sense to talk of general-purpose
intelligent systems.
What constitutes a conscious machine?
Humans, and perhaps some animals, are conscious
machines. We really have no idea how or why,
and no idea how to make a conscious computer
or to determine whether a given computer is
conscious. Sorry!
What does the future of AI look like?
It depends on our choices. Artificial intelligence
systems could provide the greatest increase in
human capabilities and human happiness of any
technological advance in history, mainly because they could enable so many other advances.
However, this future requires solving an important open problem: ensuring that the objectives
we put into the AI systems are perfectly aligned
with those of humans, so that we are happy
with the resulting behavior and we are confident
that the AI system will always act as a faithful assistant. This is far from easy. We aren’t
aligned with each other and sometimes not even
with our own selves. ■
Katlyn Nemani, MD, is a physician
at New York University.