policy & reform
campusreview.com.au
The morality of
driverless cars
The public should have a
say on how autonomous
vehicles make life-threatening
decisions, scientist argues.
By Loren Smith
At least 11 auto manufacturers,
including Toyota, Audi and Tesla,
are putting driverless car plans in
motion. With these soon-to-arrive robot
drivers comes the need to program ethics
into them. Think about a situation like the trolley
problem, where a driver must choose
between intervening and killing one person
to save five, or doing nothing and allowing
those five people to die.
But who should make these life‑or‑death
decisions? A Syrian-Australian MIT scientist
and his team are trying to convince tech
companies and policymakers that the public
should have a say.
Associate professor of media arts and
sciences Iyad Rahwan and his colleagues
have compiled a database, called the Moral
Machine, of 40 million individual decisions
to make their case. The responses
come from people in 233 countries
and territories.
The research shows that humans have
some core, shared morals. When it comes
to who dies and who should be spared,
people privilege humans over animals, the
young over the old, and saving more lives
than fewer.
Yet across the globe, certain opinions
differ. For example, the researchers claim
that residents of Central America and
France feel similarly about prioritising the
lives of women and those with athletic
prowess. Also, for those from more
economically unequal countries, like
Russia, social status appears to be an
important determinant of who should live.
The researchers’ reflections on people’s
responses were compiled in an article
published in Nature.
Other researchers, including some from
Australia, also weighed in on the findings.
Distinguished Professor Mary-Anne
Williams, director of disruptive innovation at
the Office of the Provost at UTS, wondered
about the trickle‑down effects of driverless
car morality programming.
“In order to minimise liability, car
companies may design cars that slow
down in wealthy neighbourhoods, or
that kill humans rather than cause more
expensive serious injuries,” she said.
Williams added that car morals may be
difficult to apply in practice, as accidents
often aren’t as clear cut as choosing
between the death of one person or the
deaths of many.
Hussein Dia wondered whether human
thoughts, crowd-sourced or not, should
dictate machine ethics. The associate
professor in transport engineering at the
Swinburne University of Technology noted
that the only framework proposed so far
does not permit discriminating between
potential victims on the basis of personal
characteristics.
“This clearly clashes with the strong
preference for sparing the young [such as
children] that was assessed through the
Moral Machine,” he said.
Professor Toby Walsh, perhaps the
best-known Australian AI specialist, added
that people’s stated intentions don’t always
tally with their behaviour, so their
responses in this study should be
treated with caution. The research leader
of the Optimisation Research Group at
Data61 also raised a potential issue with
the study’s design, throwing its findings
into further doubt: “I completed their
survey and deliberately tried to see what
happened if I killed as many people as
possible. As far as I know, they didn’t filter
out my killer results.”
Regardless, like Dia, Walsh questioned
whether human morals should be applied
to machines at all.
“We should hold machines to higher
ethical standards than humans for many
reasons: because we can, because this
is the only way humans will trust them,
because they have none of our human
weaknesses, and because they will sense
the world more precisely and respond
more quickly than humans possibly can.”
Associate Professor Jay Katupitiya, from
the UNSW School of Mechanical and
Manufacturing Engineering, extended this
viewpoint: “In my opinion, programming
these intentions is more immoral than not.”
Unlike Walsh, however, he believes the
morality of driverless cars isn’t the biggest
road safety issue; human-driven cars are.
As such, he thinks we shouldn’t become
too obsessed with “rare ‘dilemma’ cases”
like the trolley problem. Rather, we should
focus on the “real, everyday safety gains” this
technology can offer.
Yet Katupitiya is something of a lone
voice in this debate. Most academics in
the field seem deeply concerned with this
morality issue – which extends to AI more
broadly. Perhaps when it comes to life or
death, the resolution of moral dilemmas
has added urgency. ■