“There are going to be times where the driver has to take over. And that turns out to be by far the most dangerous and totally understudied issue.”
how the cars should be designed to ensure the trade-off is done safely.

Nass, the simulator’s chief champion, boasts that Stanford’s new tool is unique in its ability to shift instantaneously from full- to zero-automation, and Nass plans to track drivers’ concentration, attention, emotional state and performance when they take over for the self-driving car under different conditions.
His lab’s findings will help inform the design of future driverless cars — from the layout of their dashboards and infotainment systems, to how they deliver alerts and ask drivers to take control. Do people drive more safely if their cars speak to them, flash messages or, say, vibrate the steering wheel? Should cars give an update on road conditions just before the human driver takes over at the wheel, or are such details distracting? And how does a driverless car clearly outline what it can and can’t do? Nass has a laundry list of such questions, the answers to which are likely to be monitored closely by automakers: In addition to his position at Stanford, Nass also consults for Google on its driverless cars and for major car companies, such as Nissan, Volkswagen, Volvo, Ford and Toyota (Toyota helped fund the Stanford simulator).
These car manufacturers, along with Google, have assured the public that driverless cars will make our commutes safer, more efficient and more productive. They point out that machines don’t drink and drive or doze off at the wheel. Since algorithms react more quickly than humans, cars can be grouped into platoons, eliminating stop-and-go traffic and conserving fuel. Drivers will be able to read, text and work while their intelligent vehicles handle four-way stops.
Yet despite these rosy predictions, carmakers won’t immediately deliver robo-taxis. The first generation of self-driving cars are more likely to be capable co-pilots that pass driving duties back