AI Trustworthiness Challenges and Opportunities Related to IIoT
suddenly pitched upward and nearly
stalled. The pilots were able to avoid
the stall, but the plane behaved
erratically for two minutes, after
which it went back to normal. The
issue was identified as a bad solder
joint that caused a control unit to
transmit erroneous signals.
In November of 2014, a Lufthansa
Airbus A321 began acting strangely
on autopilot. When the copilot
turned it off, the plane went into a
dive. With the help of the captain, a
crash was avoided, but investigators
determined that two of the plane's
sensors had frozen in place, causing
them to feed bad data.

In January of 2016, alarms suddenly
went off on West Air Sweden Flight
294 and the autopilot disengaged.
The captain's instruments showed
that the nose was high, putting the
plane at risk of a stall. The captain
obeyed his instruments and pushed
the plane forward so aggressively
that it exceeded its maximum
operating speed, and within 80
seconds of the first alarm, the plane
slammed into the ground.

In October of 2018, and again in
March of 2019, a Boeing 737 Max 8
went into a steep nose dive, believed
to be the result of a faulty sensor
erroneously sending bad data to an
automated system designed to keep
the nose from pointing up and
potentially stalling the plane. Both
planes crashed.

As always, the "garbage in, garbage out"
rule applies. In these cases, bad data led to
failures in automated systems, placing the
human pilots in extremely stressful
situations through loss of control of their
airplane and ultimately resulting in three
crashes that killed everyone onboard. With
proper training, AI systems may be able to
recognize that the data from one sensor is
inconsistent with the other sensors, make
better decisions that avoid a critical
out-of-control situation and respond
appropriately without involving the humans
in the process.

Using AI to Improve the Trustworthiness of
IoT Systems

AI AND SAFETY

When AI decisions involve actions in the
physical world, safety is involved, because
in the physical world the consequences of a
bad decision can endanger human health
and welfare: the lives of people, their
health and the environment in which they
live. The goal of safety considerations is to
protect people.

We can structure the impact of AI decisions
on the physical world into the following
classes 15:
Advisory: An AI system provides an
operator with useful data that influences
operational decisions. The data source is
so complex that the operator’s mind is
not able to produce a necessary
conclusion about the data in a timely
15 see also SAE's automation level definitions, https://en.wikipedia.org/wiki/Self-driving_car#Levels_of_driving_automation
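The cross-sensor plausibility check suggested earlier (recognizing that one channel's data is inconsistent with the other sensors) can be sketched as a simple median vote over redundant channels. This is a minimal illustration, not the method of any system described above; the function, channel names, values and tolerance are all assumptions:

```python
from statistics import median

def cross_check(readings, tolerance):
    """Compare redundant sensor channels and flag outliers.

    `readings` maps channel names to current values. The median of
    three or more redundant channels is unaffected by a single faulty
    channel (for example, a frozen angle-of-attack vane), so it can
    serve as a trusted consensus value.
    """
    consensus = median(readings.values())
    suspect = {name: value for name, value in readings.items()
               if abs(value - consensus) > tolerance}
    return consensus, suspect

# Three redundant angle-of-attack channels; one is stuck at a bad value.
aoa = {"aoa_left": 4.9, "aoa_right": 5.1, "aoa_standby": 24.0}
consensus, suspect = cross_check(aoa, tolerance=3.0)
print(consensus)        # 5.1
print(list(suspect))    # ['aoa_standby']
```

With three or more redundant channels, a single faulty channel cannot move the median, so the consensus value remains usable while the outlier is flagged for exclusion, the same intuition behind voting schemes in triple-redundant avionics.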
IIC Journal of Innovation
- 84 -