AI Trustworthiness Challenges and Opportunities Related to IIoT
Figure 4: Biased or "attacked" data used to train a machine learning algorithm leads to inappropriate lessons learned; when the trained model is then applied to real data, it produces bad output.
For example, if people feed an AI system hand-picked data in order to train it toward the results they want, they may forfeit the benefits machine learning could offer: without a wide variety of inputs, the system cannot derive surprising conclusions. Restricting medical case data to a narrow sample is one example.
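A minimal sketch of this effect, using entirely invented data and assuming scikit-learn is available (the feature names and thresholds are hypothetical): a model trained only on a narrow slice of cases generalizes worse to the full population than one trained on a broad sample, because the narrow slice never exposes the age-related risk it needs to learn.

    # Sketch: narrow vs. broad training samples (all data is synthetic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_cases(n, age_low, age_high):
        age = rng.uniform(age_low, age_high, n)
        biomarker = rng.normal(0.0, 1.0, n)
        # True risk rises sharply for older patients -- an effect a
        # young-patients-only sample never observes.
        outcome = ((age > 60) & (biomarker > 0.0)) | (biomarker > 2.0)
        return np.column_stack([age, biomarker]), outcome.astype(int)

    X_broad, y_broad = make_cases(5000, 20, 90)    # wide variety of inputs
    X_narrow, y_narrow = make_cases(5000, 20, 40)  # limited sample
    X_test, y_test = make_cases(2000, 20, 90)      # real population

    broad = LogisticRegression(max_iter=1000).fit(X_broad, y_broad)
    narrow = LogisticRegression(max_iter=1000).fit(X_narrow, y_narrow)

    print("accuracy, trained on broad sample: ", broad.score(X_test, y_test))
    print("accuracy, trained on narrow sample:", narrow.score(X_test, y_test))

The narrow-sample model scores noticeably lower on the broad test population, mirroring the medical-case example above.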
There have been several documented safety incidents involving automated processes in air travel, all of them caused by bad data being fed into the system:
In October of 2008, Qantas Flight 72 suddenly went into two abrupt nosedives after warnings and alarms triggered on the flight deck, even though the plane was flying stable and level. The crew's controls had no effect at first, but the pilots were eventually able to regain control. The problem was traced to a malfunction in an electronic component that determines the plane's position and motion, which resulted in faulty information being fed to the autopilot.
In May of 2011, a Dassault Falcon 7X business jet was descending when its horizontal stabilizer trim ran away, pitching the aircraft into a steep, uncommanded climb. The crew regained control, and the fault was traced to a defective electronic trim control unit that fed erroneous commands to the flight controls; the entire Falcon 7X fleet was temporarily grounded while the defect was identified.
Attacked data could be deliberately introduced into training in order to influence results. For example, if false data were used to train a predictive maintenance system, the resulting model could be exploited to damage or destroy equipment: teaching it that low oil levels are "ok" would be an effective attack.
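A minimal sketch of such a poisoning attack, using invented sensor data and assuming scikit-learn (the feature names oil_level and temperature, and the thresholds, are hypothetical): relabeling low-oil training examples as "ok" produces a model that no longer flags dangerously low oil.

    # Sketch: label-flipping attack on a predictive maintenance model.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)

    n = 4000
    oil_level = rng.uniform(0.0, 1.0, n)          # 0 = empty, 1 = full
    temperature = rng.normal(70.0, 10.0, n)       # degrees C
    needs_maintenance = ((oil_level < 0.2) | (temperature > 90.0)).astype(int)
    X = np.column_stack([oil_level, temperature])

    clean = DecisionTreeClassifier(max_depth=3).fit(X, needs_maintenance)

    # Attack: relabel every low-oil example as "ok" before training.
    poisoned_labels = needs_maintenance.copy()
    poisoned_labels[oil_level < 0.2] = 0
    poisoned = DecisionTreeClassifier(max_depth=3).fit(X, poisoned_labels)

    test = np.array([[0.05, 70.0]])               # very low oil, normal temp
    print("clean model flags low oil:   ", clean.predict(test)[0])     # 1
    print("poisoned model flags low oil:", poisoned.predict(test)[0])  # 0

The attacker never touches the deployed model, only the training labels, yet the poisoned model silently stops raising the alert that protects the equipment.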
Bias in a training data set can be a problem even when it is unintentional, and it can lead to serious consequences, including breaking the law. An example would be a loan application approval system that, in effect, redlines applicants from certain neighborhoods.
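One way to catch this before deployment is to audit the model's decisions across groups. Below is a minimal sketch with invented decisions (the neighborhood groups and approval rates are hypothetical) that applies the "four-fifths rule" used in U.S. disparate-impact analysis:

    # Sketch: auditing loan decisions for disparate impact (synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical model outputs: 1 = approve, 0 = deny, plus each
    # applicant's neighborhood group.
    neighborhood = rng.choice(["A", "B"], size=1000)
    approved = np.where(neighborhood == "A",
                        rng.random(1000) < 0.60,   # 60% approval in A
                        rng.random(1000) < 0.35)   # 35% approval in B

    rates = {g: approved[neighborhood == g].mean() for g in ("A", "B")}
    print("approval rates:", rates)

    # Four-fifths rule: flag if any group's approval rate falls below
    # 80% of the highest group's rate.
    ratio = min(rates.values()) / max(rates.values())
    if ratio < 0.8:
        print(f"potential disparate impact: ratio {ratio:.2f} < 0.80")

A check like this does not prove or disprove illegal discrimination, but it surfaces the kind of unintentional redlining described above early enough to investigate the training data.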