AI Trustworthiness Challenges and Opportunities Related to IIoT
Communicating this understanding can
increase confidence in the system as part of
the broader community context.
AI decisions that drive physical actions, such as controlling IoT actuators, are not 100% predictable, yet trust must be grounded in evidence that they operate appropriately. This is an argument for creating a model of the system (e.g. a digital twin), making it possible to test and simulate the system's operation and anticipate outcomes. 14
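As a minimal sketch of this idea, the controller's decision logic can be exercised against a simulated stand-in for the physical asset, accumulating evidence of safe behavior before any real actuator is touched. The tank model, level thresholds, and flow rates below are all illustrative assumptions, not part of any referenced system.

```python
# Illustrative "digital twin": a simulated tank used to test an actuator
# controller's decisions before trusting it on physical hardware.
# All names and constants (TankTwin, thresholds, flow rates) are hypothetical.

class TankTwin:
    """Simulated tank standing in for the physical asset."""

    def __init__(self, level=50.0, capacity=100.0):
        self.level = level
        self.capacity = capacity

    def apply(self, valve_open):
        """Advance one time step given the controller's actuator command."""
        inflow = 5.0 if valve_open else 0.0
        outflow = 2.0  # constant demand drawn from the tank
        self.level = max(0.0, min(self.capacity, self.level + inflow - outflow))
        return self.level

def controller(level):
    """Decision logic under test: open the inlet valve when the level is low."""
    return level < 40.0

# Simulate many steps and check the anticipated outcome: the level must stay
# within safe bounds, giving evidence the controller behaves appropriately.
twin = TankTwin(level=50.0)
history = [twin.apply(controller(twin.level)) for _ in range(200)]
assert all(10.0 <= lvl <= 90.0 for lvl in history), "unsafe level reached"
print("level stayed in [%.1f, %.1f]" % (min(history), max(history)))
```

The same pattern scales up: the richer the twin, the more of the AI's decision space can be tested and its outcomes anticipated offline.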
It is difficult to trust a system that cannot be understood, such as a neural net that makes decisions without providing a clear record of how those decisions are reached. This has become a concern with systems used to automatically approve loans, since such systems can exhibit unintentional bias that could break laws, without being explicitly programmed to have such bias 11 12 . One approach being taken is to perform a sensitivity analysis, varying the inputs in a methodical manner to characterize the behavior of the neural net and create evidence of how the system works. 13
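A sensitivity analysis of this kind can be sketched in a few lines: perturb each input around a base point and measure how the output responds, revealing which features drive the decision. The two-layer "approval score" model and its weights below are toy stand-ins for illustration only, not a real loan model.

```python
# Hedged sketch of sensitivity analysis: vary each input methodically and
# record how the model's output responds, producing evidence of which
# features drive decisions. The fixed-weight toy network is illustrative.

import math

def model(income, debt_ratio):
    """Toy 'approval score': a tiny fixed-weight neural net for illustration."""
    h1 = math.tanh(0.8 * income - 1.2 * debt_ratio)
    h2 = math.tanh(-0.5 * income + 0.9 * debt_ratio)
    return 1.0 / (1.0 + math.exp(-(1.5 * h1 - 1.1 * h2)))

def sensitivity(f, base, feature, delta=0.01):
    """Finite-difference estimate of d(output)/d(feature) at a base point."""
    lo = dict(base); lo[feature] -= delta
    hi = dict(base); hi[feature] += delta
    return (f(**hi) - f(**lo)) / (2 * delta)

base = {"income": 0.6, "debt_ratio": 0.4}
for feat in base:
    print(f"{feat}: sensitivity {sensitivity(model, base, feat):+.3f}")
```

Sweeping the base point over a grid of representative applicants extends this into a behavioral map of the network, the kind of evidence the interpretability literature cited above describes in much more depth.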
TRUSTWORTHINESS OF AI SYSTEMS
Trustworthiness of AI systems that learn
requires that the data and approach used to
train the system be trustworthy, as well as
the system itself.
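One concrete way to make the training side auditable is to fingerprint and validate the training data before it reaches the learner, so that a given model can be traced back to a known, plausible dataset. The record schema, field names, and ranges below are hypothetical assumptions for the sketch.

```python
# Sketch of trustworthy-training hygiene: fingerprint the dataset for
# reproducibility and reject implausible records before training.
# The schema (sensor_c, label) and its valid ranges are hypothetical.

import hashlib
import json

def fingerprint(records):
    """Stable hash of the dataset, recorded so training runs are traceable."""
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def validate(records):
    """Flag records that fall outside plausible ranges before training."""
    problems = []
    for i, r in enumerate(records):
        if not (0 <= r["sensor_c"] <= 150):
            problems.append((i, "sensor_c out of range"))
        if r["label"] not in ("ok", "fault"):
            problems.append((i, "unknown label"))
    return problems

data = [
    {"sensor_c": 72.5, "label": "ok"},
    {"sensor_c": 880.0, "label": "fault"},    # implausible reading
    {"sensor_c": 65.1, "label": "degraded"},  # label outside the schema
]
print("fingerprint:", fingerprint(data)[:16])
for idx, why in validate(data):
    print(f"record {idx}: {why}")
```

Logging the fingerprint alongside the trained model ties the system's behavior to the exact data and approach that produced it, one small part of what makes the overall learning pipeline trustworthy.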
11 “Dangers of Human-Like Bias in Machine-Learning Algorithms”, May 2018, http://scholarsmine.mst.edu/cgi/viewcontent.cgi?article=1030&context=peer2peer
12 “This is how AI bias really happens—and why it’s so hard to fix”, MIT Technology Review, https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/
13 “Methods for Interpreting and Understanding Deep Neural Networks”, http://iphome.hhi.de/samek/pdf/MonDSP18.pdf
14 For example, see “Model-Based Engineering of Supervisory Controllers for Cyber-Physical Systems” in “Industrial Internet of Things, Cybermanufacturing Systems”, Springer, 2017, https://link.springer.com/chapter/10.1007/978-3-319-42559-7_5
IIC Journal of Innovation