
AI Trustworthiness Challenges and Opportunities Related to IIoT

Advisory: The AI system provides information that supports the operator's decisions. The operator cannot verify the data in the available time, so the AI system is needed to instantly determine whether the data is correct or not and to present it to the operator in a useful manner.

Warning: The AI system creates a warning, and an instant operator decision is necessary to prevent an incident. Again, the operator is not able to completely determine in the available time whether this warning is correct.

Autonomous: The AI system takes over physical control from the operator and executes the operations directly in the physical world.

In the world of intelligent cars, sensors can provide information about the distance to the car ahead and enable these different approaches. "Advisory" means that the system tells the driver "your distance is safe/unsafe/dangerous." "Warning" means that the AI system explicitly warns about an impact if the operator does not react. "Autonomous" means that the AI system uses the brakes to prevent an imminent impact.

In contrast to a static distance control system, which is widely available in new cars, an AI-based distance control system could use additional contextual information to produce better decisions. Such information could include the status of the street (wet/dry), the driving behavior of the car ahead (stable/unstable speed) and the latest cloud-based information about crashes, drawn from learning about past traffic situations and the likelihood of an accident. (A sketch of such a context-aware decision function appears at the end of this section.)

An AI system makes an incorrect decision due to incorrect data, incomplete learning or incorrect decision algorithms. In the "advisory" case, the driver may be irritated by the information; in the "warning" case, the driver will probably trust the system more than their own interpretation of the situation and follow incorrect advice, possibly leading to an accident; and in the "autonomous" case, the AI system could cause an accident with a bad decision.

In the last two cases, a redundant safety system could prevent an accident if it is designed without relying on the same AI system. One example of this approach is multiple independent AI learning systems that compare results, such as two-out-of-three voting (see the voting sketch at the end of this section). The more independent the systems become, the greater the number of redundant systems required; logic suggests three independent redundant systems where high levels of autonomy are desired. Ultimately these "redundant systems" have to be combined into one model ensemble. The costs of this approach suggest that other ways will be developed, possibly methods of validating and cross-checking the data on each device to avoid unexpected decisions.

In the case of incorrect decisions, it is necessary that the AI system learn and improve its decisions in the future. This alone is not enough: to have trust in the system, there must be an explanation of the reason for the accident and clarity about the lessons learned (consider the need for confidence in airlines, for example). For such an investigation, the AI system must record the "decision path" (see the logging sketch at the end of this section); otherwise, finding the reason for a wrong decision, and enhancing the AI system to prevent a similar case in the future, would be impossible. In the case of a neural net, a decision path may not be directly available, because the decision emerges from many learned weights rather than from explicit rules.
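
To make the contrast with a static distance control concrete, the following Python sketch shows how contextual inputs (street status, behavior of the car ahead, cloud-based accident risk) could adjust the safe-distance threshold and select one of the three approaches. All names, thresholds and scaling factors here are illustrative assumptions, not part of any real system; an actual AI-based controller would learn such relationships rather than hard-code them.

    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        ADVISORY = "advise"     # inform the driver: safe/unsafe/dangerous
        WARNING = "warn"        # explicitly warn about an impact
        AUTONOMOUS = "brake"    # take physical control: apply the brakes

    @dataclass
    class Context:
        wet_street: bool          # status of the street (wet/dry)
        unstable_lead_car: bool   # driving behavior of the car ahead
        crash_risk: float         # cloud-based accident likelihood, 0..1

    def required_distance(speed_mps: float, ctx: Context) -> float:
        """Minimum safe distance in meters; all factors are illustrative."""
        base = speed_mps * 2.0                 # roughly a two-second rule
        if ctx.wet_street:
            base *= 1.5                        # longer braking distance when wet
        if ctx.unstable_lead_car:
            base *= 1.2                        # unpredictable car ahead
        return base * (1.0 + ctx.crash_risk)   # scale by learned accident risk

    def decide(distance_m: float, speed_mps: float, ctx: Context) -> Action:
        safe = required_distance(speed_mps, ctx)
        if distance_m < 0.5 * safe:
            return Action.AUTONOMOUS   # imminent impact: brake
        if distance_m < safe:
            return Action.WARNING      # warn the driver to react
        return Action.ADVISORY         # report that the distance is safe

A static system would use only the fixed distance threshold; the point of the sketch is that the same sensor reading can yield a different action once context is taken into account.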
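The two-out-of-three voting over independent AI systems mentioned above could be sketched as follows. The voting logic itself is standard; the fallback policy for the case where all three systems disagree is an assumption made here for illustration, and a real design would have to define that policy explicitly.

    from collections import Counter

    # Illustrative conservatism ordering: braking is treated as the
    # safest fallback when the three independent systems fully disagree.
    CONSERVATISM = {"advise": 0, "warn": 1, "brake": 2}

    def vote_2oo3(decisions):
        """Two-out-of-three voting over the outputs of three independent
        AI systems. Returns the majority decision; with no majority,
        falls back to the most conservative action."""
        assert len(decisions) == 3, "2oo3 voting needs exactly three inputs"
        winner, count = Counter(decisions).most_common(1)[0]
        if count >= 2:
            return winner
        return max(decisions, key=lambda d: CONSERVATISM[d])

    # Example: two systems warn, one only advises -> the vote is "warn".
    print(vote_2oo3(["warn", "warn", "advise"]))

Note that the voter itself must not depend on the AI systems it checks; it is deliberately simple, deterministic code.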
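Recording the "decision path" can be as simple as an append-only log of what the system saw, which model decided, and with what confidence. The fields below are a minimal sketch of what an after-the-fact investigation might need, not a defined standard; for a neural net, the recoverable "path" may be limited to inputs, model version and outputs.

    import json
    import time

    def record_decision(log_path, inputs, model_version, decision, confidence):
        """Append one auditable record of a decision (the 'decision path')
        so an incorrect decision can be investigated after the fact."""
        record = {
            "timestamp": time.time(),        # when the decision was made
            "model_version": model_version,  # which trained model decided
            "inputs": inputs,                # sensor and context data used
            "decision": decision,            # the action that was taken
            "confidence": confidence,        # the model's own certainty
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # one JSON record per line

    # Example: log a warning decision with the context that produced it.
    record_decision("decisions.jsonl",
                    {"distance_m": 12.0, "wet_street": True},
                    "model-v1.3", "warn", 0.87)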