
Securing the ML Lifecycle
distinct model owner. At this point, we already note that a full transfer of ownership is different from a licensed model. The actual inference code required to query the model may again be owned by a separate actor. Likewise, the query data used for inference may be owned by an entity different from all the actors mentioned so far. Finally, all of this data (training data, training code, trained model, inference data, …) may be processed in a technical environment that is not under the full control of its owner (e.g., in a cloud service).

4 ATTACKING THE MACHINE LEARNING LIFECYCLE

A variety of attacks that could compromise the individual stages and assets of the machine learning lifecycle has been identified in recent years.11 We align our observations with several of the taxonomies provided by academic,12 industrial,13 non-profit,14 and governmental institutions.15 Note that we do not discuss how machine learning could be used by malicious actors.16 Again, our intention is not to present a full list of all possible attacks, but rather a realistic lifecycle model, the stakeholders involved, selected attacks, and selected countermeasures.

4.1 ATTACKING THE TRAINING DATA

The quality of the training data is the baseline for a model that produces accurate predictions or classifications. If an attacker succeeds in changing existing data or injecting data at will, we consider this a poisoning attack.
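As a minimal illustration (our own sketch, not taken from any of the cited works), the following Python snippet simulates such an attack by overwriting a fraction of the training labels of a small scikit-learn digits classifier with random values. The dataset, the model, and the poisoning rates are assumptions chosen purely for demonstration; the printed accuracies show how the injected labels degrade the model.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate):
    # Overwrite the labels of a randomly chosen fraction of the training set.
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    poisoned[idx] = rng.integers(0, 10, size=len(idx))
    return poisoned

for rate in (0.0, 0.2, 0.4):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, poison_labels(y_train, rate))
    print(f"poisoning rate {rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")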

4.1.1 POISONING ATTACK

The intention behind such an attack could be to diminish the model's quality or even to cause a denial of service. The attacker could also try to influence the model generation process so that the model fails only in certain situations, to the advantage of the attacker. In this case, we speak of a targeted poisoning attack, as the attacker wants only specific examples to be misclassified. How such a poisoning attack can be carried out by a trained adversarial model has already been described in a malware detection scenario.17
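A targeted variant can be sketched in the same hypothetical setting (again our own illustration, with an arbitrarily chosen source and target class): only the training samples of one class are relabelled, so overall accuracy drops only modestly while the examples the attacker cares about are misclassified almost entirely.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SOURCE, TARGET = 3, 8  # hypothetical attacker goal: have every "3" treated as an "8"

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Targeted label flipping: relabel only the source class in the training data.
y_poisoned = y_train.copy()
y_poisoned[y_train == SOURCE] = TARGET

model = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)

print(f"overall test accuracy: {model.score(X_test, y_test):.3f}")
print(f"accuracy on the attacked class {SOURCE}: "
      f"{model.score(X_test[y_test == SOURCE], y_test[y_test == SOURCE]):.3f}")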
In the context of our medical image classification scenario, poisoning the training data could result in a model that will not correctly classify certain images. In fact, it has already been
11 M. Xue, C. Yuan, H. Wu, Y. Zhang, and W. Liu, "Machine Learning Security: Threats, Countermeasures, and Evaluations," IEEE Access, vol. 8, pp. 74720–74742, 2020, DOI: 10.1109/ACCESS.2020.2987435.
12 M. Barreno, B. Nelson, A. D. Joseph, et al., "The Security of Machine Learning," Machine Learning, vol. 81, pp. 121–148, 2010.
13 https://docs.microsoft.com/en-us/security/engineering/failure-modes-in-machine-learning
14 https://atlas.mitre.org/
15 https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges
16 https://www.europol.europa.eu/publications-events/publications/malicious-uses-and-abuses-of-artificial-intelligence
17 S. Chen, M. Xue, L. Fan, S. Hao, L. Xu, H. Zhu, and B. Li, "Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach," Computers & Security, vol. 73, pp. 326–344, 2018.