Securing the ML Lifecycle
Researchers have successfully reported that medical training data can be intentionally poisoned to later result in incorrect patient treatment recommendations [18].
Even more sophisticated attacks can be performed if the adversary has knowledge of the algorithms used for preprocessing. Image material used for training, for example, can be altered if the scaling algorithm is known: the unscaled image is edited in such a way that it does not look different from the original, but the scaled image contains an adversarial artifact that is erroneously used for training, or even scales down to a completely different picture. Several works have been published describing such image-scaling attacks [19] and countermeasures [20].
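To illustrate the core idea, the following minimal Python sketch embeds a small "payload" image into a larger decoy by overwriting only those pixels that a nearest-neighbour downscaler samples; at full resolution the image still looks like the decoy, but the pipeline's scaled version shows the payload. The file names and the 64x64 target size are illustrative assumptions, and a real attack would have to match the exact sampling offsets of the pipeline's scaler (see [19]).

```python
import numpy as np
from PIL import Image

def embed_payload(decoy: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite only the pixels of `decoy` that a nearest-neighbour
    downscaler will sample, so the full-size image still looks like
    the decoy while the downscaled image shows the payload."""
    H, W = decoy.shape[:2]
    h, w = payload.shape[:2]
    attacked = decoy.copy()
    # Approximate sampling grid of a centre-aligned nearest-neighbour
    # scaler; real libraries differ slightly in their offsets, so a
    # practical attack must be tuned to the pipeline's exact scaler.
    rows = ((np.arange(h) + 0.5) * H / h).astype(int)
    cols = ((np.arange(w) + 0.5) * W / w).astype(int)
    attacked[np.ix_(rows, cols)] = payload
    return attacked

# Hypothetical file names: 'decoy.png' looks harmless at full
# resolution, while 64x64 downscaling reveals 'payload.png'.
decoy = np.asarray(Image.open("decoy.png").convert("RGB"))
payload = np.asarray(Image.open("payload.png").convert("RGB").resize((64, 64)))
Image.fromarray(embed_payload(decoy, payload)).save("attacked.png")

# What the training pipeline actually ingests after preprocessing:
revealed = Image.open("attacked.png").resize((64, 64), Image.NEAREST)
```

Because only a small fraction of pixels is modified, the manipulation is hard to spot by visual inspection of the full-resolution image, which is precisely what makes this class of attack effective against automated preprocessing.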
Conventional access controls and integrity-preserving measures can help mitigate this type of attack. Besides secure communication at the network level, confidentiality and integrity preservation at the application layer could be achieved by means of trusted elements [21]. If we assume that poisoned data has already been transmitted by the source, more advanced anomaly detection techniques could be used to directly address the data's provenance [22] or to identify false data at the preprocessing stage [23] by means of statistical techniques [24].
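As a simple illustration of such statistical filtering at the preprocessing stage, the sketch below flags outlying training records with scikit-learn's IsolationForest before they reach the training step. The synthetic data and the 5% contamination rate are assumptions for the example, not parameters recommended by [22] or [24].

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative stand-in for an incoming training batch (n samples x d features).
X = np.random.RandomState(0).normal(size=(1000, 8))

# Assumed tuning parameter: we expect at most ~5% of records to be poisoned.
detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)   # +1 = inlier, -1 = suspected outlier

X_clean = X[labels == 1]           # pass only inliers on to training
suspects = X[labels == -1]         # quarantine for provenance / manual review
print(f"quarantined {len(suspects)} of {len(X)} records")
```

Quarantining rather than silently dropping suspicious records keeps them available for the provenance checks discussed above.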
Besides the more conventional attempt to simply disrupt the training process technically, more advanced attacks may again try to influence the chosen algorithms and related libraries (e.g., those used for GPU interaction). If the training process is running in a cloud, we should also assume that the cloud operator may behave in an "honest-but-curious" fashion.
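One basic mitigation against tampered libraries is to verify downloaded artifacts against known-good digests before installation. The sketch below does this with Python's standard hashlib; the artifact name and the expected SHA-256 value are placeholders that would come from the library's trusted distribution channel.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: pin the digest published by the trusted source.
ARTIFACT = "some_gpu_library-1.0.0-py3-none-any.whl"
EXPECTED = "<sha256 digest published by the library's maintainers>"

if sha256_of(ARTIFACT) != EXPECTED:
    sys.exit(f"integrity check failed for {ARTIFACT} - refusing to install")
```

Package managers offer the same check natively, e.g. pip's --require-hashes mode, which refuses to install any dependency whose hash is not pinned.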
[18] M. Jagielski et al., "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning," 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2018, pp. 19-35.
[19] Qixue Xiao, Yufei Chen, Yu Chen, Kang Li, "Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms," Proceedings of the 28th USENIX Security Symposium, 2019.
[20] Erwin Quiring, David Klein, Daniel Arp, Martin Johns, Konrad Rieck, "Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning," Proceedings of the 29th USENIX Security Symposium, 2020.
[21] Andreas Schaad, Tobias Reski, Oliver Winzenried, "Integration of a Secure Physical Element as a Trusted Oracle in a Hyperledger Blockchain," ICETE (2), 2019, pp. 498-503.
[22] N. Baracaldo, B. Chen, H. Ludwig, A. Safavi, and R. Zhang, "Detecting Poisoning Attacks on Machine Learning in IoT Environments," 2018 IEEE International Congress on Internet of Things (ICIOT), 2018, pp. 57-64, doi: 10.1109/ICIOT.2018.00015.
[24] Benjamin I. P. Rubinstein, Blaine Nelson, Ling Huang, Anthony D. Joseph, Shing-hon Lau, Satish Rao, Nina Taft, and J. D. Tygar, "ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors," Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, 2009, pp. 1-14.