Securing the ML Lifecycle
Besides standard engineering and access controls [34], we acknowledge that significant work [35] has been done, in the form of publicly available software libraries (e.g., ART, CleverHans), to test models for their resistance to adversarial attacks [36].
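As an illustration, the sketch below probes a toy classifier with ART's Fast Gradient Method evasion attack; the model, data, and attack parameters are placeholder assumptions rather than a recommended test suite.

```python
# Sketch: probing a trained classifier with an evasion attack from ART.
# Assumes the Adversarial Robustness Toolbox is installed
# (pip install adversarial-robustness-toolbox); model and data are toy placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 10)).astype(np.float32)
y_train = (x_train.sum(axis=1) > 0).astype(int)
x_test = rng.normal(size=(50, 10)).astype(np.float32)
y_test = (x_test.sum(axis=1) > 0).astype(int)

# Wrap the trained scikit-learn model so ART can query predictions and gradients.
model = LogisticRegression().fit(x_train, y_train)
classifier = SklearnClassifier(model=model)

# Craft adversarial examples and compare accuracy on clean versus perturbed inputs.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"accuracy on clean inputs: {clean_acc:.2f}, under FGSM: {adv_acc:.2f}")
```

A drop in accuracy on the perturbed inputs indicates how far small, targeted input changes can push the model away from its clean-data behaviour.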
Although they are not under the full control of the model owner or cloud operator, a customer's query and the result returned to that customer may contain sensitive data. Intercepting this traffic could expose not only immediate business data but could even allow parts of the actual data used for training to be reconstructed. In particular, it has been shown that, given a data record and black-box access to a model, an attacker can determine whether that record was in the model's training dataset [37].
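To illustrate the intuition behind such an attack, the following is a minimal sketch of a confidence-threshold membership test in the spirit of [37]; the model, the noisy toy data, and the 0.9 threshold are illustrative assumptions rather than the setup of the cited work.

```python
# Minimal sketch of a confidence-threshold membership inference test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_labels(x):
    # Flip 30% of labels so the model has to memorise part of its training set.
    return ((x[:, 0] > 0) ^ (rng.random(len(x)) < 0.3)).astype(int)

x_members = rng.normal(size=(300, 8))      # records used for training
x_nonmembers = rng.normal(size=(300, 8))   # records the model has never seen
y_members, y_nonmembers = make_labels(x_members), make_labels(x_nonmembers)

model = RandomForestClassifier(n_estimators=50, random_state=1)
model.fit(x_members, y_members)

def guess_membership(x, y, threshold=0.9):
    """Guess 'member' when the model is unusually confident in the true label."""
    confidence = model.predict_proba(x)[np.arange(len(y)), y]
    return confidence >= threshold

# Overfitted models tend to be far more confident on training records, so true
# members are flagged much more often than non-members.
print("flagged as members (true members):    ", guess_membership(x_members, y_members).mean())
print("flagged as members (true non-members):", guess_membership(x_nonmembers, y_nonmembers).mean())
```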
Besides protecting the query content by means of network- or application-level encryption, protecting against membership inference appears to be a classical trade-off between data protection and utility. In fact, current ML frameworks already provide support for performing such inference attacks as part of testing the robustness of a model [38]. One avenue currently pursued by researchers to counter inference attacks is Differential Privacy [39].
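To make the idea concrete, below is a minimal sketch of the core differentially private training step (per-example gradient clipping plus calibrated Gaussian noise, in the style of DP-SGD); the clipping norm, noise multiplier, and toy data are assumptions, and a real deployment would use a vetted library such as TensorFlow Privacy or Opacus together with proper privacy accounting.

```python
# Sketch of the core DP-SGD update for a logistic-regression model.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(256, 5))
y = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(float)
w = np.zeros(5)

clip_norm = 1.0         # bound each example's gradient influence (sensitivity)
noise_multiplier = 1.1  # std of added noise relative to clip_norm
lr, batch_size = 0.1, 32

for step in range(200):
    idx = rng.choice(len(x), size=batch_size, replace=False)
    xb, yb = x[idx], y[idx]
    preds = 1.0 / (1.0 + np.exp(-xb @ w))           # sigmoid predictions
    per_example_grads = (preds - yb)[:, None] * xb  # log-loss gradient per example

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_grad = (clipped.sum(axis=0)
                  + rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)) / batch_size
    w -= lr * noisy_grad
```

The clipping bounds any single record's influence on the update, and the noise masks whatever influence remains; this is exactly the lever that limits membership inference, at the cost of some model utility.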
We have now identified the stages of the ML lifecycle, the associated assets and stakeholders, as well as possible attacks and a selection of possible countermeasures. We argued that some of these countermeasures require comparatively simple engineering interventions, whilst others are still the subject of academic discussion. One key observation was that, as far as we are aware, there seem to be no major technical solutions to support the licensing of models resulting from machine learning processes. To be more specific, whilst it appears feasible to harness the supporting environment with access-control features to enable licensing, the open research question is whether such licensing could become an intrinsic feature of a trained model. We
[37] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models," 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 3-18, doi: 10.1109/SP.2017.41.