

4.3.4 COUNTERMEASURES

Besides standard engineering and access controls,34 significant work35 has been done in the form of publicly available software libraries (e.g., ART, CleverHans) to test models for their resistance to adversarial attacks.36
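To make this concrete, the sketch below shows how such a library can be used to probe a trained model's robustness. It uses ART's evasion-attack API against a scikit-learn classifier; the dataset, model choice, and attack parameters are illustrative assumptions, not a prescribed test procedure.

```python
# Minimal sketch: probing a trained classifier's adversarial robustness with ART.
# Assumes a scikit-learn LogisticRegression trained on inputs scaled to [0, 1];
# the epsilon value and model choice are illustrative only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model so ART can query predictions and gradients uniformly.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method and compare accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_test)

print("clean accuracy:      ", model.score(X_test, y_test))
print("adversarial accuracy:", model.score(X_adv, y_test))
```

A large drop between the two accuracy figures indicates that the model is sensitive to small, adversarially chosen input perturbations and may need hardening (e.g., adversarial training) before deployment.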

4.4 ATTACKING THE QUERY

4.4.1 QUERY INTERCEPTION OR MODIFICATION

Though not under the full control of the model owner or cloud operator, the query issued by a customer, as well as the result delivered to that customer, may contain sensitive data. This makes it possible not only to intercept immediate business data, but even to reconstruct parts of the actual data used for training. Given a data record and black-box access to a model, it has been shown that an attacker can determine whether that record was part of the model's training dataset.37
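The sketch below illustrates the intuition behind such a membership inference attack in a deliberately simplified form: an overfitted model tends to be more confident on records it was trained on than on unseen records, so a plain confidence threshold already leaks membership signal. The shadow-model attack of Shokri et al. is more elaborate; the dataset, model, and threshold here are illustrative assumptions.

```python
# Simplified membership inference via prediction confidence (not the full
# shadow-model attack of Shokri et al.); all names and values are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit the target model on the "member" half of the data.
target = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_in, y_in)

def confidence(model, X):
    """Maximum predicted class probability per record (black-box access only)."""
    return model.predict_proba(X).max(axis=1)

threshold = 0.9  # attacker-chosen cut-off, illustrative
print("flagged as members (true members):    ", (confidence(target, X_in) >= threshold).mean())
print("flagged as members (true non-members):", (confidence(target, X_out) >= threshold).mean())
```

If the first rate is noticeably higher than the second, a black-box attacker who only sees prediction confidences can already guess training-set membership better than chance.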

4.4.2 COUNTERMEASURES

Besides protecting the query content by means of network- or application-level encryption, protecting against membership inference appears to be a classical trade-off between data protection and utility. In fact, current ML frameworks already provide support for performing such inference attacks as part of testing the robustness of a model.38 One avenue currently pursued by researchers to counter inference attacks is Differential Privacy.39
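The core mechanics of applying Differential Privacy during training, in the style of DP-SGD, are to bound each training record's influence by clipping its per-example gradient and then add calibrated noise before the update. The sketch below shows these two steps for a toy logistic-regression model; the clipping norm and noise multiplier are illustrative and do not constitute a calibrated privacy guarantee.

```python
# Minimal sketch of DP-SGD-style training: per-example gradient clipping plus
# Gaussian noise. Parameters below are illustrative, not a tuned privacy budget.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd(X, y, epochs=200, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, batch_size=32):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.choice(n, size=batch_size, replace=False)
        # Per-example gradients of the logistic loss.
        grads = (sigmoid(X[idx] @ w) - y[idx])[:, None] * X[idx]
        # Clip each example's gradient to bound its influence (sensitivity).
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)
        # Add Gaussian noise scaled to the clipping norm, then average and step.
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / batch_size
    return w

# Toy data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
y = np.concatenate([np.zeros(200), np.ones(200)])
w = dp_sgd(X, y)
print("training accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())
```

Production frameworks such as TensorFlow Privacy implement the same idea with proper privacy accounting, which is what determines how much the noise actually limits membership inference. This is where the trade-off between data protection and model utility becomes explicit: more noise means stronger privacy but lower accuracy.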

4.5 SUMMARY

We have now identified the stages of the ML lifecycle, the associated assets and stakeholders, as well as possible attacks and a selection of possible countermeasures. We argued that some of these countermeasures require comparatively easy engineering interventions, whilst others are still the subject of academic discussion. One key observation was that, as far as we are aware, there seem to be no major technical solutions to support the licensing of models resulting from machine learning processes. To be more specific, whilst it appears feasible to harness the supporting environment with access-control features to enable licensing, the open research question is whether such licensing could become an intrinsic feature of a trained model. We
34 https://docs.microsoft.com/en-us/security/engineering/threat-modeling-aiml
35 https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
36 https://github.com/cleverhans-lab/cleverhans
37 R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models," 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 3-18, doi: 10.1109/SP.2017.41.
38 https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/privacy_tests
39 https://cacm.acm.org/magazines/2021/7/253460-the-limits-of-differential-privacy-and-its-misuse-in-data-release-and-machine-learning/fulltext