IIC Journal of Innovation 19th Edition The Role of Artificial Intelligence in Industry | Page 49

Securing the ML Lifecycle

4.2.1 OBSERVING THE PREPROCESSING

Observing the preprocessing stage may yield valuable information for the attacker. Specifically, any knowledge about the features used there could support later adversarial attacks. This also includes knowledge about the features that were not used for training the model.
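A minimal sketch of why this matters, with invented feature names and a dict-based record format: an attacker who learns which features the preprocessing pipeline keeps can confine malicious modifications to the discarded features, so the model's input is unchanged while the underlying activity differs.

```python
# Hypothetical illustration: USED_FEATURES is what the attacker learned
# by observing the preprocessing stage; all names are made up.
USED_FEATURES = {"packet_rate", "avg_payload_size"}
ALL_FEATURES = {"packet_rate", "avg_payload_size", "src_port", "ttl"}

def preprocess(record):
    """Assumed victim preprocessing: keep only the selected features."""
    return {f: record[f] for f in USED_FEATURES}

benign = {"packet_rate": 10.0, "avg_payload_size": 512.0,
          "src_port": 443, "ttl": 64}

# The attacker perturbs only features the model never sees.
evasive = dict(benign)
for f in ALL_FEATURES - USED_FEATURES:
    evasive[f] = 0  # arbitrary malicious change in an ignored feature

# Both records map to the identical model input.
assert preprocess(benign) == preprocess(evasive)
```

Knowledge of the *unused* features is exactly as valuable here as knowledge of the used ones: it defines the subspace in which the attacker can act unobserved.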

4.2.2 POISONING ATTACK

An attacker may choose either to perform a targeted poisoning attack or to cause Byzantine failures during the training process. By failures, we do not necessarily mean technical failures that disrupt the training process; the attacker may instead focus on influencing the quality of the model itself. One obvious motive is to ensure that the final model does not correctly classify later activities of the attacker (e.g., a network intrusion), but more subtle attacks could aim merely to diminish the overall model quality, for example to gain a competitive advantage.
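The targeted case can be sketched with a toy one-dimensional threshold classifier (the data, classifier, and attack values below are invented for illustration, not taken from the paper): by flipping the labels of a few training samples, the attacker quietly shifts the decision boundary so that its own later activity is classified as benign, without disrupting training at all.

```python
def train(data):
    """Toy classifier: threshold halfway between the two class means."""
    mean = lambda xs: sum(xs) / len(xs)
    m0 = mean([x for x, y in data if y == 0])  # benign class mean
    m1 = mean([x for x, y in data if y == 1])  # malicious class mean
    return (m0 + m1) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0  # 1 = flagged as malicious

# Clean training set: benign activity at 0..9, malicious at 20..29.
clean = [(x, 0) for x in range(10)] + [(x, 1) for x in range(20, 30)]

# Poisoning: relabel the lowest malicious samples as benign,
# dragging the learned boundary upward.
poisoned = [(x, 0) if 20 <= x <= 24 else (x, y) for x, y in clean]

attack_activity = 16  # the attacker's actual later behavior
print(predict(train(clean), attack_activity))     # 1: detected
print(predict(train(poisoned), attack_activity))  # 0: evades detection
```

The "diminish overall quality" variant is the same mechanism applied indiscriminately: random label flips raise the error rate everywhere rather than carving out a blind spot for one target.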

4.2.3 CONFIGURATION STEALING ATTACK

At the training stage, the attacker may also try to derive knowledge about the configuration and learning parameters used. This includes trying to illicitly obtain knowledge about the steps taken as part of the preprocessing, such as the selected features. Such knowledge could then facilitate later adversarial attacks aimed at evading model prediction or classification.
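Such configuration knowledge need not come from direct access; an attacker with query access can sometimes probe for it. The following sketch is an assumed scenario (the victim model is a stub, not a documented attack): the attacker tests whether the pipeline applies min-max normalization by checking whether predictions are invariant under an affine transformation of the input.

```python
def victim_predict(x):
    """Stub for the victim pipeline: min-max scales, then thresholds."""
    lo, hi = min(x), max(x)
    scaled = [(v - lo) / (hi - lo) for v in x]
    return 1 if sum(scaled) / len(scaled) > 0.5 else 0

probe = [1.0, 4.0, 2.0, 9.0]
shifted = [10 * v + 3 for v in probe]  # affine transform of the same input

# Min-max scaling cancels affine transforms, so identical predictions
# across many such probes hint that normalization is part of preprocessing.
print(victim_predict(probe) == victim_predict(shifted))  # True
```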

4.2.4 COUNTERMEASURES

Again, conventional access controls can be used against these attacks. However, specifically in shared training environments, application-layer countermeasures may have to rely on more fine-grained software protection services.25 While access to critical parameters of the training process can be controlled this way, one could even consider shifting26 parts of the training process into trusted execution environments such as SGX enclaves or secure elements in general.
The dilemma of using, but not fully trusting, a cloud operator27 has long been discussed by the database, trusted computing, and secure computation communities. A realistic engineering scenario for shared machine learning using trusted execution environments such as SGX was recently presented.28
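What "fine-grained" access control over training parameters could look like at the application layer is sketched below. The roles, parameter names, and policy table are invented for illustration; a real deployment would back such checks with a software protection or enclave service rather than an in-process dictionary.

```python
# Hypothetical per-parameter, per-role policy for a training configuration.
POLICY = {
    "data_engineer": {"read": {"selected_features"}, "write": set()},
    "ml_engineer":   {"read": {"selected_features", "learning_rate"},
                      "write": {"learning_rate"}},
    "auditor":       {"read": {"learning_rate"}, "write": set()},
}

class TrainingConfig:
    def __init__(self):
        self._params = {"selected_features": ["pkt_rate"],
                        "learning_rate": 0.01}

    def read(self, role, name):
        if name not in POLICY.get(role, {}).get("read", set()):
            raise PermissionError(f"{role} may not read {name}")
        return self._params[name]

    def write(self, role, name, value):
        if name not in POLICY.get(role, {}).get("write", set()):
            raise PermissionError(f"{role} may not modify {name}")
        self._params[name] = value

cfg = TrainingConfig()
cfg.write("ml_engineer", "learning_rate", 0.005)  # permitted
try:
    cfg.read("auditor", "selected_features")      # denied by policy
except PermissionError as e:
    print(e)
```

This addresses both the configuration-stealing and the poisoning vector: reads of sensitive preprocessing details and writes to training parameters are each gated separately, which is precisely what coarse host-level access control cannot express in a shared training environment.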
25 https://www.wibu.com/us/products/codemeter/codemeter-cloud-server.html
26 Wojciech Ozga, Do Le Quoc, Christof Fetzer: "Perun: Confidential Multi-stakeholder Machine Learning Framework with Hardware Acceleration Support," DBSec 2021: 189-208.
27 B. Alouffi, M. Hasnain, A. Alharbi, W. Alosaimi, H. Alyami, and M. Ayaz: "A Systematic Literature Review on Cloud Computing Security: Threats and Mitigation Strategies," IEEE Access, vol. 9, pp. 57792-57807, 2021, doi: 10.1109/ACCESS.2021.3073203.
28 Wojciech Ozga, Do Le Quoc, Christof Fetzer: "Perun: Confidential Multi-stakeholder Machine Learning Framework with Hardware Acceleration Support," DBSec 2021: 189-208.