Securing the Machine Learning Lifecycle
Machine Learning can be defined as programming a computer so that it can learn from data. Three core artefacts – training data, training process description, trained model – are key in this context and can be susceptible to inadvertent modifications or even intentional attacks.
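To make the fragility of these artefacts tangible, the following minimal sketch records integrity digests for all three; the file names, manifest format, and hash routine are assumptions made for the example, not part of the original text.

```python
# Sketch: integrity manifest for the three core ML artefacts.
# File names are hypothetical; any change to the data, the training code,
# or the trained model alters the recorded SHA-256 digest.
import hashlib
import json
from pathlib import Path

ARTEFACTS = ["training_data.csv", "train.py", "model.onnx"]  # assumed names

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_manifest(paths=ARTEFACTS) -> dict:
    """Record one digest per artefact at a known-good point in time."""
    return {p: sha256_of(p) for p in paths}

def verify_manifest(manifest: dict) -> list:
    """Return the artefacts whose current digest no longer matches."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]

if __name__ == "__main__":
    manifest = build_manifest()
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print("Modified artefacts:", verify_manifest(manifest) or "none")
```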
The verification and validation mechanisms used in standard software development (e.g., static code analysis or unit testing) do not suffice to guarantee the overall quality of training data, training code, or trained models.
Conventional access controls and integrity protection measures can help mitigate data poisoning attacks. Besides secure communication at the network level, confidentiality and integrity protection at the application layer could be achieved by means of trusted elements, like Wibu-Systems' CodeMeter dongles.
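The sketch below illustrates application-layer integrity protection of a training-data batch in plain Python; the HMAC key is a software stand-in for a secret that, in the setup described above, would be held inside a trusted element such as a dongle, whose actual API is not shown here.

```python
# Sketch: application-layer integrity protection of a training-data batch.
# The signing key is kept in memory only for illustration; in the scenario
# described above it would reside in secure hardware.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # stand-in for a hardware-protected key

def sign_batch(batch: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a serialized data batch."""
    return hmac.new(SECRET_KEY, batch, hashlib.sha256).digest()

def verify_batch(batch: bytes, tag: bytes) -> bool:
    """Reject batches whose tag does not match (e.g. altered or poisoned data)."""
    return hmac.compare_digest(sign_batch(batch), tag)

# The data producer signs each batch; the training pipeline verifies it
# before the data reaches pre-processing or the training process.
batch = b"label,pixel0,pixel1\ncat,0.1,0.9\n"
tag = sign_batch(batch)
assert verify_batch(batch, tag)
assert not verify_batch(batch.replace(b"cat", b"dog"), tag)  # tampering detected
```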
To protect against theft of trained models, fine-grained software protection services like our CodeMeter Cloud could be the answer. We could even consider shifting parts of the training process to trusted execution environments such as SGX enclaves or secure elements in general.
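As a minimal sketch of the underlying idea, the trained model can be kept encrypted at rest and decrypted only in a protected context; the example uses AES-GCM from the third-party cryptography package, and the key handling shown is a simplification, since in practice the key would be released only to a licensed application or inside a trusted execution environment.

```python
# Sketch: keeping a trained model confidential at rest.
# The AES-GCM key stands in for a secret that would only be released by a
# protection service or inside a trusted execution environment.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_model(model_bytes: bytes, key: bytes) -> bytes:
    """Encrypt serialized model weights; prepend the nonce to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, model_bytes, None)

def decrypt_model(blob: bytes, key: bytes) -> bytes:
    """Recover the plaintext model; intended to run only in a trusted context."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    model_bytes = b"\x00" * 1024          # stand-in for serialized weights
    protected = encrypt_model(model_bytes, key)
    assert decrypt_model(protected, key) == model_bytes
    print("Model encrypted to", len(protected), "bytes")
```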
[Figure: The machine learning lifecycle. Training pipeline (framework code): data collection → pre-processing & feature engineering → training process → trained model. Inference pipeline (framework code): input → pre-processing → trained model → output, exposed across an API boundary.]