Regulatory Risks
Data today needs to be carefully managed and used for the right purposes. Data-use regulations (the GDPR, for example) impose demands ranging from protection of privacy to keeping usage within geographical boundaries, and these demands bring their own set of risks that need to be mitigated. For example, where applications use AI to make decisions in highly regulated environments such as healthcare (diagnosis, for example) and finance (investment decisions, for example), the legal and regulatory impact of those decisions would need to be managed.
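As a loose sketch of how one such constraint, processing personal data only within approved geographical boundaries, might be captured as an automated pre-deployment check (the region names, fields, and rule below are illustrative assumptions, not drawn from any specific regulation):

```python
# Hypothetical guard: a GDPR-style rule saying records tagged with
# EU residency may only be processed in approved regions. The region
# identifiers and field names are illustrative assumptions.
ALLOWED_REGIONS_FOR_EU_DATA = {"eu-west-1", "eu-central-1"}

def can_process(record_residency: str, processing_region: str) -> bool:
    """Return True if processing a record in the given region
    respects the data-residency rule."""
    if record_residency == "EU":
        return processing_region in ALLOWED_REGIONS_FOR_EU_DATA
    return True

# Simple pre-deployment test cases:
assert can_process("EU", "eu-west-1")
assert not can_process("EU", "us-east-1")
assert can_process("US", "us-east-1")
```

A check like this would typically run in the deployment pipeline, so that a configuration change cannot silently route regulated data to a disallowed region.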
Privacy & Ethical Risks
In today's world, there is an increasing concern that the insights such systems provide may be inherently biased and thus wrongly influence outcomes. An example of privacy abuse is using the faces of people who have not consented to their images being used to train facial recognition systems. An example of an ethical violation is racial or class bias creeping into the way the algorithms work due to skew in the training data.
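As a minimal illustration of how such skew might be surfaced before a model is ever trained, the sketch below flags demographic groups that are heavily under-represented in a training set; the `group` field and thresholds are hypothetical, chosen only for the example:

```python
from collections import Counter

def check_group_skew(records, group_key="group", min_share=0.10):
    """Flag groups whose share of the training data falls below
    min_share. records is an iterable of dicts carrying group_key.
    Returns a list of (group, share) tuples that look skewed."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return [(g, n / total) for g, n in counts.items()
            if n / total < min_share]

# Hypothetical usage: warn if any group is under-represented.
training_rows = [
    {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "A"}, {"group": "B"},  # group B is only 20% here
]
for group, share in check_group_skew(training_rows, min_share=0.25):
    print(f"Group {group!r} makes up only {share:.0%} of the data")
```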
Model Behaviour Risks
When using pre-trained models, there are risks associated with the appropriateness of the underlying algorithms and their suitability to the business context, with the accuracy and precision of the predictions, and with possible biases creeping into the model that need to be mitigated. For example, a model pre-trained on data from one country may produce wrong outcomes when used in another.
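To make that kind of check concrete, here is a minimal sketch, assuming hypothetical country segments and a tolerance chosen only for illustration, that scores accuracy and precision per segment with scikit-learn and warns when the gap between segments is large:

```python
from sklearn.metrics import accuracy_score, precision_score

def per_segment_metrics(y_true, y_pred, segments):
    """Compute accuracy and precision separately per segment
    (e.g. per country) so gaps between segments become visible."""
    results = {}
    for seg in set(segments):
        idx = [i for i, s in enumerate(segments) if s == seg]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        results[seg] = {
            "accuracy": accuracy_score(t, p),
            "precision": precision_score(t, p, zero_division=0),
        }
    return results

# Hypothetical test: a model pre-trained on "US" data should not
# degrade sharply when scored on "IN" data. Labels are made up.
y_true   = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred   = [1, 0, 1, 0, 1, 0, 1, 0]
segments = ["US", "US", "US", "US", "IN", "IN", "IN", "IN"]

metrics = per_segment_metrics(y_true, y_pred, segments)
gap = abs(metrics["US"]["accuracy"] - metrics["IN"]["accuracy"])
if gap > 0.2:  # illustrative tolerance
    print(f"Warning: accuracy gap across countries is {gap:.2f}")
```

A gap flagged this way does not prove bias on its own, but it tells the test team where the pre-trained model's behaviour needs closer investigation before deployment.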
At Last Mile, we believe that before deploying AI systems, an effective test strategy must be put in place that encompasses and prioritizes the impact of the above risks and tailors the testing needs accordingly.