Most AI currently in application operates at weak or general levels. Ramesh cautioned that the danger lay in becoming too reliant on AI without setting appropriate checks and balances.

ML involves learning from data, mainly historical data and past experience. This allows machines to “learn” without being explicitly programmed; historical, structured or semi-structured data can yield accurate results or predictions, but only within specific domains. There are currently three types of ML in application: supervised learning, unsupervised learning and reinforcement learning. The rating system used by Netflix is an example of ML, as are the spam filters used in e-mail. With supervised learning, the user decides what data to feed the machine to improve its performance in a guided, controlled process.
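To make the idea of supervision concrete, the sketch below trains a small spam classifier on a handful of human-labelled messages. The tiny dataset and the use of the scikit-learn library are assumptions made purely for illustration, not examples taken from the session.

```python
# A minimal sketch of supervised learning, assuming scikit-learn is available.
# The labelled messages below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labelled training data: the "supervision" is the human-supplied label.
messages = [
    "Win a free prize now",      # spam
    "Claim your reward today",   # spam
    "Meeting moved to 3pm",      # not spam
    "Please review the report",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Convert text to word counts, then fit a simple classifier on the labels.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB()
model.fit(X, labels)

# The trained model predicts labels for new, unseen messages.
print(model.predict(vectorizer.transform(["Free reward, claim now"])))  # likely 'spam'
```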

In unsupervised learning, on the other hand, the algorithm is left to pick out patterns on its own in order to refine its answers; in reinforcement learning, the system learns continually from observations of its environment. Positive reinforcement occurs when the right decision is made, for example, when the computer beats a human at a game (a reward-driven loop of this kind is sketched below).

Ramesh detailed some common risks in ML that could affect its application. One of these, he said, was a lack of strategy and know-how. Every new technology comes with a learning curve, and a user’s lack of experience can prevent an optimum understanding of the system.
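The following sketch illustrates the reward-driven loop mentioned above with a two-choice “game”. The pay-off probabilities and the epsilon-greedy strategy are assumptions chosen only to show how repeated positive reinforcement shapes the system’s choices.

```python
# A minimal sketch of reinforcement learning: an epsilon-greedy two-armed bandit.
# The hidden win rates are invented for illustration only.
import random

true_win_rates = [0.3, 0.8]   # hidden reward probabilities of the two actions
estimates = [0.0, 0.0]        # the agent's learned value of each action
counts = [0, 0]
epsilon = 0.1                 # how often the agent explores at random

for step in range(1000):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])

    # Positive reinforcement: a reward of 1 when the choice pays off.
    reward = 1 if random.random() < true_win_rates[action] else 0

    # Update the running estimate of that action's value from observed rewards.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent learns that the second action is the better choice
```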

The organisation may also lack a clear AI strategy, or the talent with the appropriate skill sets to operationalise such systems; both are barriers to greater adoption of the technology. Another risk is poor or unreliable data. ML relies on human-supplied data; if there is none, it cannot learn. Errors in the data will also affect it, as will meaningless or unstructured data, or “noise”, that cannot be correctly interpreted. Data integrity and governance are thus imperative. There is also the possibility of “over-fitting” the data: setting too many restrictions in the system narrows its parameters for learning, forcing it to take longer to deal with the data or to develop a myopic view.
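In ML practice, over-fitting is usually described as a model tuned so tightly to its training data that it fails to generalise to new data. The sketch below illustrates that standard sense using invented noisy data and NumPy only; it is a general illustration, not a reproduction of any system discussed in the session.

```python
# A minimal sketch of over-fitting, using invented noisy data and NumPy only.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, 10)   # roughly linear data with noise

# A simple model (degree 1) versus an over-flexible one (degree 9).
simple = np.polyfit(x_train, y_train, deg=1)
flexible = np.polyfit(x_train, y_train, deg=9)   # fits the training noise exactly

# On new, unseen points the over-flexible model tends to perform worse.
x_new = np.array([0.05, 0.55, 0.95])
y_new = 2 * x_new
print(np.abs(np.polyval(simple, x_new) - y_new).mean())    # small error
print(np.abs(np.polyval(flexible, x_new) - y_new).mean())  # typically larger error
```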
