We also need people who are experts at interpreting data, at critical thinking, and at solving problems and communicating.
We must learn to identify and remove embedded bias
Embedded bias can be pernicious and difficult to detect. If a hiring algorithm is designed to match the profile of successful candidates within an organization, and the leadership of that organization is overwhelmingly male, the algorithm is likely to filter out female candidates. Unless specifically steered otherwise, the algorithm will look backward in an attempt to replicate the status quo. For algorithms to be trusted, they must be fair and rely on accurate information.

Algorithms used in parole board decision-making are considered successful and trustworthy in part because their developers have worked to eliminate embedded bias. These tools process a range of inputs, sometimes just a few variables and other times more than 100, to assign a risk score based on outcomes such as the likelihood of arrest or of failure to appear in court. That breadth of data is what drives the accuracy of the prediction. However, some data isn’t considered because it’s viewed as discriminatory: a person’s race and gender, for example. Data about the number of times someone has been stopped by the police is also off-limits, because that information may reflect police behavior more than the behavior of the incarcerated person. ZIP codes don’t factor in, either, because they, too, can encode racial bias. The more fairness you require, the less accurate the prediction.
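To make the idea concrete, here is a minimal sketch, assuming a hypothetical tabular dataset and scikit-learn, of how sensitive attributes might be withheld from a risk-scoring model. The column names, values, and model choice are illustrative assumptions, not a description of any real parole-assessment tool.

```python
# Illustrative sketch only: excluding sensitive attributes from a risk model's
# inputs. All column names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical case records; "reoffended" is the outcome label.
records = pd.DataFrame({
    "age": [23, 41, 35, 29],
    "prior_convictions": [2, 0, 5, 1],
    "employment_status": [0, 1, 0, 1],
    "race": ["A", "B", "A", "B"],                       # excluded: viewed as discriminatory
    "gender": ["M", "F", "M", "F"],                     # excluded: viewed as discriminatory
    "police_stops": [3, 0, 7, 1],                       # excluded: may reflect police behavior
    "zip_code": ["10001", "60614", "10001", "94110"],   # excluded: can encode racial bias
    "reoffended": [1, 0, 1, 0],
})

# Features deliberately withheld from the model, per the fairness constraints
# described above, plus the outcome label itself.
excluded = ["race", "gender", "police_stops", "zip_code", "reoffended"]
features = records.drop(columns=excluded)
labels = records["reoffended"]

# Fit a simple model on the remaining variables and produce a risk score
# (predicted probability of the positive outcome) for each case.
model = LogisticRegression().fit(features, labels)
risk_scores = model.predict_proba(features)[:, 1]
print(risk_scores)
```

Dropping the columns at the data-preparation stage reflects the design choice described above: the fairness constraint is enforced before the model ever sees the sensitive variables, at the cost of whatever predictive signal they carried.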