We must interrogate the model, its inputs, and outputs
In many jurisdictions, we still face troubling ethical issues around the use of algorithms that may not have sufficiently mitigated embedded bias and that provide no transparency into their inputs or their decision-making. In the well-known 2013 case, Eric Loomis was sentenced to six years in prison, based in part on a risk assessment produced by an algorithm. He challenged the sentence, seeking to learn how the algorithm had arrived at its assessment, and he lost, on the grounds that the algorithm was the intellectual property of the company that had built it. This is exactly the type of outcome we need to guard against if we want to use AI to make decisions that are both efficient and fair.
The alternative is to become skilled at interrogating these models on a regular basis, rather than resigning ourselves to becoming passive users. Is the model asking the right questions? Does it have the right data? Have we excluded the right data?
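One lightweight way to begin that interrogation is to examine the data behind a model before trusting its answers. The sketch below is a minimal illustration in Python; the column names, protected attributes, and values are entirely hypothetical, since no specific system is described here. It simply checks which features a model is fed, whether protected attributes or likely proxies for them are present, and how well each group is represented in the training data.

```python
# A minimal sketch of interrogating a model's inputs, assuming the training
# data is available as a table. Column names and values are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "age":            [23, 45, 31, 52, 38, 29],
    "zip_code":       ["60601", "60601", "10001", "10001", "94105", "94105"],
    "prior_offenses": [0, 2, 1, 0, 3, 1],
    "race":           ["A", "B", "A", "B", "A", "B"],
    "reoffended":     [0, 1, 0, 0, 1, 0],
})

protected_attributes = {"race", "gender"}
likely_proxies = {"zip_code"}  # features that can stand in for protected ones

features = set(training_data.columns)
print("Features available to the model:", sorted(features))
print("Protected attributes present:  ", protected_attributes & features)
print("Likely proxies present:        ", likely_proxies & features)

# Thinly represented groups are a warning sign: the model has little evidence
# to learn from, so its predictions for those groups are the least reliable.
print("\nGroup representation in the training data:")
print(training_data["race"].value_counts(normalize=True))
```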
We must advocate for regulation
Many business leaders are allergic to regulation, but reasonable regulation provides the guardrails businesses need to remain credible with their customers and other constituents. In the U.S., there is currently no equivalent of the Food and Drug Administration to determine the safety or efficacy of an algorithm. While algorithmic decisions are frequently described as black boxes, we should insist on some level of transparency: the ability to understand how models make decisions, how specific results are produced, and what data is used as inputs and for training. We should also be able to audit the outputs to ensure they are fair and accurate.
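Auditing the outputs does not require access to a model's internals. As a rough sketch of what such an audit could look like, assuming you have each decision, the group it concerned, and the eventual outcome (all names and numbers below are illustrative, not drawn from any real system), you can compare favorable-decision rates and accuracy across groups:

```python
# A minimal sketch of an output audit: compare how often each group receives
# a favorable decision, and how accurate those decisions turn out to be.
# The data here is illustrative only.
import pandas as pd

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "decision": [1,   0,   1,   1,   0,   0,   1,   0],  # 1 = favorable outcome
    "actual":   [1,   0,   1,   0,   0,   1,   1,   0],  # what actually happened
})

favorable_rate = audit.groupby("group")["decision"].mean()

audit["correct"] = audit["decision"] == audit["actual"]
accuracy = audit.groupby("group")["correct"].mean()

print("Favorable-decision rate by group:")
print(favorable_rate)
print("\nAccuracy by group:")
print(accuracy)
# Large gaps on either measure do not prove discrimination on their own,
# but they are exactly the kind of signal an auditor should have to explain.
```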
Not everything can be regulated. Algorithms predict outcomes, and an algorithm's predictive accuracy is reduced if you eliminate gender, race, and other differentiating factors from its inputs. There will always be bias, but perhaps less than if a human made the decision. There would