over the functioning of technology and whether
AI (and more precisely machine learning) is a
peculiar case in this accountability exercise.
Several jurisdictions, particularly in the
United States, have adopted technology that
produces recommendations for pre-trial
detention decisions. Such applications use
algorithms that calculate recidivism risk and
‘score’ the defendant based on the probability
that they will commit a crime if released.
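To make the mechanics concrete, the sketch below shows one way such a score could be produced in Python. The feature names, weights and threshold are invented assumptions for illustration only; they are not drawn from any deployed risk assessment system.

```python
import math

# Hypothetical pre-trial risk score: a weighted sum of defendant
# features passed through a logistic function. All names, weights
# and the 0.5 cut-off below are illustrative assumptions.

def risk_score(features, weights, bias=0.0):
    """Map weighted features to a probability-like score in (0, 1)."""
    linear = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-linear))  # logistic link

defendant = {"prior_arrests": 3, "age_under_25": 1, "employed": 0}
weights = {"prior_arrests": 0.4, "age_under_25": 0.7, "employed": -0.5}

score = risk_score(defendant, weights)
print(f"score = {score:.2f} ->", "high risk" if score >= 0.5 else "low risk")
```

Whatever its sophistication, the score is only a statistical transformation of past data, so any bias in those data flows directly into the number the judge sees.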
This kind of scoring places the judge in an
awkward position. Suppose there is a case
in which the judge is inclined to release the
defendant pending trial, but the score
identifies a high risk of recidivism. Should the
judge be ready to go against the risk assessment
calculated by the machine? And what if
the defendant is released and then commits a
crime? The standard rebuttal to these concerns
emphasizes that the systems simply make the
best use of the available data. It is argued
that the scientific methods employed calculate
recidivism risk more powerfully and reliably
than individual judges can. This argument is
relevant, but how can we ensure that the data
are free of bias? Guaranteeing accountability
here is far more complicated than with simpler
technologies.
ProPublica, an American non-profit
organization that conducts investigations in
the public interest, compared actual recidivism
the public interest, compared actual recidivism
with predicted recidivism. The analysis of
10,000 criminal defendants’ cases found that
“black defendants were far more likely than
white defendants to be incorrectly judged to
be at a higher risk of recidivism, while white
defendants were more likely than black
defendants to be incorrectly flagged as low
risk”. This finding shows that accountability
is hard to achieve and that such systems can
inject bias into judicial processes.
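The disparity ProPublica reported can be made concrete with a small sketch: given records of predictions and outcomes, one can compare false positive rates (non-reoffenders wrongly flagged as high risk) across groups. The handful of records below are invented for illustration and do not reproduce ProPublica’s data.

```python
# Invented records: (group, flagged_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", True, True), ("B", False, False),
]

def false_positive_rate(rows):
    """Share of people who did not reoffend but were flagged high risk."""
    negatives = [r for r in rows if not r[2]]  # did not reoffend
    return sum(1 for r in negatives if r[1]) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

Equal overall accuracy can hide exactly this kind of asymmetry, which is why auditing such systems requires group-by-group error analysis rather than a single headline figure.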
Technology — whether it is used for case
management, simple web forms, or more
complicated AI-based tasks — should be
introduced into the judicial process if, and only
if, proper accountability mechanisms are in
place.
The accountability problem is even more
severe with AI systems based on machine
learning. In this case, predictions are produced
by algorithms that change over time: with
machine learning, algorithms ‘learn’ (that is,
change) on the basis of their own experience.
As the algorithms change, we no longer know
how they work or why they behave in certain
ways. If we cannot adopt effective control
mechanisms, how can we guarantee proper
accountability?
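As a toy illustration of this moving target, consider a perceptron, one of the simplest learning algorithms, updated one observation at a time. Nothing here refers to any real judicial system; the point is only that after every update the weights, and therefore the decision rule an auditor would have to explain, are different.

```python
# A perceptron whose decision rule changes with each observation it
# 'experiences'. The data stream and learning rate are arbitrary.
weights = [0.0, 0.0]

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

stream = [([1.0, 0.5], 1), ([0.2, 1.0], 0), ([0.9, 0.1], 1)]
for x, target in stream:
    error = target - predict(x)            # 0 when the prediction is right
    for i in range(len(weights)):          # online update: the rule shifts here
        weights[i] += 0.1 * error * x[i]
    print("decision weights are now", weights)
```

A system audited on Monday may therefore behave differently by Friday, which is precisely what makes one-off certification insufficient.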
The debate is ongoing, and the precautionary
principle should be adopted until these
questions have been resolved from both a
technical and an institutional perspective.
The cautions and the precautionary principle
discussed in this article point in the same
direction as the Council of Europe’s Ethical
Charter on the Use of Artificial Intelligence in
Judicial Systems, particularly its principles of
respect for fundamental rights and of keeping
the user in control. However, how to implement
such guidelines is not yet clear. Lawyers, case
parties and individual judges certainly cannot
be charged with such a task. It is a challenge
that should be faced by pooling multifaceted
competences, monitoring the functioning of
the systems and benchmarking AI against
the core values of the Bangalore Principles of
Judicial Conduct. It is a challenge that Global
Judicial Integrity Network participants are
well positioned to face.
1. The label ‘predictive justice’ is dangerously
misleading, as such systems make predictions, but
not judicial decisions. Judicial decisions require,
as a minimum standard, justifications based on
an assessment of the relevant facts and applicable
regulations. AI systems identify statistical correlations,
and their forecasts are just the result of those
correlations. Hence, it would only be proper to speak
of actual predictive justice if the systems were to
provide justifications in terms of facts and laws.