
At the start of the program, physically and mentally disabled Germans, including many children, were killed through experimental gassing. In choosing which patients were to live and which were to die, doctors relied on patient demographic data reported on mandatory medical forms used in hospitals around the country. This information included whether patients received regular visits, how long they had been hospitalized, which illnesses they suffered from, whether they had committed a crime, their occupation (and whether they performed useful work), and their nationality. Patients who suffered from certain diseases, those institutionalized for more than five years, those deemed criminally insane, or those not of German blood or nationality were to be reported immediately. Many of these patients were subsequently killed by gas. In euthanasia centers like Brandenburg, as in the concentration camps after them, abstract representations of these patients determined whether they would live.

In the modern tech landscape, and with AI technologies as a prime example, engineers rely heavily on abstract representations of the world for modeling and decision-making. This abstraction is necessary because no single model can ever capture the full complexity of the world. Technological progress therefore depends on developing effective abstractions that represent the most important aspects of a particular decision-making task within a given context. With modern AI systems, engineers make a variety of decisions that shape the abstractions these models rely on: the selection of data for each dataset, the features used to represent that data for the decision task at hand, the model structures and training objectives, and the output spaces of the models (i.e., the set of possible decision outcomes).
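
To make these decision points concrete, the sketch below walks through them in code. It is a minimal, hypothetical illustration only: the file name, column names, label, and choice of model are assumptions made for the example, not a description of any real system.

# A minimal, hypothetical sketch of the abstraction decisions described above.
# The file name, column names, label, and model are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Decision 1: data selection -- which records are included at all, and who is absent?
records = pd.read_csv("records.csv")

# Decision 2: feature selection -- which aspects of a person count as relevant?
features = records[["age", "years_in_system", "prior_incidents"]]

# Decision 3: output space -- the model can only ever answer within these outcomes.
labels = records["flagged"]  # e.g., binary "flag for review" vs. "do not flag"

# Decision 4: model structure and training objective -- how the abstraction is fit to data.
model = LogisticRegression(max_iter=1000).fit(features, labels)

Each line reads as routine engineering, yet each one encodes a judgment about which aspects of a person's situation are allowed to matter.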

Blind trust in the “objective” nature of these AI systems on the part of the public and technologists themselves can lead people to believe that these abstractions are morally neutral. Since their perspectives are colored by their particular social contexts within the tech world, technologists especially might begin to believe that they have no agency in selecting these abstract representations. We as technologists tend to employ techno-optimist lenses, trusting that new technologies will always improve society; this tendency leads many of us to decide to work in tech in the first place. However, this viewpoint can make us complacent about the impact of our work. We risk departing from the “space of moral reasons,” where we reflect continuously and intentionally on our moral responsibilities and choices15.

As a concrete, modern example of misguided techno-optimism and uncritical data abstraction within an AI-based decision support system, the Design & Technology cohort studied the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool for predicting recidivism16. Based on demographic data about a person accused of a crime, including their “current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse,” the machine-learning-based COMPAS tool assigns the defendant a risk score predicting how likely they are to reoffend before trial if released on bail17. The tool is intended to help judges make pre-trial detention decisions more efficiently, and it is supposed to be more impartial and fair than human judges, who are subject to cognitive biases18.
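
COMPAS itself is proprietary, so its internal model is not public. The sketch below is therefore only a generic illustration of how a pretrial risk-scoring tool of this kind might be assembled from the categories of data Northpointe lists; the file name, column names, label, and classifier are assumptions, not COMPAS's actual design.

# Generic illustration only -- NOT the actual COMPAS model, which is proprietary.
# Column names, the label, and the classifier are assumptions for this sketch;
# features are presumed to be numerically encoded in the hypothetical input file.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("pretrial_history.csv")  # hypothetical historical case data

feature_cols = [
    "current_charges", "pending_charges", "prior_arrests",
    "previous_pretrial_failure", "residential_stability",
    "employment_status", "community_ties", "substance_abuse",
]
X = history[feature_cols]
y = history["rearrested_before_trial"]  # label inherits past policing and charging patterns

model = GradientBoostingClassifier().fit(X, y)

# The "risk score" is the model's estimated probability, binned here into deciles,
# mirroring the 1-10 decile scores that COMPAS reports.
risk = model.predict_proba(X)[:, 1]
decile_score = pd.qcut(risk, 10, labels=False) + 1

Note that even the choice of label is an abstraction decision: a proxy such as rearrest before trial typically stands in for reoffending, and that substitution is one place where assumptions about the existing criminal justice system enter the design.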

While the “objectivity” of this system seems like it could produce social benefits, there are several unchecked assumptions about bias and fairness in the current criminal justice system baked into its design. Since the COMPAS system was trained on


14 Kristen Iannuzzi. Nazi Euthanasia and Action T4: Effects on the Ethical Treatment of Individuals with Disabilities. 2014.

15 Shannon Vallor. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press, 2024.

16 Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine Bias. In Ethics of Data and Analytics, pages 254–264. Auerbach Publications, 2022.

17 Northpointe. Practitioner’s Guide to COMPAS Core, 2015.

18 Ibidem.