
human-generated data from the existing system, and machine learning-based systems tend to act as “mirrors” of the data they are trained on19, COMPAS simply learned to automate many existing biases.

For example, disproportionate policing of certain groups in particular neighborhoods means that these populations, which are often populations of color, are overrepresented in the dataset. Beyond this, the features chosen to represent individuals in the COMPAS dataset (in other words, the abstractions chosen for the data and modeling of this application) include factors such as prior arrest history. Such data points likely have bias built into their measurement because of the over-policing of certain neighborhoods. A similar example is community stability, which is likely to correlate with race due in part to redlining. Not only is it difficult to measure these features accurately, given existing biases in the policing and criminal justice systems, but using them to make future decisions also reinforces the biases already present in the dataset, because the machine-learning system assumes that the distribution of recidivism outcomes given this set of features is constant over time.
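To make this measurement bias concrete, consider the following toy simulation, a hypothetical sketch in Python rather than the COMPAS model or its data; the groups, rates, and variable names are invented purely for illustration. Two populations reoffend at exactly the same underlying rate, but because one is over-policed, its offenses are far more likely to become recorded arrests, so any model that trusts “prior arrests” as a feature will score that group as higher risk.

```python
import random

random.seed(0)

def simulate_person(group):
    # Both groups reoffend at the same underlying rate in this toy world.
    reoffended = random.random() < 0.3
    # In the over-policed group, an offense is far more likely to become a
    # recorded arrest, so "prior arrests" measures policing, not behavior.
    arrest_prob = 0.9 if group == "over_policed" else 0.3
    prior_arrests = 1 if (reoffended and random.random() < arrest_prob) else 0
    return {"group": group, "prior_arrests": prior_arrests}

people = [simulate_person(g) for g in ("over_policed", "other") for _ in range(10_000)]

def mean_recorded_risk(group):
    # A naive "risk score" that simply trusts the arrest record as a feature.
    scores = [p["prior_arrests"] for p in people if p["group"] == group]
    return sum(scores) / len(scores)

print(mean_recorded_risk("over_policed"))  # roughly 0.27
print(mean_recorded_risk("other"))         # roughly 0.09
```

The two groups behave identically, yet the recorded feature tells a different story for each, and a model that assumes this relationship is stable over time will keep retelling it.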

Concretely, consider the findings from ProPublica’s investigative report on the COMPAS system. In their investigation, ProPublica discovered that although race was not used as a predictive feature in the COMPAS system, the system’s false positive rate for Black individuals (44.9%), that is, the rate at which people who did not go on to reoffend were nonetheless labeled high risk, far exceeded the rate for white individuals (23.5%)20. Black individuals, in other words, will disproportionately bear the brunt of the mistakes made by the system when it is deployed as a trusted decision-support tool for judges. The abstractions used to represent these individuals thus mirror and reinforce our broken criminal justice and policing systems21. The COMPAS tool can amplify our existing brokenness, but it cannot possibly conceive of creative solutions that will meaningfully contribute to a more just society.
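For readers who want to see how such a disparity is quantified, here is a minimal, hypothetical sketch, again in Python and not ProPublica’s actual analysis code, of computing a per-group false positive rate from labeled records; the toy records and field names are invented for illustration.

```python
from collections import defaultdict

def false_positive_rate(records):
    # FPR = FP / (FP + TN): among people who did NOT reoffend, the share
    # who were nonetheless labeled high risk.
    fp = sum(1 for r in records if r["predicted_high_risk"] and not r["reoffended"])
    tn = sum(1 for r in records if not r["predicted_high_risk"] and not r["reoffended"])
    return fp / (fp + tn) if (fp + tn) else float("nan")

def fpr_by_group(records, group_key="race"):
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)
    return {group: false_positive_rate(rs) for group, rs in groups.items()}

# Invented toy records; the real ProPublica analysis used thousands of
# Broward County cases and reported 44.9% vs. 23.5%.
records = [
    {"race": "Black", "predicted_high_risk": True,  "reoffended": False},
    {"race": "Black", "predicted_high_risk": False, "reoffended": False},
    {"race": "white", "predicted_high_risk": True,  "reoffended": False},
    {"race": "white", "predicted_high_risk": False, "reoffended": False},
    {"race": "white", "predicted_high_risk": False, "reoffended": False},
    {"race": "white", "predicted_high_risk": False, "reoffended": False},
]
print(fpr_by_group(records))  # {'Black': 0.5, 'white': 0.25}
```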

What does this mean for us technologists who work to develop systems like COMPAS? First, it is critical for us to be aware that we are constantly choosing abstract representations of the world when we create models for the tech applications that we work on.

We have the power to decide which abstractions we use, and there are consequences for the choices we make. We must be mindful of whose perspectives are represented in our choices and whose might be neglected.

If we hope to design systems informed by diverse perspectives, we cannot do it alone. As designers and technologists, many of us have access to social and financial resources that can narrow our perspectives and blind us to the points of view that are absent. It is our responsibility to meaningfully seek out and engage a broad range of stakeholders in our technical decisions, especially those stakeholders who will be most impacted by new technologies.

Without such perspectives informing what should be designed and how, we are better off not designing new technologies at all. In engaging key stakeholders in the design process, we should strive to place impacted communities in control of the abstractions used to represent them.

In designing new technologies, we must ask what is needed rather than merely what is possible. One way to achieve this goal is by working with community representatives and organizations familiar with the needs of those affected by technological change. We must draw on practices like participatory design and value-sensitive design22 while being careful to avoid “participation washing,”23 in which community members are briefly consulted in the design process to “check the participation box.” Instead, we must strive to foster meaningful, mutual, and sustained relationships, in which impacted communities are deeply involved and from which they reap the benefits once new technologies are deployed. We can combine these qualitative and rehumanizing approaches with our quantitative modeling and design processes to produce better overall technical abstractions and models.

Breathing new life into the dry bones of those marginalized by tech requires proximity and mutuality between the modelers and the modeled, and continuous reflection on the part of technologists about our model choices, including choices of abstraction.

Questions about abstraction that we can reflect on in our work include:

Who could be impacted by the work I did today?

Which abstractions or representations did I use to characterize them or factors related to them?

How might my design choices dehumanize these people, and what are the possible impacts of my work?

Where my choices of abstract representations might serve to dehumanize people, how can I learn more about the people I model and their stories?

Reflecting on Accountability: Humanizing the Perpetrators

Suddenly there was a noise, a rattling, and the bones came together, bone to its bone. I looked, and there were sinews on them, and flesh had come upon them, and skin had covered them; but there was no breath in them.

Ezekiel 37:7-8

The second theme the FASPE group reflected on throughout our visits was the humanity of the perpetrators of the Holocaust and how seeing their stories in this way might bear on our professional roles today. Each site challenged us to imagine ourselves in the shoes of the perpetrators and to ask questions about who these people were, what their motivations were, and how seemingly ordinary professionals could have contributed to the largest genocide in history. I often found that I could relate to the backgrounds of the perpetrators in ways that surprised me—many of them were educated, middle-class professionals who valued their families and strove for success in their careers24.

I was particularly struck by our visit to the location of the Wannsee Conference. Of the participants in this genocidal meeting, most were professionals or academics. Eight had PhDs. As we walked onto the grounds of the conference on a beautiful late May morning, I reacted viscerally to seeing the building and the serene, well-kept grounds. The building immediately reminded me of a German research center for computer scientists called Schloss Dagstuhl, where I attended a seminar this past fall. Dagstuhl is a remote and idyllic German castle. Lodging is provided on site, meals are catered, and the grounds include an extensive library, artwork, running paths, and a lake.

As I entered the House of the Wannsee Conference and saw the ornate dining room, the sunlit breakfast room, the marble floors, and the intricately decorated walls, I reflected on what the conference participants expected when they arrived for their meeting and what I have come to expect when I meet with my colleagues and friends to discuss technical solutions to difficult problems.

We meet in top-tier cities with amazing conference venues, expect outstanding meals, stay in fancy hotels, and are waited on by attentive staff, such that nothing can impede our focus on the problems at hand.

19 Shannon Vallor. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press, 2024.

20 Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine Bias. In Ethics of Data and Analytics, pages 254–264. Auerbach Publications, 2022.

21 Shannon Vallor. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press, 2024.

22 Batya Friedman. Value-Sensitive Design. Interactions, 3(6):16–23, 1996. Michael J. Muller and Sarah Kuhn. Participatory Design. Communications of the ACM, 36(6):24–28, 1993.
