Memoria [EN] Nr 87 | Page 5

“We don’t trust AI and AI doesn’t understand us.”

There was laughter from the audience – but there was truth, too, in the quip. The conference on AI in the Holocaust education, remembrance, and research sector, held in London on 1 December 2024, brought together experts from media studies, sociology, and Holocaust studies to unpack that quip and explore the challenges and opportunities AI presents for the future of Holocaust memory.

The fear: deepfakes, disinformation, and chatbots

Dr Victoria Grace Richardson-Walden of the Landecker Digital Memory Lab opened the session by encouraging participants to beware of the hype around AI. She outlined the various types of AI already woven into our daily lives – from Netflix recommendations to SatNav to email spam filters – and underlined that the technology can be used in both positive and negative ways.

Danny Morris from the Community Security Trust shared information about the guardrails in place to stop AI being used to generate inaccurate or offensive material related to the Holocaust. He also explained how bad actors use descriptive prompts to bypass safety features. Morris stressed that antisemitism is not a new phenomenon, but that AI provides new ways to express the hatred. The audience was stunned into silence by an AI-generated video based on Mein Kampf, examples of chatbot conversations with Nazi perpetrators, and photos of a young Adolf Hitler with his arm draped around Anne Frank.

Noah Kravitz, creator of the NVIDIA AI Podcast, also underlined the danger of disinformation and stated that “AI has the potential to supercharge and transform anything that humans do. We cannot mitigate all harms. Bad actors will always look for ways to circumvent.”

Turning towards more technical dangers, Dr Richardson-Walden explained that AI can only draw from the information it is trained on. This means that if AI models only have access to inaccurate sources or limited narratives, they will produce flawed outputs or reproduce the same well-known stories and facts over and over again. These information loops amplify some narratives while eroding the breadth and depth of the history of the Holocaust. The mass digitization of records and their integration into AI systems can go some way towards protecting the record of the Holocaust from erasure or distortion.

During a panel discussion moderated by IHRA delegate Martin Winstone, concerns were also raised over the unethical practices of commercial AI companies that rely on low-paid manual labour to tag and moderate content, as well as over the environmental impact of AI.

But there were hopeful stories too. Dr Yael Richler Friedman, Pedagogical Director of Yad Vashem’s International Institute for Holocaust Education, explained how AI had helped Yad Vashem identify the names of more than 400 previously unknown victims of the Holocaust. However, she also told an anecdote about how AI had mistaken the very common word “li” – which means ‘me’ in Hebrew – for a family name, drawing the conclusion that there were hundreds of additional victims with this ‘surname’. The story made a strong case for the need for human review and oversight in any AI-powered project. Shiran Mlamdovsky Somech of Generative AI for Good presented an AI-created telling of the Warsaw Ghetto Uprising.

Dr Robert Williams, Finci-Viterbi Executive Director of USC Shoah Foundation, spoke about how his organization is training Large Language Models to catalogue testimonies and carry out real-time analysis of content for moderation. He also highlighted that AI can be used to translate testimonies into multiple languages, facilitating access to people all over the world.

Clementine Smith of the Holocaust Education Trust spoke about their 360 Testimony project, which has two components: first, students engage with USC Shoah Foundation testimonies in which authentic pre-recorded answers are matched to real-time student questions by AI. Then, a Virtual Reality headset transports learners to the present-day locations where the survivors’ stories unfolded. With only a few hundred thousand Holocaust survivors still with us worldwide, Smith stressed the value in seeking innovative, thoughtful ways to use
