The rise of generative artificial intelligence (genAI) models creates new possibilities and risks for how the past is remembered by accelerating content production and altering the process of information discovery. The most critical risk is historical misrepresentation, which ranges from the distortion of facts and inaccurate depiction of specific groups to more subtle forms, such as the selective moralization of history. The dangers of misrepresenting the past are particularly pronounced for high-risk memories, such as memories of past atrocities, which carry a strong emotional load and are often instrumentalised by political actors. To understand how substantial this risk is, we empirically investigate how genAI applications deal with high-risk memories of the Second World War atrocities in Ukraine. This case is crucial due to the scope of the atrocities and the intense, often instrumentalised, contestation surrounding their memory. We audit the performance of three common genAI applications for different types of misrepresentation, including hallucinations and inconsistent moralization, and discuss the implications for future memory practices.