In subjective decision-making, where decisions depend on contextual interpretation, Large Language Models (LLMs) can be integrated to present users with additional rationales to consider. The diversity of these rationales depends on the model's ability to adopt the perspectives of different social actors. However, it remains unclear whether and how models differ in the distribution of perspectives they provide. We compare the perspectives taken by humans and by different LLMs when assessing subtle sexism scenarios. We show that these perspectives fall into a finite set (perpetrator, victim, decision-maker) that appears consistently in the arguments produced by both humans and LLMs, though in differing distributions and combinations, revealing similarities and differences both between models and human responses and among models themselves. We argue for the need to systematically evaluate LLMs' perspective-taking in order to identify the most suitable model for a given decision-making task, and we discuss the implications for model evaluation.