Recent years have witnessed a rapid growth of recommender systems, providing suggestions in numerous applications with potentially high social impact, such as health or justice. Meanwhile, in Europe, the upcoming AI Act mentions \emph{transparency} as a requirement for critical AI systems in order to ``mitigate the risks to fundamental rights''. Post-hoc explanations align seamlessly with this goal, and the extensive literature on the subject has produced several forms of such objects, graphs being one of them. Early studies in visualization demonstrated graphs' ability to improve user understanding, positioning them as potentially ideal explanations. However, it remains unclear how graph-based explanations compare to other explanation designs. In this work, we aim to determine the effectiveness of graph-based explanations in improving users' perception of AI-based recommendations using a mixed-methods approach. We first conduct a qualitative study to collect users' requirements for graph explanations. We then run a larger quantitative study in which we evaluate the influence of various explanation designs, including enhanced graph-based ones, on aspects such as understanding, usability, and curiosity toward the AI system. We find that users perceive graph-based explanations as more usable than designs involving feature importance. However, we also reveal that textual explanations lead to higher objective understanding than graph-based designs. Most importantly, we highlight the strong contrast between participants' expressed preferences for the graph design and their actual ratings when using it, which are lower than for the textual design. These findings imply that meeting stakeholders' expressed preferences might not, by itself, guarantee ``good'' explanations. Therefore, crafting hybrid designs that successfully balance social expectations with downstream performance emerges as a significant challenge.