The integration of generative AI into information access systems often presents users with synthesized answers that lack transparency. This study investigates how different types of explanations influence user trust in responses from retrieval-augmented generation systems. We conducted a controlled, two-stage user study in which participants chose the more trustworthy response from a pair, one of objectively higher quality than the other, presented both with and without one of three explanation types: (1) source attribution, (2) factual grounding, and (3) information coverage. Our results show that while explanations significantly guide users toward selecting higher-quality responses, trust is not dictated by objective quality alone: users' judgments are also heavily influenced by response clarity, actionability, and their own prior knowledge.