Conversational AI systems increasingly function as primary interfaces for information seeking, yet how they present sources to support information evaluation remains under-explored. This paper investigates how source transparency design shapes interactive information seeking, trust, and critical engagement. We conducted a controlled between-subjects experiment (N=372) comparing four source presentation interfaces (Collapsible, Hover Card, Footer, and Aligned Sidebar) that vary in visibility and accessibility. Using fine-grained behavioral analysis and automated critical thinking assessment, we found that interface design fundamentally alters exploration strategies and evidence integration. While the Hover Card interface facilitated seamless, on-demand verification during the task, the Aligned Sidebar uniquely mitigated the negative effects of information overload: as citation density increased, Sidebar users demonstrated significantly higher critical thinking and synthesis scores than users in the other conditions. Our results highlight a trade-off between designs that support workflow fluency and those that enforce reflective verification, offering practical implications for designing adaptive and responsible conversational AI that fosters critical engagement with AI-generated content.