Trust is not just a cognitive issue but also an emotional one, yet research on human-AI interaction has primarily focused on the cognitive route of trust development. Recent work has highlighted the importance of studying affective trust toward AI, especially in the context of emerging human-like LLM-powered conversational agents. However, there is a lack of validated and generalizable measures for the two-dimensional construct of trust in AI agents. To address this gap, we developed and validated a set of 27-item semantic differential scales for affective and cognitive trust through a scenario-based survey study. We then further validated and applied the scales in an experimental study. Our empirical findings show how the emotional and cognitive aspects of trust interact and collectively shape a person's overall trust in AI agents. Our study methodology and findings also provide insights into the capability of state-of-the-art LLMs to foster trust through different routes.