Trust is a fundamental concept in human decision-making and collaboration that has long been studied in philosophy and psychology. However, software engineering (SE) articles often use the term trust informally; providing an explicit definition or embedding results in established trust models is rare. In SE research on AI assistants, this practice culminates in equating trust with the likelihood of accepting generated content, which, in isolation, does not capture the full conceptual complexity of trust. Without a common definition, true secondary research on trust is impossible. The objectives of our research were: (1) to present the psychological and philosophical foundations of human trust, (2) to systematically study how trust is conceptualized in SE and in the related disciplines of human-computer interaction and information systems, and (3) to discuss the limitations of equating trust with content acceptance, outlining how SE research can adopt existing trust models to overcome the widespread informal use of the term. We conducted a literature review across disciplines and a critical review of recent SE articles focusing on trust conceptualizations. We found that trust is rarely defined or conceptualized in SE articles. Related disciplines commonly embed their methodology and results in established trust models, clearly distinguishing, for example, between initial trust and trust formation, and between appropriate and inappropriate trust. On a meta-scientific level, other disciplines even discuss whether and when the concept of trust can be applied to AI assistants at all. Our study reveals a significant maturity gap in SE trust research compared to other disciplines. We provide concrete recommendations on how SE researchers can adopt established trust models and instruments to study trust in AI assistants beyond the acceptance of generated software artifacts.