A central challenge in AI-assisted decision making is achieving warranted, well-calibrated trust. Both overtrust (accepting incorrect AI recommendations) and undertrust (rejecting correct advice) should be prevented. Prior studies differ in the design of the decision workflow, i.e., whether users see the AI suggestion immediately (1-step setup) or must first submit their own decision (2-step setup), and in how trust is measured, either through self-reports or behaviorally, as reliance. We examined the effects and interactions of (a) the type of decision workflow, (b) the presence of explanations, and (c) users' domain knowledge and prior AI experience. We compared self-reported trust, reliance (agreement rate and switch rate), and overreliance. Results showed no evidence that a 2-step setup reduces overreliance. The decision workflow also did not directly affect self-reported trust, but it exhibited a crossover interaction with domain knowledge and explanations, suggesting that the effects of explanations alone may not generalize across workflow setups. Finally, our findings confirm that self-reported trust and reliance behavior are distinct constructs that should be evaluated separately in AI-assisted decision making.
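For concreteness, one common way to operationalize these reliance measures is sketched below; the notation is ours and the paper's exact definitions may differ. Writing $a_t$ for the AI recommendation on trial $t$, $d_t^{\text{init}}$ and $d_t^{\text{final}}$ for the user's initial (2-step setup only) and final decisions, and $T$ for the number of trials:
\[
\text{agreement rate} = \frac{\lvert \{ t : d_t^{\text{final}} = a_t \} \rvert}{T},
\qquad
\text{switch rate} = \frac{\lvert \{ t : d_t^{\text{init}} \ne a_t \,\wedge\, d_t^{\text{final}} = a_t \} \rvert}{\lvert \{ t : d_t^{\text{init}} \ne a_t \} \rvert}.
\]
Under this reading, overreliance is the agreement rate restricted to trials on which the AI recommendation is incorrect.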