Trust biases how users rely on AI recommendations in AI-assisted decision-making tasks: low and high levels of trust increase under- and over-reliance, respectively. We propose that AI assistants should adapt their behavior through trust-adaptive interventions to mitigate such inappropriate reliance. For instance, when user trust is low, providing an explanation can elicit more careful consideration of the assistant's advice. In two decision-making scenarios -- laypeople answering science questions and doctors making medical diagnoses -- we find that providing supporting and counter-explanations during moments of low and high trust, respectively, yields up to a 38% reduction in inappropriate reliance and a 20% improvement in decision accuracy. We similarly reduce over-reliance by adaptively inserting forced pauses that promote deliberation. Our results highlight how AI adaptation to user trust facilitates appropriate reliance, presenting exciting avenues for improving human-AI collaboration.
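The trust-adaptive policy described above can be sketched as a simple selector over an estimated trust level. This is a minimal illustration, not the paper's implementation: the function name, the `Intervention` enum, and the trust thresholds (0.3 and 0.7) are all hypothetical assumptions.

```python
from enum import Enum

class Intervention(Enum):
    SUPPORTING_EXPLANATION = "supporting_explanation"  # for low trust
    COUNTER_EXPLANATION = "counter_explanation"        # for high trust
    FORCED_PAUSE = "forced_pause"                      # alternative for high trust
    NONE = "none"

def select_intervention(trust: float,
                        low: float = 0.3,
                        high: float = 0.7,
                        pause_instead: bool = False) -> Intervention:
    """Choose an intervention from estimated user trust in [0, 1].

    Thresholds `low` and `high` are illustrative, not values from the paper.
    """
    if trust < low:
        # Low trust risks under-reliance: a supporting explanation invites
        # the user to give the assistant's advice more careful consideration.
        return Intervention.SUPPORTING_EXPLANATION
    if trust > high:
        # High trust risks over-reliance: a counter-explanation (or a
        # forced pause) prompts deliberation before accepting the advice.
        return Intervention.FORCED_PAUSE if pause_instead else Intervention.COUNTER_EXPLANATION
    return Intervention.NONE
```

In use, the assistant would re-estimate trust at each decision point and apply the selected intervention before presenting its recommendation.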