AI systems are assisting humans with increasingly diverse intellectual tasks but remain prone to mistakes. Humans are over-reliant on this assistance when they accept AI-generated advice even though they would have made a better decision on their own. To identify such instances of over-reliance, this paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice. We examine the reasons why an organisation might choose to implement reliance drills and the concerns it may have about doing so. As an example, we consider the benefits and risks that could arise when these drills are used to detect healthcare professionals' over-reliance on AI. We conclude by arguing that reliance drills should become a standard risk-management practice for ensuring humans remain appropriately involved in the oversight of AI-assisted decisions.