AI systems are assisting humans with increasingly diverse intellectual tasks but remain prone to mistakes. Humans over-rely on this assistance when they accept AI-generated advice even though they would have made a better decision on their own. To identify such instances of over-reliance, this paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice. We examine the reasons why an organisation might choose to implement reliance drills and the reservations it may have about doing so. As an example, we consider the benefits and risks that could arise if these drills were used to detect over-reliance on AI among healthcare professionals. We conclude by arguing that reliance drills should become a standard risk management practice for ensuring humans remain appropriately involved in the oversight of AI-assisted decisions.