AI-based systems can increasingly perform work tasks autonomously. In safety-critical tasks, human oversight of these systems is required to mitigate risks and to ensure responsibility in case something goes wrong. Since people often struggle to stay focused and to exercise effective oversight, intelligent support systems are used to assist them by giving decision recommendations, alerting users, or restricting dangerous actions. However, in cases where recommendations are wrong, decision support might undermine the very reason why human oversight was employed -- genuine moral responsibility. The goal of our study was to investigate how a decision support system that restricts the available interventions affects overseers' perceived moral responsibility, particularly in cases where the support errs. In a simulated oversight experiment, participants (\textit{N}=274) monitored an autonomous drone that faced ten critical situations, choosing from six possible actions to resolve each situation. An AI system constrained participants' choices to six, four, two, or only one option (between-subjects design). Results showed that participants who were restricted to a single action felt less morally responsible when a crash occurred. At the same time, participants' judgments about the responsibility of other stakeholders (the AI; the developer of the AI) did not differ between conditions. Our findings provide important insights for user interface design and oversight architectures: they should prevent users from attributing moral agency to the AI, help them understand how moral responsibility is distributed, and, when oversight aims to prevent ethically undesirable outcomes, be designed to support the epistemic and causal conditions required for moral responsibility.