Across a growing number of fields, human decision making is supported by predictions from AI models. However, we still lack a deep understanding of the effects of adopting these technologies. In this paper, we introduce a general computational framework, the 2-Step Agent, which models the effects of AI-assisted decision making. Our framework uses Bayesian methods for causal inference to model 1) how a prediction on a new observation affects the beliefs of a rational Bayesian agent, and 2) how this change in beliefs affects the downstream decision and subsequent outcome. Using this framework, we show through simulations that a single misaligned prior belief can be sufficient for decision support to result in worse downstream outcomes than no decision support at all. Our results reveal several potential pitfalls of AI-driven decision support and highlight the need for thorough model documentation and proper user training.
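The two-step mechanism can be sketched in miniature. The following is our own illustration, not the paper's implementation: a binary-outcome setting where the agent first updates its belief via Bayes' rule after seeing the AI's prediction (step 1), then acts on a threshold rule (step 2). The prior, the AI's assumed sensitivity/specificity, and the threshold are all hypothetical parameters chosen for illustration.

```python
def posterior(prior, ai_says_positive, sensitivity=0.9, specificity=0.9):
    """Step 1: Bayesian belief update after observing the AI's prediction.

    The agent treats the prediction as a noisy signal with known
    sensitivity and specificity (an assumption of this sketch).
    """
    if ai_says_positive:
        like_pos, like_neg = sensitivity, 1.0 - specificity
    else:
        like_pos, like_neg = 1.0 - sensitivity, specificity
    num = like_pos * prior
    return num / (num + like_neg * (1.0 - prior))


def decide(belief, threshold=0.5):
    """Step 2: take the action iff the posterior belief exceeds a threshold."""
    return belief > threshold


# With a well-calibrated prior, the AI's positive prediction moves the
# belief substantially and flips the decision.
print(posterior(0.3, ai_says_positive=True))    # belief rises well above 0.5

# With a misaligned (inflated) prior, even a negative AI prediction
# leaves the belief above threshold: decision support cannot correct
# the agent, and the downstream decision stays wrong.
print(decide(posterior(0.95, ai_says_positive=False)))
```

The second case illustrates the abstract's claim: a single misaligned prior can dominate the evidence the AI provides, so the supported decision is no better, and can be worse, than the unsupported one.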