We develop a decision-theoretic model of human-AI interaction to study when AI assistance improves or impairs human decision-making. A human decision-maker observes private information and receives a recommendation from an AI system, but may combine these signals imperfectly. We show that the effect of AI assistance decomposes into two main forces: the marginal informational value of the AI beyond what the human already knows, and a behavioral distortion arising from how the human uses the AI's recommendation. Central to our analysis is a micro-founded measure of informational overlap between human and AI knowledge. We study an empirically relevant form of imperfect decision-making -- correlation neglect -- whereby humans treat AI recommendations as independent of their own information despite shared evidence. Under this model, we characterize how informational overlap and AI capability determine which human-AI interaction regime obtains: augmentation, impairment, complementarity, or automation.
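The correlation-neglect mechanism can be illustrated with a minimal numerical sketch. This is not the paper's model, only a hedged Gaussian example under assumed parameter values: the human's signal and the AI's recommendation share a common noise component (the informational overlap), and a decision-maker who treats the two signals as conditionally independent double-counts the shared evidence and ends up overconfident.

```python
import numpy as np

# Illustrative sketch (hypothetical parameters, not the paper's model).
# State: theta ~ N(0, 1). Signals given theta:
#   s_h  = theta + e_c + e_h   (human's private signal)
#   s_ai = theta + e_c + e_ai  (AI recommendation)
# The shared term e_c makes the signals conditionally correlated;
# correlation neglect means ignoring the off-diagonal covariance.

sigma_c2 = 1.0   # variance of the shared evidence noise (overlap)
sigma_i2 = 0.5   # variance of each idiosyncratic noise term

# Conditional noise covariance of (s_h, s_ai) given theta
Sigma = np.array([[sigma_c2 + sigma_i2, sigma_c2],
                  [sigma_c2, sigma_c2 + sigma_i2]])
# Correlation neglect: treat the signals as independent
Sigma_naive = np.diag(np.diag(Sigma))

ones = np.ones(2)

def posterior_var(noise_cov):
    # Normal-normal update with unit prior precision:
    # posterior precision = 1 + 1' C^{-1} 1
    precision = 1.0 + ones @ np.linalg.inv(noise_cov) @ ones
    return 1.0 / precision

v_correct = posterior_var(Sigma)        # accounts for the overlap
v_naive = posterior_var(Sigma_naive)    # neglects the correlation

print(f"correct posterior variance: {v_correct:.3f}")
print(f"naive posterior variance:   {v_naive:.3f}")
# The naive posterior variance is smaller than warranted: the shared
# evidence is counted twice, producing overconfidence.
assert v_naive < v_correct
```

With these values the correct posterior variance is 1/1.8 while the naive one is 1/(7/3), so correlation neglect shrinks perceived uncertainty even though no extra information has arrived; the gap grows with the overlap parameter `sigma_c2`.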