Humans frequently make decisions with the aid of artificially intelligent (AI) systems. A common pattern is for the AI to recommend an action to the human, who retains control over the final decision. Researchers have identified ensuring appropriate human reliance on the AI as a critical component of achieving complementary performance. We argue that the definition of appropriate reliance currently used in such research lacks formal statistical grounding and can lead to contradictions. We propose a formal definition of reliance, based on statistical decision theory, which separates reliance, defined as the probability that the decision-maker follows the AI's prediction, from the challenges a human may face in differentiating the signals and forming accurate beliefs about the situation. Our definition gives rise to a framework that can guide the design and interpretation of studies on human-AI complementarity and reliance. Using recent AI-advised decision-making studies from the literature, we demonstrate how our framework can be used to separate the loss due to mis-reliance from the loss due to not accurately differentiating the signals. We evaluate these losses against a baseline and a benchmark for complementary performance, each defined by the expected payoff achieved by a rational agent facing the same decision task as the behavioral agents.
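To make the decomposition concrete, one illustrative way to write it (the notation here is ours and is not taken verbatim from any cited study): let $V^{\mathrm{bench}}$ denote the expected payoff of a rational agent who observes the same signals as the human (the benchmark for complementary performance), let $V^{\mathrm{beh}}$ denote the behavioral agent's expected payoff, and let $V^{\mathrm{cal}}$ denote the hypothetical payoff of an agent who differentiates signals exactly as the human does but relies on the AI optimally. The total loss then splits as
\[
\underbrace{V^{\mathrm{bench}} - V^{\mathrm{beh}}}_{\text{total loss}}
= \underbrace{\bigl(V^{\mathrm{bench}} - V^{\mathrm{cal}}\bigr)}_{\text{loss from mis-differentiating signals}}
+ \underbrace{\bigl(V^{\mathrm{cal}} - V^{\mathrm{beh}}\bigr)}_{\text{loss from mis-reliance}},
\]
with reliance itself measured as $r = \Pr(\text{final decision} = \text{AI recommendation})$, the probability that the decision-maker follows the AI's prediction.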