As AI systems enter institutional workflows, workers must decide whether to delegate task execution to AI and how much effort to invest in verifying AI outputs, while institutions evaluate workers using outcome-based standards that may misalign with workers' private costs. We model delegation and verification as the solution to a rational worker's optimization problem, and define worker quality by evaluating an institution-centered utility (distinct from the worker's objective) at the resulting optimal action. We formally characterize optimal worker workflows and show that AI induces *phase transitions*, where arbitrarily small differences in verification ability lead to sharply different behaviors. As a result, AI can amplify the quality of workers with strong verification reliability while degrading institutional worker quality for others, who rationally over-delegate and reduce oversight, even when baseline task success improves and no behavioral biases are present. These results identify a structural mechanism by which AI reshapes institutional worker quality and amplifies quality disparities between workers with different verification reliability.
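The phase-transition mechanism can be illustrated with a stylized toy model. The functional forms and parameter values below are illustrative assumptions, not the paper's actual specification: success probabilities are linear in verification effort, so the worker's optimal effort is bang-bang and jumps discontinuously as verification reliability `v` crosses a threshold, producing a sharp gap in institution-centered quality between otherwise similar workers.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
P_SELF = 0.7     # success prob. when the worker performs the task alone
C_SELF = 0.5     # worker's private cost of doing the task manually
P_AI = 0.8       # AI's base success prob. when the task is delegated
C_VERIFY = 0.1   # worker's private cost of full verification effort


def worker_utility(delegate, effort, v):
    """Worker's objective: success probability minus private effort costs.

    v is verification reliability: the probability that a unit of
    verification effort catches and fixes an AI failure.
    """
    if not delegate:
        return P_SELF - C_SELF
    # Verification repairs a fraction v * effort of AI failures.
    p_success = P_AI + (1 - P_AI) * v * effort
    return p_success - C_VERIFY * effort


def optimal_action(v, grid=101):
    """Solve the worker's problem by grid search over verification effort."""
    best = (False, 0.0, worker_utility(False, 0.0, v))
    for effort in np.linspace(0.0, 1.0, grid):
        u = worker_utility(True, effort, v)
        if u > best[2]:
            best = (True, effort, u)
    return best


def institutional_quality(v):
    """Institution-centered utility: success prob. at the worker's optimum."""
    delegate, effort, _ = optimal_action(v)
    if not delegate:
        return P_SELF
    return P_AI + (1 - P_AI) * v * effort


# Marginal benefit of effort is v * (1 - P_AI); marginal cost is C_VERIFY,
# so optimal effort jumps from 0 to 1 at v = C_VERIFY / (1 - P_AI) = 0.5.
for v in (0.3, 0.49, 0.51, 0.9):
    delegate, effort, _ = optimal_action(v)
    print(f"v={v:.2f}  delegate={delegate}  effort={effort:.2f}  "
          f"quality={institutional_quality(v):.3f}")
```

In this sketch every worker rationally delegates, yet workers just below the threshold exert zero oversight while workers just above it verify fully, so institutional quality is discontinuous in `v` even though the workers' verification abilities differ only marginally.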