As AI usage becomes more prevalent in social contexts, understanding agent-user interaction is critical to designing systems that improve both individual and group outcomes. We present an online behavioral experiment (N = 243) in which participants play three multi-turn bargaining games in groups of three. Each game, presented in randomized order, grants \textit{access to} a single LLM assistance modality: proactive recommendations from an \textit{Advisor}, reactive feedback from a \textit{Coach}, or autonomous execution by a \textit{Delegate}; all three modalities are powered by the same underlying LLM, which achieves superhuman performance in an all-agent environment. On each turn, participants privately decide whether to act manually or use the AI modality available in that game. Although participants prefer the \textit{Advisor} modality, they achieve the highest mean individual gains with the \textit{Delegate}, demonstrating a preference-performance misalignment. Moreover, delegation generates positive externalities: even non-adopting users in \textit{access-to-delegate} treatment groups benefit by receiving higher-quality offers. Mechanism analysis reveals that the \textit{Delegate} agent acts as a market maker, injecting rational, Pareto-improving proposals that restructure the trading environment. Our research reveals a gap between agent capabilities and realized group welfare. While autonomous agents can exhibit superhuman strategic performance, their impact on realized welfare gains can be constrained by interfaces, user perceptions, and adoption barriers. Assistance modalities should be designed as mechanisms with endogenous participation; adoption-compatible interaction rules are a prerequisite for improving human welfare with automated assistance.