Despite the growing interest in collaborative AI, designing systems that seamlessly integrate human input remains a major challenge. In this study, we developed a task to systematically examine human preferences for collaborative agents. We created and evaluated five collaborative AI agents whose strategies differ in the manner and degree to which they adapt to human actions. Participants interacted with a subset of these agents, evaluated their perceived traits, and selected their preferred agent. We used a Bayesian model to understand how agents' strategies influence human-AI team performance and the agents' perceived traits, and to identify the factors shaping human preferences in pairwise agent comparisons. Our results show that agents that are more considerate of human actions are preferred over purely performance-maximizing agents. Moreover, we show that such human-centric design can improve the likability of AI collaborators without reducing performance. We find evidence that inequality aversion drives human choices, suggesting that people prefer collaborative agents that allow them to contribute meaningfully to the team. Taken together, these findings demonstrate how collaboration with AI can benefit from development efforts that incorporate both subjective and objective metrics.
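The abstract does not specify the form of the Bayesian model for pairwise agent comparisons. As a rough, hedged illustration of one common approach to such data, the sketch below fits a Bradley-Terry preference model by MAP estimation; the variable names, toy data, and estimation procedure are all assumptions for illustration, not the paper's actual model or results.

```python
import numpy as np

# Hypothetical pairwise-choice data for 5 agents: (chosen, rejected) indices.
# Illustrative records only, not the study's actual data.
choices = [(0, 1), (0, 2), (3, 1), (0, 4), (3, 2), (0, 3)]
n_agents = 5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# MAP estimate of latent preference scores under a Bradley-Terry likelihood
# with a standard-normal prior (a full Bayesian analysis would instead
# sample the posterior, e.g. with MCMC).
scores = np.zeros(n_agents)
prior_sd, lr = 1.0, 0.05
for _ in range(1000):
    grad = -scores / prior_sd**2  # gradient of the Gaussian log-prior
    for chosen, rejected in choices:
        # d/ds log sigmoid(s_chosen - s_rejected)
        p_reject = sigmoid(scores[rejected] - scores[chosen])
        grad[chosen] += p_reject
        grad[rejected] -= p_reject
    scores += lr * grad

# A higher score means the agent is more likely to win a pairwise comparison.
print(np.round(scores, 2))
```

In a model of this family, agent-level covariates (e.g., how strongly each agent adapts to human actions, or an inequality-aversion term) can be regressed onto the latent scores to test which factors drive preference.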