From autonomous driving to package delivery, ensuring safe yet efficient multi-agent interaction is challenging because the interaction dynamics are influenced by hard-to-model factors such as social norms and contextual cues. Understanding these influences can aid in the design and evaluation of socially aware autonomous agents whose behaviors are aligned with human values. In this work, we seek to codify the factors governing safe multi-agent interactions through the lens of responsibility, i.e., an agent's willingness to deviate from its desired control to accommodate safe interaction with others. Specifically, we propose a data-driven modeling approach based on control barrier functions and differentiable optimization that efficiently learns agents' responsibility allocation from data. We demonstrate on synthetic and real-world datasets that we can obtain an interpretable and quantitative understanding of how much agents adjust their behavior to ensure the safety of others in their current environment.
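To make the notion of responsibility concrete, the following is a minimal sketch of how a responsibility-weighted control barrier function (CBF) safety filter might look. It assumes single-integrator agents and a pairwise distance barrier h(x) = ||p_self − p_other||² − d_min², and splits the shared safety constraint ∇h·u ≥ −αh between two agents according to a responsibility weight γ ∈ [0, 1]. The specific split used here, and all function and parameter names, are illustrative assumptions, not the paper's actual formulation; the single-constraint QP admits a closed-form projection, which is what makes the layer differentiable in practice.

```python
import numpy as np

def cbf_responsibility_control(p_self, p_other, u_des, gamma,
                               d_min=0.8, alpha=1.0):
    """Decentralized CBF safety filter for one single-integrator agent.

    Barrier: h = ||p_self - p_other||^2 - d_min^2 (h > 0 means currently safe).
    The agent enforces only its share of the joint constraint:
        grad_h . u_self >= -(1 - gamma) * alpha * h
    where gamma in [0, 1] is its responsibility weight: gamma = 1 gives the
    tightest constraint (the agent absorbs the whole safety burden), while
    gamma = 0 leaves the burden entirely to the other agent. If the two
    agents' weights sum to 1, the joint constraint is recovered.
    NOTE: this split is an illustrative assumption, not the paper's model.
    """
    dp = p_self - p_other
    h = dp @ dp - d_min ** 2            # barrier value
    a = 2.0 * dp                        # gradient of h w.r.t. p_self
    b = -(1.0 - gamma) * alpha * h      # this agent's share of -alpha*h
    # Closed-form solution of: min ||u - u_des||^2  s.t.  a . u >= b
    if a @ u_des >= b:
        return u_des                    # nominal control already satisfies it
    return u_des + ((b - a @ u_des) / (a @ a)) * a
```

A fully responsible agent (γ = 1) deviates more from its nominal control than an agent that shifts the burden to others (γ = 0); fitting γ from observed trajectories, e.g., by differentiating through this projection, is one way to read off how much each agent accommodates the other.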