Human interactions are shaped by emotions, temperament, and affection, which often conflict with individuals' underlying preferences. Without explicit knowledge of those preferences, judging whether a behaviour is appropriate becomes guesswork, leaving us highly prone to misinterpretation. Yet such understanding is critical if autonomous agents are to collaborate effectively with humans. We frame the problem as multi-agent inverse reinforcement learning and show that even a simple model, in which agents weigh their own welfare against that of others, can capture a wide range of social behaviours. Using novel Bayesian techniques, we find that intrinsic rewards and altruistic tendencies can be reliably identified by placing agents in different groups. Crucially, disentangling intrinsic motivation from altruism enables the synthesis of new behaviours aligned with any desired level of altruism, even when the demonstrations are drawn from restricted behaviour profiles.
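To make the welfare trade-off concrete, a minimal sketch of one such reward model follows; the notation (intrinsic utility \(u_i\), altruism weight \(\alpha_i\), and the set of other agents \(\mathcal{N}_{-i}\)) is illustrative, not taken from the paper:
\[
r_i(s, a) \;=\; (1 - \alpha_i)\, u_i(s, a) \;+\; \frac{\alpha_i}{|\mathcal{N}_{-i}|} \sum_{j \in \mathcal{N}_{-i}} u_j(s, a),
\]
where \(\alpha_i \in [0, 1]\) interpolates between pure self-interest (\(\alpha_i = 0\)) and pure altruism (\(\alpha_i = 1\)). Jointly inferring \((u_i, \alpha_i)\) from demonstrations gathered across different group compositions is what separates intrinsic motivation from altruistic weighting: the same intrinsic utility must explain behaviour in every group, while the social term varies with the group's composition.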
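As a toy illustration of how altruism can be identified from demonstrations alone, the sketch below runs grid-based Bayesian inference over \(\alpha\) for a single agent in a one-shot resource-sharing game. The Boltzmann-rational choice model, the sharing game, and all names (`reward`, `likelihood`, `beta`) are assumptions made for illustration, not the paper's own Bayesian techniques.

```python
import numpy as np

# Toy sketch (illustrative assumptions, not the paper's method):
# Bayesian inference of an altruism weight alpha from observed actions.
# The agent chooses what fraction of a unit resource to give away; its
# reward mixes its own payoff and the recipients' payoff.
rng = np.random.default_rng(0)

actions = np.linspace(0.0, 1.0, 21)  # fraction of the resource shared

def reward(alpha, a):
    """Welfare-weighting reward: (1 - alpha) * own payoff + alpha * others'."""
    return (1.0 - alpha) * (1.0 - a) + alpha * a

def likelihood(alpha, a_obs, beta=8.0):
    """Boltzmann-rational choice probability of a_obs on the action grid."""
    logits = beta * reward(alpha, actions)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[np.argmin(np.abs(actions - a_obs))]

# Simulate demonstrations from a moderately altruistic agent (alpha = 0.7).
true_alpha = 0.7
demo_probs = np.exp(8.0 * reward(true_alpha, actions))
demo_probs /= demo_probs.sum()
demos = rng.choice(actions, size=50, p=demo_probs)

# Grid posterior over alpha under a uniform prior.
alphas = np.linspace(0.0, 1.0, 101)
log_post = np.array([sum(np.log(likelihood(al, a)) for a in demos)
                     for al in alphas])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mean alpha = {alphas @ post:.3f}  (true {true_alpha})")
```

In this simplified setting a single game suffices because the intrinsic payoff is fixed by construction; in the paper's setting, where intrinsic rewards are themselves unknown, varying the group composition plays the analogous role of making \(\alpha\) identifiable.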