Strategy learning in multi-agent game environments is a challenging problem. Since each agent's reward is determined by the joint strategy, a greedy learner that maximizes only its own reward may fall into a local optimum. Recent studies have proposed opponent modeling and shaping methods for game environments, which improve the efficiency of strategy learning by modeling the strategies and update processes of other agents. However, these methods often rely on simple predictions of how the opponent's strategy changes. Because they do not model behavioral preferences such as cooperation and competition, they are usually applicable only to predefined scenarios and generalize poorly. In this paper, we propose a novel Preference-based Opponent Shaping (PBOS) method that enhances strategy learning by shaping agents' preferences towards cooperation. We introduce a preference parameter that is incorporated into the agent's loss function, allowing the agent to directly account for the opponent's loss when updating its strategy. The preference parameters are updated concurrently with strategy learning, so that agents can adapt to both cooperative and competitive game environments. Through a series of experiments, we evaluate the PBOS algorithm in a variety of differentiable games. The results show that PBOS guides agents to learn appropriate preference parameters and thereby achieve better reward distributions across multiple game environments.
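The shaping idea above can be sketched on a toy differentiable game. This is a minimal illustration under assumptions of our own (the quadratic game, the one-step-lookahead preference update, and all names below are illustrative, not the paper's exact formulation): each agent descends a shaped loss L_i + b_i * L_j, where b_i is its preference parameter, and b_i is itself adjusted by gradient descent on the agent's true loss.

```python
# Hypothetical two-player differentiable game (illustrative, not from the paper):
#   L1(x, y) = (x + y - 1)^2 + 0.1*x^2
#   L2(x, y) = (x + y - 1)^2 + 0.1*y^2
# Agent 1 descends the shaped loss L1 + b1*L2, where b1 is its learned
# preference parameter (symmetrically for agent 2 with b2).

def grads(x, y):
    """Analytic gradients (dL1/dx, dL1/dy, dL2/dx, dL2/dy)."""
    c = 2.0 * (x + y - 1.0)
    return c + 0.2 * x, c, c, c + 0.2 * y

def train(steps=2000, lr=0.05, lr_b=0.5):
    x = y = 0.0      # scalar strategies
    b1 = b2 = 0.0    # preference parameters, learned jointly with strategies
    for _ in range(steps):
        d1x, d1y, d2x, d2y = grads(x, y)
        # Strategy step on the shaped losses L1 + b1*L2 and L2 + b2*L1.
        xn = x - lr * (d1x + b1 * d2x)
        yn = y - lr * (d2y + b2 * d1y)
        # Preference step (an assumed rule): one-step lookahead on each
        # agent's *true* loss, using dx'/db1 = -lr * d2x by the chain rule.
        n1x, _, _, n2y = grads(xn, yn)
        b1 -= lr_b * n1x * (-lr * d2x)
        b2 -= lr_b * n2y * (-lr * d1y)
        x, y = xn, yn
    return x, y, b1, b2

x, y, b1, b2 = train()
final_L1 = (x + y - 1.0) ** 2 + 0.1 * x * x
```

In this cooperative toy game the learned preferences come out positive, so each agent partially internalizes the other's loss; in a competitive game the same update rule can drive the preferences toward zero or negative values.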