Reinforcement learning solutions have achieved great success in the 2-player general-sum setting. In this setting, the paradigm of Opponent Shaping (OS), in which agents account for the learning of their co-players, has led to agents that are able to avoid collectively bad outcomes whilst also maximizing their own reward. These methods have so far been limited to 2-player games. However, the real world involves interactions among many more agents, with interactions on both local and global scales. In this paper, we extend OS methods to environments involving multiple co-players and multiple shaping agents. We evaluate on over four different environments, varying the number of players from 3 to 5, and demonstrate that model-based OS methods converge to equilibria with better global welfare than naive learning. However, we find that when playing with a large number of co-players, the relative performance of OS methods declines, suggesting that OS methods may not perform well in the limit. Finally, we explore scenarios in which more than one OS method is present, observing that in games requiring a majority of cooperating agents, OS methods converge to outcomes with poor global welfare.
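To make the OS paradigm concrete, the sketch below illustrates the core idea behind LOLA-style opponent shaping: the shaping agent differentiates its own return through the co-player's anticipated naive learning step. This is a minimal illustrative example, not the implementation evaluated in this paper; the functions `lola_step`, `V1`, `V2`, the toy payoffs, and the learning rates are all assumptions introduced purely for exposition.

```python
# A minimal sketch (assumed, not the paper's code) of an Opponent Shaping update.
# The shaping agent anticipates the co-player's naive gradient step and
# differentiates its own value through that step, as in LOLA-style methods.
import jax
import jax.numpy as jnp


def lola_step(theta1, theta2, V1, V2, alpha=1.0, eta=1.0):
    """One shaping update for agent 1 against a naive-learning co-player.

    theta1, theta2 : policy parameters (flat jnp arrays, for simplicity)
    V1, V2         : callables V(theta1, theta2) -> scalar expected return
    alpha          : shaping agent's learning rate
    eta            : assumed learning rate of the co-player's naive step
    """
    def shaped_return(t1):
        # Anticipate the co-player's naive gradient-ascent step on its own return.
        delta2 = eta * jax.grad(V2, argnums=1)(t1, theta2)
        # Evaluate agent 1's return against the co-player's *updated* parameters.
        return V1(t1, theta2 + delta2)

    # Differentiate agent 1's return through the anticipated co-player update.
    grad1 = jax.grad(shaped_return)(theta1)
    return theta1 + alpha * grad1


# Usage on a toy differentiable "game" (purely illustrative payoffs):
V1 = lambda t1, t2: -jnp.sum((t1 - t2) ** 2)
V2 = lambda t1, t2: -jnp.sum(t2 ** 2) + jnp.sum(t1 * t2)
theta1 = jnp.array([1.0, -0.5])
theta2 = jnp.array([0.2, 0.3])
theta1_new = lola_step(theta1, theta2, V1, V2, alpha=0.1, eta=0.1)
```

Extending this idea to the n-player setting studied here amounts to anticipating the learning steps of several co-players at once, which is where the scaling questions raised in the abstract arise.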