Multi-agent reinforcement learning (MARL) algorithms often struggle to find strategies close to a Pareto-optimal Nash equilibrium, owing largely to a lack of efficient exploration. The problem is exacerbated in sparse-reward settings, where policy learning exhibits higher variance. This paper introduces MESA, a novel meta-exploration method for cooperative multi-agent learning. MESA learns to explore by first identifying the agents' high-rewarding joint state-action subspace from training tasks and then learning a set of diverse exploration policies that "cover" this subspace. The trained exploration policies can be integrated with any off-policy MARL algorithm at test time. We first showcase MESA's advantage in a multi-step matrix game. Experiments further show that, with the learned exploration policies, MESA achieves significantly better performance on sparse-reward tasks in several multi-agent particle and multi-agent MuJoCo environments, and generalizes to more challenging tasks at test time.
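To make the integration step concrete, the following is a minimal sketch (not the authors' implementation) of one plausible way to mix pre-trained exploration policies into the data collection of an off-policy MARL learner at test time. All names here (collect_episode, learner, exploration_policies, the env interface, and the mixing probability eps) are illustrative assumptions, not part of the paper.

```python
import random

def collect_episode(env, learner, exploration_policies, eps=0.3):
    """Roll out one episode, sampling each joint action either from the
    learner's current policy or (with probability eps) from a randomly
    chosen pre-trained exploration policy; store all transitions in the
    learner's replay buffer for off-policy updates.

    Hypothetical sketch: `learner`, `exploration_policies`, and the env
    API are placeholders, not the paper's released code.
    """
    obs = env.reset()
    done = False
    while not done:
        if random.random() < eps:
            # Pick one of the diverse exploration policies that were
            # meta-trained to "cover" the high-rewarding joint
            # state-action subspace.
            behavior = random.choice(exploration_policies)
        else:
            behavior = learner
        actions = behavior.act(obs)  # joint action for all agents
        next_obs, rewards, done, info = env.step(actions)
        # Off-policy learning lets transitions from any behavior
        # policy be reused by the underlying MARL algorithm.
        learner.replay_buffer.add(obs, actions, rewards, next_obs, done)
        obs = next_obs
    learner.update()  # e.g., an off-policy MARL update on the buffer
```

Because the base algorithm is off-policy, transitions gathered under the exploration policies remain valid training data, which is what makes this kind of plug-in integration possible.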