Markov Potential Games (MPGs) form an important sub-class of Markov games, which are a common framework to model multi-agent reinforcement learning problems. In particular, MPGs include as a special case the identical-interest setting where all the agents share the same reward function. Scaling the performance of Nash equilibrium learning algorithms to a large number of agents is crucial for multi-agent systems. To address this important challenge, we focus on the independent learning setting where agents can only access their local information to update their own policies. In prior work on MPGs, the iteration complexity for obtaining $\epsilon$-Nash regret scales linearly with the number of agents $N$. In this work, we investigate the iteration complexity of an independent policy mirror descent (PMD) algorithm for MPGs. We show that PMD with KL regularization, also known as natural policy gradient, enjoys a better $\sqrt{N}$ dependence on the number of agents, improving over PMD with Euclidean regularization and prior work. Furthermore, the iteration complexity is independent of the sizes of the agents' action spaces.
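For concreteness, a minimal sketch of the per-agent update behind KL-regularized PMD (the natural policy gradient form mentioned above), under the standard softmax-style notation; the stepsize $\eta$ and the local action-value estimate $Q_i^{t}$ are introduced here for illustration only and are not taken from the paper:
$$
\pi_i^{t+1}(a_i \mid s) \;\propto\; \pi_i^{t}(a_i \mid s)\,\exp\!\big(\eta\, Q_i^{t}(s, a_i)\big),
$$
where each agent $i$ updates independently, and $Q_i^{t}(s, a_i)$ denotes agent $i$'s action value at state $s$ when the other agents follow their current policies. The exponential (multiplicative-weights) form arises from solving the mirror descent step with a KL divergence as the regularizer, whereas a Euclidean regularizer would instead yield a projected gradient step on the simplex.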