Reinforcement learning for multi-agent games has attracted significant attention recently. However, given the challenge of solving Nash equilibria for large population games, existing works with polynomial complexity guarantees either focus on variants of zero-sum and potential games, aim at solving (coarse) correlated equilibria instead, require access to simulators, or rely on assumptions that are hard to verify. This work proposes MF-OML (Mean-Field Occupation-Measure Learning), an online mean-field reinforcement learning algorithm for computing approximate Nash equilibria of large population sequential symmetric games. MF-OML is the first fully polynomial multi-agent reinforcement learning algorithm for provably solving Nash equilibria (up to mean-field approximation gaps that vanish as the number of players $N$ goes to infinity) beyond variants of zero-sum and potential games. When evaluated by the cumulative deviation from Nash equilibria, the algorithm is shown to achieve a high-probability regret bound of $\tilde{O}(M^{3/4}+N^{-1/2}M)$ for games satisfying the strong Lasry-Lions monotonicity condition, and a regret bound of $\tilde{O}(M^{11/12}+N^{-1/6}M)$ for games satisfying only the Lasry-Lions monotonicity condition, where $M$ is the total number of episodes and $N$ is the number of agents in the game. As a byproduct, we also obtain the first tractable, globally convergent computational algorithm for computing approximate Nash equilibria of monotone mean-field games.
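For intuition, the cumulative deviation from Nash equilibria referenced above is commonly formalized as the sum of per-episode exploitabilities; the notation below ($\pi^m$ for the joint policy deployed in episode $m$, $V_i$ for player $i$'s expected episodic value) is an illustrative sketch of this standard convention rather than the paper's own definitions:
$$\mathrm{Regret}(M) \;=\; \sum_{m=1}^{M} \max_{1\le i\le N}\Bigl(\sup_{\pi'} V_i\bigl(\pi',\pi^m_{-i}\bigr) - V_i\bigl(\pi^m_i,\pi^m_{-i}\bigr)\Bigr).$$
Under this reading, a bound of $\tilde{O}(M^{3/4}+N^{-1/2}M)$ means the average per-episode exploitability decays at rate $\tilde{O}(M^{-1/4}+N^{-1/2})$, i.e., the deployed policies approach a Nash equilibrium up to the vanishing $N^{-1/2}$ mean-field approximation gap. Similarly, the Lasry-Lions monotonicity condition is usually stated (in reward form, with $r(x,\mu)$ the per-step reward at state $x$ under population distribution $\mu$; again a sketch of the standard definition, not necessarily the paper's exact formulation) as
$$\int \bigl(r(x,\mu)-r(x,\mu')\bigr)\,(\mu-\mu')(dx)\;\le\;0 \qquad \text{for all population distributions } \mu,\mu',$$
which roughly says that agents are penalized for congregating in crowded states and which, in its strict form, guarantees uniqueness of the mean-field equilibrium.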