While multi-agent reinforcement learning (MARL) has produced numerous algorithms that converge to Nash or related equilibria, such equilibria are often non-unique and can exhibit widely varying efficiency. This raises a fundamental question: how can one design learning dynamics that not only converge to equilibrium but also select equilibria with desirable performance, such as high social welfare? In contrast to the MARL literature, equilibrium selection has been studied extensively in normal-form games, where decentralized dynamics are known to converge to potential-maximizing or Pareto-optimal Nash equilibria (NEs). Motivated by these results, we study equilibrium selection in finite-horizon stochastic games. We propose a unified actor-critic framework in which a critic learns state-action value functions, and an actor applies a classical equilibrium-selection rule state-wise, treating the learned values as stage-game payoffs. We show that, under standard stochastic stability assumptions, the stochastically stable policies of the resulting dynamics inherit the equilibrium-selection properties of the underlying normal-form learning rule. As consequences, we obtain potential-maximizing policies in Markov potential games and Pareto-optimal (Markov perfect) equilibria in general-sum stochastic games, together with a sample-based implementation of the framework.
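To make the framework concrete, below is a minimal sketch of one possible instantiation, not the paper's exact algorithm. It assumes a tiny common-reward stochastic game (which is a Markov potential game), a model-based critic that computes state-action values by backward induction, and log-linear learning as the classical equilibrium-selection rule applied state-wise; the problem data, the cooling schedule `tau = 1/log(1+t)`, and the helper names `critic` and `log_linear_step` are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical finite-horizon game: 2 players, S states, A actions, horizon H.
# R[h][s, a1, a2] is a common reward, so every stage game is a potential game
# with potential equal to the reward; P[s, a1, a2, :] are transition probabilities.
S, A, H = 3, 2, 2
R = rng.uniform(0, 1, size=(H, S, A, A))
P = rng.dirichlet(np.ones(S), size=(S, A, A))  # shape (S, A, A, S)

def critic(policy):
    """Backward induction: Q[h][s, a1, a2] under the current joint policy.

    policy[h][s] is the deterministic joint action (a1, a2) at stage h, state s.
    A sample-based critic (e.g. TD learning) could replace this exact evaluation.
    """
    Q = np.zeros((H, S, A, A))
    V = np.zeros(S)  # continuation value at stage h + 1
    for h in reversed(range(H)):
        Q[h] = R[h] + P @ V  # P @ V sums over next states s'
        V = np.array([Q[h][s][policy[h][s]] for s in range(S)])
    return Q

def log_linear_step(policy, Q, tau):
    """Actor: one log-linear revision, applied state-wise.

    A uniformly chosen player at a uniformly chosen (h, s) resamples its action
    from a softmax of the stage-game payoffs Q[h][s], holding the other player's
    action fixed. As tau -> 0, the stochastically stable profiles of this chain
    maximize the potential of each stage game.
    """
    h, s, i = rng.integers(H), rng.integers(S), rng.integers(2)
    a1, a2 = policy[h][s]
    payoffs = Q[h][s][:, a2] if i == 0 else Q[h][s][a1, :]
    p = np.exp((payoffs - payoffs.max()) / tau)  # numerically stable softmax
    p /= p.sum()
    new_a = rng.choice(A, p=p)
    policy[h][s] = (new_a, a2) if i == 0 else (a1, new_a)

# --- Actor-critic loop with a slowly cooled temperature ---
policy = [[(0, 0) for _ in range(S)] for _ in range(H)]
for t in range(1, 20001):
    Q = critic(policy)
    log_linear_step(policy, Q, tau=1.0 / np.log(1 + t))
print(policy)  # empirically concentrates on potential-maximizing joint actions
```

In a general-sum game each player would maintain its own value function, and the single shared `Q` above would be replaced by per-player critics; the common-reward setting is used here only so that the potential-maximization guarantee of log-linear learning applies directly.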