Policy Space Response Oracles (PSRO) interleaves empirical game-theoretic analysis with deep reinforcement learning (DRL) to solve games too complex for traditional analytic methods. Tree-exploiting PSRO (TE-PSRO) is a variant of this approach that iteratively builds a coarsened empirical game model in extensive form using data obtained from querying a simulator that represents a detailed description of the game. We make two main methodological advances to TE-PSRO that enhance its applicability to complex games of imperfect information. First, we introduce a scalable representation for the empirical game tree where edges correspond to implicit policies learned through DRL. These policies cover conditions in the underlying game abstracted in the game model, supporting sustainable growth of the tree over epochs. Second, we leverage extensive form in the empirical model by employing refined Nash equilibria to direct strategy exploration. To enable this, we give a modular and scalable algorithm based on generalized backward induction for computing a subgame perfect equilibrium (SPE) in an imperfect-information game. We experimentally evaluate our approach on a suite of games including an alternating-offer bargaining game with outside offers; our results demonstrate that TE-PSRO converges toward equilibrium faster when new strategies are generated based on SPE rather than Nash equilibrium, and with reasonable time/memory requirements for the growing empirical model.
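To give a concrete feel for the equilibrium computation the abstract refers to, the sketch below shows classical backward induction computing an SPE on a tiny perfect-information game tree. This is a simplified illustration only: the `Node`/`Leaf` structures and the ultimatum-style toy game are invented for this example, and the paper's generalized algorithm additionally handles imperfect information (information sets), which this sketch does not.

```python
# Hedged sketch: classical backward induction for SPE on a
# perfect-information game tree. Node/Leaf are illustrative
# structures, not the paper's TE-PSRO empirical-game representation.
from dataclasses import dataclass
from typing import Dict, Tuple, Union

@dataclass
class Leaf:
    payoffs: Tuple[float, float]  # (player 0 payoff, player 1 payoff)

@dataclass
class Node:
    player: int                                # player to move at this node
    children: Dict[str, Union["Node", Leaf]]   # action label -> subtree

def backward_induction(node):
    """Return (payoff vector, strategy) where strategy maps each
    decision node (by id) to the action chosen there."""
    if isinstance(node, Leaf):
        return node.payoffs, {}
    strategy = {}
    best_action, best_payoffs = None, None
    for action, child in node.children.items():
        payoffs, sub_strategy = backward_induction(child)
        strategy.update(sub_strategy)
        # The mover picks the action maximizing its own payoff,
        # given optimal play in every subgame below.
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs = action, payoffs
    strategy[id(node)] = best_action
    return best_payoffs, strategy

# Toy ultimatum-style bargaining tree: player 0 makes an offer,
# player 1 accepts or rejects (rejection yields (0, 0)).
tree = Node(0, {
    "fair":   Node(1, {"accept": Leaf((5, 5)), "reject": Leaf((0, 0))}),
    "greedy": Node(1, {"accept": Leaf((8, 2)), "reject": Leaf((0, 0))}),
})
value, spe = backward_induction(tree)
# Player 1 accepts either offer (2 > 0 and 5 > 0), so player 0
# offers "greedy" in the SPE; value == (8, 2).
```

Because every node's choice is optimal within its own subgame, the resulting strategy profile is subgame perfect, ruling out non-credible threats (e.g. player 1 "threatening" to reject the greedy offer). Extending this recursion to imperfect-information games, where subgames must respect information sets, is precisely the generalization the paper's SPE algorithm addresses.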