Opponent modeling methods typically involve two crucial steps: building a belief distribution over opponents' strategies, and exploiting this opponent model by playing a best response. However, existing approaches typically require domain-specific heuristics to construct such a model, and algorithms for approximating best responses are hard to scale in large, imperfect-information domains. In this work, we introduce a scalable and generic multiagent training regime for opponent modeling using deep game-theoretic reinforcement learning. We first propose Generative Best Response (GenBR), a best-response algorithm based on Monte-Carlo Tree Search (MCTS) with a learned deep generative model that samples world states during planning. This new method scales to large imperfect-information domains and can be plugged into a variety of multiagent algorithms. We use it within the framework of Policy Space Response Oracles (PSRO) to automate the generation of an \emph{offline opponent model} via iterative game-theoretic reasoning and population-based training. We propose using solution concepts based on bargaining theory to build the opponent mixture, which we find identifies profiles near the Pareto frontier. GenBR then keeps updating an \emph{online opponent model} and reacts against it during gameplay. We conduct behavioral studies in which human participants negotiate with our agents in Deal-or-No-Deal, a class of bilateral bargaining games. Search with generative modeling finds stronger policies at both training and test time, enables online Bayesian co-player prediction, and produces agents that, when negotiating with humans, achieve social welfare and Nash bargaining scores comparable to those of humans trading among themselves.
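The bargaining-theoretic solution concepts mentioned above can be made concrete with a minimal sketch. This illustration assumes additive payoffs and a known disagreement point; the function names and the two-player setting are illustrative, not the paper's implementation.

```python
def nash_bargaining_score(payoffs, disagreement):
    """Product of each player's gain over the disagreement point.

    Returns 0.0 if any player would do worse than disagreeing,
    since no rational player accepts such an outcome.
    """
    gains = [p - d for p, d in zip(payoffs, disagreement)]
    if any(g < 0 for g in gains):
        return 0.0
    score = 1.0
    for g in gains:
        score *= g
    return score


def pareto_frontier(outcomes):
    """Keep only outcomes not strictly dominated by another outcome.

    An outcome is dominated if some other outcome is at least as good
    for every player and strictly better for at least one.
    """
    frontier = []
    for o in outcomes:
        dominated = any(
            all(q >= p for p, q in zip(o, other))
            and any(q > p for p, q in zip(o, other))
            for other in outcomes
            if other is not o
        )
        if not dominated:
            frontier.append(o)
    return frontier
```

For example, among the joint payoffs (5, 5), (8, 2), (2, 8), and (3, 3) with a disagreement point of (0, 0), the outcome (3, 3) is Pareto-dominated by (5, 5), and (5, 5) maximizes the Nash bargaining score, reflecting the product criterion's preference for balanced gains.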