While Centralized Training with Decentralized Execution (CTDE) has become the prevailing paradigm in Multi-Agent Reinforcement Learning (MARL), it may not be suitable for scenarios in which agents can fully communicate and share observations with each other. Fully centralized methods, also known as Centralized Training with Centralized Execution (CTCE) methods, can fully exploit the observations of all agents by treating the entire system as a single agent. However, traditional CTCE methods suffer from scalability issues due to the exponential growth of the joint action space. To address these challenges, in this paper we propose JointPPO, a CTCE method that uses Proximal Policy Optimization (PPO) to directly optimize the joint policy of the multi-agent system. JointPPO decomposes the joint policy into a product of conditional probabilities, transforming the decision-making process into a sequence generation task. A Transformer-based joint policy network is constructed and trained with a PPO loss tailored to the joint policy. JointPPO effectively handles large joint action spaces and extends PPO to the multi-agent setting in a clear and concise manner. Extensive experiments on the StarCraft Multi-Agent Challenge (SMAC) testbed demonstrate the superiority of JointPPO over strong baselines. Ablation experiments and analyses are conducted to explore the factors influencing JointPPO's performance.
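The core idea, decomposing the joint policy into conditional probabilities so that joint-action selection becomes sequence generation, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `conditional_logits` is a hypothetical stand-in for the Transformer-based joint policy network, and all names are assumptions.

```python
import math
import random

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample_joint_action(conditional_logits, state, n_agents, rng=random):
    """Sample a joint action autoregressively, one agent at a time.

    The joint policy pi(a_1, ..., a_n | s) is factored as the product of
    conditionals pi(a_i | s, a_1, ..., a_{i-1}); `conditional_logits`
    (a hypothetical placeholder for a Transformer decoder head) maps the
    state and previously chosen actions to logits for the next agent.
    """
    actions, joint_log_prob = [], 0.0
    for _ in range(n_agents):
        probs = softmax(conditional_logits(state, actions))
        a = rng.choices(range(len(probs)), weights=probs)[0]
        actions.append(a)
        # Log-prob of the joint action is the sum of the conditional log-probs.
        joint_log_prob += math.log(probs[a])
    return actions, joint_log_prob
```

Because the joint log-probability is a sum of per-agent conditional terms, a single PPO-style importance ratio for the whole joint action can be formed from it, which is what lets PPO operate on the joint policy despite the exponentially large joint action space.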