The centralized training for decentralized execution paradigm has emerged as the state-of-the-art approach to solving decentralized partially observable Markov decision processes $\epsilon$-optimally. However, scalability remains a significant issue. This paper presents a novel and more scalable alternative: sequential-move centralized training for decentralized execution. This paradigm extends the applicability of Bellman's principle of optimality and yields three new properties. First, it allows a central planner to reason over sufficient sequential-move statistics instead of the prior simultaneous-move ones. Second, it shows that $\epsilon$-optimal value functions are piecewise linear and convex in these sufficient sequential-move statistics. Third, it reduces the complexity of the backup operators from doubly exponential to polynomial, at the expense of longer planning horizons. Moreover, it enables the straightforward use of single-agent methods, e.g., the SARSA algorithm enhanced with these findings, while preserving convergence guarantees. Experiments on two- and many-agent domains from the literature against $\epsilon$-optimal simultaneous-move solvers confirm the superiority of our approach. This paradigm opens the door to efficient planning and reinforcement learning methods for multi-agent systems.
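For readers unfamiliar with the single-agent method mentioned above, the following is a minimal sketch of standard tabular SARSA (on-policy temporal-difference control) on a hypothetical toy chain environment. It illustrates only the generic algorithm the paper builds on, not the paper's sequential-move enhancement; the `ChainEnv` class and all hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Hypothetical toy 4-state chain: moving right from state 0 reaches the goal at state 3."""
    def __init__(self, n=4):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def actions(self, s):
        return ["L", "R"]
    def step(self, a):
        # "R" moves one state right, "L" one state left; bounds are reflecting.
        self.s = min(self.s + 1, self.n - 1) if a == "R" else max(self.s - 1, 0)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def sarsa(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular SARSA with an epsilon-greedy behaviour policy."""
    random.seed(seed)
    Q = defaultdict(float)  # (state, action) -> value estimate

    def policy(s):
        acts = env.actions(s)
        if random.random() < epsilon:
            return random.choice(acts)
        return max(acts, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            if done:
                target = r  # terminal transition: no bootstrap term
            else:
                a2 = policy(s2)
                target = r + gamma * Q[(s2, a2)]  # bootstrap on the action actually taken next
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if not done:
                s, a = s2, a2
    return Q
```

After training, the greedy policy derived from `Q` moves right toward the goal in every non-terminal state.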