Offline multi-agent reinforcement learning (MARL) aims to solve cooperative decision-making problems in multi-agent systems using pre-collected datasets. Existing offline MARL methods primarily constrain training to the dataset distribution, yielding overly conservative policies that struggle to generalize beyond the support of the data. Model-based approaches offer a promising alternative by augmenting the original dataset with synthetic data generated from a learned world model, but the high dimensionality, non-stationarity, and complexity of multi-agent systems make it challenging to accurately estimate transition and reward functions in offline MARL. Given the difficulty of directly modeling joint dynamics, we propose the local-to-global (LOGO) world model, a novel framework that infers global state dynamics from local predictions, which are easier to estimate, thereby improving prediction accuracy while implicitly capturing agent-wise dependencies. Using the trained world model, we generate synthetic data to augment the original dataset, expanding the effective state-action space. To ensure reliable policy learning, we further introduce an uncertainty-aware sampling mechanism that adaptively weights synthetic data by prediction uncertainty, reducing the propagation of approximation errors into the learned policies. In contrast to conventional ensemble-based methods, our approach requires only a single additional encoder for uncertainty estimation, significantly reducing computational overhead while maintaining accuracy. Extensive experiments across 8 scenarios against 8 baselines show that our method surpasses the state of the art on standard offline MARL benchmarks, establishing a new model-based baseline for generalizable offline multi-agent learning.
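To make the local-to-global idea concrete, below is a minimal PyTorch sketch of how such a world model could be structured: per-agent local heads predict each agent's next observation, and a global head composes these local predictions into the joint next state and reward. The module names, layer sizes, and concatenation-based aggregation are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a local-to-global world model (assumed architecture, for
# illustration only): local per-agent dynamics heads feed a global head
# that infers the joint next state and a scalar reward.
import torch
import torch.nn as nn


class LocalToGlobalWorldModel(nn.Module):
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, state_dim: int):
        super().__init__()
        # One local dynamics head per agent: (obs_i, act_i) -> next obs_i.
        # Local transitions are lower-dimensional and easier to estimate.
        self.local_heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, 128),
                nn.ReLU(),
                nn.Linear(128, obs_dim),
            )
            for _ in range(n_agents)
        )
        # Global head: aggregated local predictions -> next global state
        # plus reward. Agent-wise dependencies are composed implicitly here.
        self.global_head = nn.Sequential(
            nn.Linear(n_agents * obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, state_dim + 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor):
        # obs: (batch, n_agents, obs_dim), act: (batch, n_agents, act_dim)
        locals_next = torch.stack(
            [head(torch.cat([obs[:, i], act[:, i]], dim=-1))
             for i, head in enumerate(self.local_heads)],
            dim=1,
        )  # (batch, n_agents, obs_dim)
        out = self.global_head(locals_next.flatten(start_dim=1))
        next_state, reward = out[:, :-1], out[:, -1]
        return locals_next, next_state, reward


# Usage with arbitrary toy dimensions.
model = LocalToGlobalWorldModel(n_agents=3, obs_dim=16, act_dim=4, state_dim=48)
obs, act = torch.randn(32, 3, 16), torch.randn(32, 3, 4)
_, next_state, reward = model(obs, act)  # shapes: (32, 48), (32,)
```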
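The uncertainty-aware sampling mechanism can likewise be sketched in a few lines: each synthetic transition receives a weight that decreases with its estimated prediction uncertainty, and minibatches for policy learning are drawn from the resulting distribution. The exponential weighting, the temperature `tau`, and the gamma-distributed placeholder scores are assumptions for illustration; the abstract specifies only that synthetic data are adaptively weighted by uncertainty estimated with a single additional encoder rather than an ensemble.

```python
# Hedged sketch of uncertainty-aware sampling over synthetic transitions.
# The weighting scheme is an assumed instantiation, not the paper's exact rule.
import numpy as np


def sampling_weights(uncertainty: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Map per-transition uncertainty scores to a sampling distribution."""
    w = np.exp(-uncertainty / tau)  # low uncertainty -> high sampling weight
    return w / w.sum()


rng = np.random.default_rng(0)
# Placeholder uncertainty scores; in practice these would come from the
# (assumed) encoder-based uncertainty estimator applied to model rollouts.
uncertainty = rng.gamma(shape=2.0, scale=0.5, size=10_000)
probs = sampling_weights(uncertainty, tau=0.5)
batch_idx = rng.choice(uncertainty.size, size=256, p=probs)  # weighted minibatch
```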