Cooperative multi-agent reinforcement learning (MARL) commonly adopts centralized training with decentralized execution, where value-factorization methods enforce the individual-global-maximum (IGM) principle so that decentralized greedy actions recover the team-optimal joint action. However, this recipe is unreliable in real-world settings because of environmental uncertainties arising from the sim-to-real gap, model mismatch, and system noise. We address this gap by introducing Distributionally Robust IGM (DrIGM), a principle that requires each agent's robust greedy action to align with the robust team-optimal joint action. We show that DrIGM holds for a novel definition of robust individual action values that is compatible with decentralized greedy execution and yields a provable robustness guarantee for the whole system. Building on this foundation, we derive DrIGM-compliant robust variants of existing value-factorization architectures (e.g., VDN/QMIX/QTRAN) that (i) train on robust Q-targets, (ii) preserve scalability, and (iii) integrate seamlessly with existing codebases without bespoke per-agent reward shaping. Empirically, on high-fidelity SustainGym simulators and a StarCraft game environment, our methods consistently improve out-of-distribution performance. Code and data are available at https://github.com/crqu/robust-coMARL.
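To make the robust Q-target idea concrete, below is a minimal sketch, not the paper's actual DrIGM construction: it assumes a VDN-style additive mixer and approximates the uncertainty set with Gaussian perturbations of the next observations. The names `robust_vdn_target`, `target_net`, `noise_std`, and `n_samples` are hypothetical illustrations, not identifiers from the released code.

```python
import torch

def robust_vdn_target(target_net, next_obs, rewards, dones,
                      gamma=0.99, noise_std=0.05, n_samples=8):
    """Hypothetical sketch of a robust Q-target with VDN-style mixing.

    target_net maps observations [batch, n_agents, obs_dim] to individual
    action values [batch, n_agents, n_actions]. The uncertainty set is
    approximated here by Gaussian perturbations of the next observations;
    the paper's actual formulation may differ.
    """
    candidates = []
    for _ in range(n_samples):
        # Sample one member of the (approximate) uncertainty set.
        perturbed = next_obs + noise_std * torch.randn_like(next_obs)
        q_next = target_net(perturbed)          # [batch, n_agents, n_actions]
        # Decentralized robust greedy step: each agent maximizes its own value.
        greedy = q_next.max(dim=-1).values      # [batch, n_agents]
        # VDN-style additive mixing of individual values into a joint value.
        candidates.append(greedy.sum(dim=-1))   # [batch]
    # Pessimistic (worst-case) joint value over the sampled uncertainty set.
    robust_next_q = torch.stack(candidates, dim=0).min(dim=0).values
    return rewards + gamma * (1.0 - dones) * robust_next_q
```

A hard min over sampled perturbations is the crudest pessimism operator; a soft-min or a dual formulation over a divergence ball would give smoother targets, at the cost of extra tuning.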