Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they lack the trial-and-error experience and reasoning of humans. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to improve the generated answers. The simulator is a world model that learns dynamics and reward separately: the dynamics model comprises an image tokenizer and a causal transformer that generates interaction transitions autoregressively, while the reward model is a bidirectional transformer trained by maximizing the likelihood of expert demonstration trajectories under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework improves answers to multi-agent decision-making problems, achieving superior performance on both training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. In particular, it generates consistent interaction sequences and explainable reward functions at interaction states, opening a path for training the generative models of the future.
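To make the rollout procedure concrete, the sketch below mocks the pipeline described above: a stand-in for the tokenizer-plus-causal-transformer dynamics model steps the state tokens forward autoregressively, a stand-in for the language-conditioned reward model scores each transition, and a rollout loop runs a joint policy inside the world model to produce the image(-token) sequence used as the answer. All class and function names here are illustrative assumptions, not the paper's actual implementation.

```python
class ToyDynamicsModel:
    """Stand-in for the image tokenizer + causal transformer:
    autoregressively predicts the next state tokens given the joint action."""
    def step(self, state_tokens, joint_action):
        # Toy transition rule: mix each agent's action into its token.
        return [(t + a) % 16 for t, a in zip(state_tokens, joint_action)]


class ToyRewardModel:
    """Stand-in for the bidirectional transformer reward model,
    trained under a natural-language task description."""
    def __init__(self, task_description):
        # Hypothetical task conditioning: derive a target token from the text.
        self.target = len(task_description) % 16

    def score(self, state_tokens, next_tokens):
        # Reward transitions whose tokens reach the task-dependent target.
        return sum(1.0 for t in next_tokens if t == self.target)


def rollout(policy, dynamics, reward_model, state_tokens, horizon):
    """Run the joint policy inside the world model; return the generated
    token-frame sequence (the 'image answer') and the accumulated reward."""
    frames, total_reward = [list(state_tokens)], 0.0
    for _ in range(horizon):
        joint_action = policy(state_tokens)
        next_tokens = dynamics.step(state_tokens, joint_action)
        total_reward += reward_model.score(state_tokens, next_tokens)
        frames.append(next_tokens)
        state_tokens = next_tokens
    return frames, total_reward
```

In the actual framework, `policy` would be the converged joint policy trained against the world model, and `frames` would be decoded back into images by the tokenizer; here they are toy placeholders that only preserve the control flow.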