Online planning has proven effective in reinforcement learning (RL) for improving sample efficiency and final performance. However, using planning for environment interaction inevitably introduces a divergence between the collected data and the policy's actual behaviors, degrading both model learning and policy improvement. To address this, we propose BOOM (Bootstrap Off-policy with WOrld Model), a framework that tightly integrates planning and off-policy learning through a bootstrap loop: the policy initializes the planner, and the planner refines actions to bootstrap the policy through behavior alignment. This loop is supported by a jointly learned world model, which enables the planner to simulate future trajectories and provides value targets to facilitate policy improvement. The core of BOOM is a likelihood-free alignment loss that bootstraps the policy using the planner's non-parametric action distribution, combined with a soft value-weighted mechanism that prioritizes high-return behaviors and mitigates variability in the planner's action quality within the replay buffer. Experiments on the high-dimensional DeepMind Control Suite and Humanoid-Bench show that BOOM achieves state-of-the-art results in both training stability and final performance. The code is accessible at https://github.com/molumitu/BOOM_MBRL.
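The soft value-weighted alignment idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's actual loss: it uses a softmax over the planner candidates' estimated returns as weights and a simple weighted squared-error surrogate for the likelihood-free alignment; the function names, the temperature `tau`, and the squared-error choice are all illustrative.

```python
import numpy as np

def soft_value_weights(values, tau=1.0):
    """Softmax over estimated returns: higher-return planner
    candidates receive larger weight (tau controls sharpness)."""
    z = (values - values.max()) / tau  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def weighted_alignment_loss(policy_action, planner_actions, values, tau=1.0):
    """Likelihood-free alignment sketch: pull the policy's action toward
    the planner's candidate actions, weighted by soft value weights,
    so high-return behaviors dominate the bootstrap signal."""
    w = soft_value_weights(values, tau)
    sq_err = ((policy_action - planner_actions) ** 2).sum(axis=-1)
    return float((w * sq_err).sum())

# Toy example: 3 planner candidates in a 2-D action space.
planner_actions = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
values = np.array([1.0, 2.0, 5.0])  # planner's value estimates

# A policy action near the high-value candidate incurs a much
# smaller alignment loss than one near the low-value candidate.
loss_near_best = weighted_alignment_loss(np.array([2.0, 2.0]), planner_actions, values, tau=0.5)
loss_near_worst = weighted_alignment_loss(np.array([0.0, 0.0]), planner_actions, values, tau=0.5)
```

In practice the policy would be a parametric network trained by gradient descent on such a loss over replay-buffer batches; the sketch only shows how value weighting prioritizes high-return planner behaviors.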