Reparameterization Policy Gradient (RPG) has emerged as a powerful paradigm for model-based reinforcement learning, achieving high sample efficiency by backpropagating gradients through differentiable dynamics. However, prior RPG approaches have been largely restricted to Gaussian policies, which limits their performance and forgoes recent advances in generative modeling. In this work, we identify that flow policies, which generate actions via differentiable ODE integration, naturally align with the RPG framework, a connection not established in prior work. Naively exploiting this synergy proves ineffective, however, suffering from training instability and insufficient exploration. We propose Reparameterization Flow Policy Optimization (RFO), which computes policy gradients by backpropagating jointly through the flow generation process and the system dynamics, unlocking high sample efficiency without requiring intractable log-likelihood computations. RFO includes two tailored regularization terms for stability and exploration, and we further propose a variant with action chunking. Extensive experiments on diverse locomotion and manipulation tasks, spanning both rigid and soft bodies with state or visual inputs, demonstrate the effectiveness of RFO. Notably, on a challenging locomotion task controlling a soft-body quadruped, RFO achieves almost $2\times$ the reward of the state-of-the-art baseline.
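To make the core mechanism concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a reparameterized flow policy: an action is produced by Euler-integrating a learned velocity field from Gaussian noise, the state is advanced through toy differentiable dynamics, and a single `backward()` call propagates the return's gradient jointly through both the ODE integration and the dynamics. All names (`velocity_net`, `flow_action`, the linear dynamics) are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
state_dim, act_dim, flow_steps, horizon = 4, 2, 8, 10

# Learned velocity field v(state, action, t) for the flow ODE (illustrative).
velocity_net = torch.nn.Sequential(
    torch.nn.Linear(state_dim + act_dim + 1, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, act_dim),
)

# Toy differentiable linear dynamics s' = A s + B a (stand-in for a simulator).
A = 0.9 * torch.eye(state_dim)
B = 0.1 * torch.randn(state_dim, act_dim)

def flow_action(state):
    # Reparameterized sampling: start from noise and Euler-integrate the
    # learned ODE; every step is differentiable, so no log-likelihood is needed.
    a = torch.randn(act_dim)
    dt = 1.0 / flow_steps
    for k in range(flow_steps):
        t = torch.tensor([k * dt])
        a = a + dt * velocity_net(torch.cat([state, a, t]))
    return a

def rollout():
    state = torch.zeros(state_dim)
    total = torch.tensor(0.0)
    for _ in range(horizon):
        action = flow_action(state)
        state = A @ state + B @ action      # backprop through dynamics
        total = total - (state ** 2).sum()  # toy reward: stay near the origin
    return total

opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)
ret = rollout()
(-ret).backward()  # gradient flows through flow generation AND dynamics
opt.step()
```

This omits the paper's two regularization terms and the action-chunking variant; it only illustrates why the flow policy's differentiable action generation composes naturally with backpropagation through dynamics.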