Offline reinforcement learning (RL) aims to learn the optimal policy from a fixed dataset generated by behavior policies, without additional environment interaction. A common challenge in this setting is out-of-distribution (OOD) error, which arises when the learned policy leaves the training distribution. Prior methods penalize a statistical distance term to keep the policy close to the behavior policy, but this constrains policy improvement and may not fully prevent OOD actions. Another challenge is that the optimal policy distribution can be multimodal and hard to represent. Recent works apply diffusion or flow policies to address this, but it remains unclear how to avoid OOD errors while retaining policy expressiveness. We propose ReFORM, an offline RL method based on flow policies that enforces the less restrictive support constraint by construction. ReFORM first learns a behavior cloning (BC) flow policy with a bounded source distribution to capture the support of the action distribution, and then optimizes a reflected flow that generates bounded noise for the BC flow, maximizing performance while preserving that support. Across 40 challenging tasks from the OGBench benchmark, with datasets of varying quality and a single set of hyperparameters shared across all tasks, ReFORM dominates all baselines, even those with hand-tuned hyperparameters, on performance profile curves.
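As a rough illustration of the two-stage sampling described above, the sketch below shows how a reflected flow could produce a bounded latent that a frozen BC flow then transports to an action. The network names (`noise_flow`, `bc_flow`), the uniform source on [-1, 1], the reflection boundary, and the Euler integration are all illustrative assumptions, not details taken from the paper.

```python
import torch

def reflect(x, low=-1.0, high=1.0):
    """Fold values back into [low, high] by reflection at the boundaries."""
    rng = high - low
    y = (x - low) % (2 * rng)
    return low + torch.where(y > rng, 2 * rng - y, y)

@torch.no_grad()
def sample_action(noise_flow, bc_flow, obs, action_dim, steps=10):
    # noise_flow(obs, z, t) and bc_flow(obs, a, t) are assumed velocity-field
    # networks returning tensors of shape (batch, action_dim).
    batch = obs.shape[0]
    dt = 1.0 / steps

    # Stage 1: integrate the reflected flow from a bounded (uniform) source.
    # Reflecting each Euler step keeps the latent z inside the box, so the
    # noise handed to the BC flow stays within its bounded source support.
    z = torch.rand(batch, action_dim) * 2 - 1  # assumed uniform source on [-1, 1]^d
    for k in range(steps):
        t = torch.full((batch, 1), k * dt)
        z = reflect(z + dt * noise_flow(obs, z, t))

    # Stage 2: the frozen BC flow transports the bounded latent to an action,
    # which by construction lies in the support captured during BC training.
    a = z
    for k in range(steps):
        t = torch.full((batch, 1), k * dt)
        a = a + dt * bc_flow(obs, a, t)
    return a
```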