Off-policy learning methods seek to derive an optimal policy directly from a fixed dataset of prior interactions. This objective presents significant challenges, primarily due to inherent distributional shift and value-function overestimation bias. These issues become even more pronounced in zero-shot reinforcement learning, where an agent trained on reward-free data must adapt to new tasks at test time without additional training. In this work, we address the off-policy problem in the zero-shot setting by establishing a theoretical connection between successor measures and stationary density ratios. Using this insight, our algorithm can infer optimal importance sampling ratios, effectively performing stationary distribution correction with an optimal policy for any task on the fly. We benchmark our method on motion tracking with the SMPL Humanoid, continuous control on ExoRL, and long-horizon OGBench tasks. Our technique integrates seamlessly into forward-backward representation frameworks and enables fast adaptation to new tasks in a training-free regime. More broadly, this work bridges off-policy learning and zero-shot adaptation, offering benefits to both research areas.
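As a rough sketch of the stated connection (not spelled out in the abstract itself), and assuming the standard successor-measure and forward-backward conventions, the symbols $M^{\pi}$, $m^{\pi}$, $d^{\pi}$, $\rho$, $\mu_0$, $F$, $B$, and the task embedding $z$ below are our own notational assumptions rather than the paper's:

% Hypothetical sketch: successor measures as stationary density ratios,
% read off from a forward-backward factorization. Notation assumed, not quoted.
\begin{align*}
  % Successor measure: discounted visitation of future states from (s_0, a_0).
  M^{\pi}(s_0, a_0, X) &= \sum_{t \ge 0} \gamma^{t}\, \Pr\!\left(s_{t+1} \in X \mid s_0, a_0, \pi\right), \\
  % If M^pi admits a density m^pi w.r.t. the data distribution rho, averaging over
  % the initial distribution recovers a stationary density ratio:
  (1-\gamma)\, \mathbb{E}_{s_0 \sim \mu_0,\, a_0 \sim \pi}\!\left[ m^{\pi}(s_0, a_0, s') \right]
    &= \frac{d^{\pi}(s')}{\rho(s')}, \\
  % With a forward-backward factorization M^{\pi_z}(s,a,\mathrm{d}s') \approx F(s,a,z)^{\top} B(s')\,\rho(\mathrm{d}s'),
  % the importance ratio for the (near-)optimal policy of task z is available in closed form:
  w_{z}(s') &\approx (1-\gamma)\, \mathbb{E}_{s_0 \sim \mu_0}\!\left[ F\!\left(s_0, \pi_z(s_0), z\right) \right]^{\!\top} B(s').
\end{align*}

Under these assumptions, the last line is what would allow the stationary distribution correction to be computed on the fly for any task $z$ without further training.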