Generalizing policies to unseen scenarios remains a critical challenge in visual reinforcement learning, where agents often overfit to the specific visual observations of the training environment. In unseen environments, distracting pixels may lead agents to extract representations that contain task-irrelevant information. As a result, agents may deviate from the optimal behaviors learned during training, which hinders visual generalization. To address this issue, we propose Salience-Invariant Consistent Policy Learning (SCPL), an efficient framework for zero-shot generalization. Our approach introduces a novel value consistency module alongside a dynamics module to effectively capture task-relevant representations. The value consistency module, guided by saliency, ensures that the agent focuses on task-relevant pixels in both original and perturbed observations, while the dynamics module uses augmented data to help the encoder capture dynamics- and reward-relevant representations. Additionally, our theoretical analysis highlights the importance of policy consistency for generalization. To enforce this, we introduce a policy consistency module with a KL divergence constraint that keeps policies consistent across original and perturbed observations. Extensive experiments on the DMC-GB, Robotic Manipulation, and CARLA benchmarks demonstrate that SCPL significantly outperforms state-of-the-art methods in generalization performance. Notably, SCPL achieves average performance improvements of 14\%, 39\%, and 69\% on the challenging DMC video hard setting, the Robotic hard setting, and the CARLA benchmark, respectively. Project Page: https://sites.google.com/view/scpl-rl.
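For concreteness, the policy consistency constraint can be read as a KL divergence term between the action distributions induced by an original observation and its perturbed counterpart. The following is a minimal sketch under our own notation (the symbols $\pi_\theta$, $o$, $\hat{o}$, and $\mathcal{D}$ are ours, not drawn from the paper):
\begin{equation*}
\mathcal{L}_{\mathrm{pc}}(\theta) \;=\; \mathbb{E}_{o \sim \mathcal{D}}\!\left[\, D_{\mathrm{KL}}\!\big(\pi_\theta(\cdot \mid o)\,\big\|\,\pi_\theta(\cdot \mid \hat{o})\big) \right],
\end{equation*}
where $\pi_\theta(\cdot \mid o)$ is the policy's action distribution, $\hat{o}$ denotes an augmented (perturbed) version of observation $o$, and $\mathcal{D}$ is the training data distribution. Minimizing $\mathcal{L}_{\mathrm{pc}}$ alongside the standard RL objective encourages the agent to act identically under visual perturbations.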