In offline reinforcement learning (RL), the out-of-distribution (OOD) action issue has received much attention, but we argue that an OOD state issue also impairs performance and has been underexplored. This issue arises when the agent encounters states outside the offline dataset during the test phase, leading to uncontrolled behavior and performance degradation. To address it, we propose SCAS, a simple yet effective approach that unifies OOD state correction and OOD action suppression in offline RL. Technically, SCAS achieves value-aware OOD state correction, steering the agent from OOD states back to high-value in-distribution states. Theoretical and empirical results show that SCAS also exhibits the effect of suppressing OOD actions. On standard offline RL benchmarks, SCAS achieves excellent performance without additional hyperparameter tuning. Moreover, benefiting from its OOD state correction feature, SCAS demonstrates enhanced robustness against environmental perturbations.
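To make the notion of value-aware OOD state correction concrete, here is a minimal toy sketch of the general idea, not the SCAS algorithm itself: from a state outside the data support, choose the action whose successor state is both close to the dataset (low OOD score) and high-value. All components (the 1-D dynamics, the nearest-neighbor OOD score, the linear value function, and the trade-off weight `beta`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy illustration of value-aware OOD state correction on a 1-D state space.
# Assumptions (not from the abstract): a known deterministic dynamics model,
# nearest-neighbor distance as the OOD score, and a hypothetical value function.

# Offline dataset: states observed on [0, 1], with higher value to the right.
dataset_states = np.linspace(0.0, 1.0, 21)

def value(s):
    """Hypothetical value estimate: larger s is better (illustrative only)."""
    return s

def ood_score(s):
    """Distance to the nearest dataset state; ~0 inside the data support."""
    return np.min(np.abs(dataset_states - np.atleast_1d(s)[:, None]), axis=1)

def dynamics(s, a):
    """Simple deterministic transition: the action shifts the state."""
    return s + a

def corrective_action(s, candidate_actions, beta=0.5):
    """Pick the action whose successor is both in-distribution and high-value.

    Lower OOD score is better; higher value is better; beta trades them off.
    """
    next_states = dynamics(s, candidate_actions)
    objective = ood_score(next_states) - beta * value(next_states)
    return candidate_actions[np.argmin(objective)]

# The agent finds itself at an OOD state to the left of the data support.
s_ood = -0.3
actions = np.linspace(-0.2, 0.2, 41)
a = corrective_action(s_ood, actions)
s_next = dynamics(s_ood, a)
print(a, s_next)  # the chosen action moves the state toward the dataset
```

In this sketch the selected action is the largest rightward shift, moving the agent back toward the in-distribution region; the `beta` term breaks ties among equally in-distribution successors in favor of higher-value ones, mirroring the "value-aware" aspect described in the abstract.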