Offline reinforcement learning (RL) has attracted much attention due to its ability to learn from static offline datasets, eliminating the need to interact with the environment. Nevertheless, the success of offline RL relies heavily on offline transitions annotated with reward labels. In practice, the reward function often has to be hand-crafted, which can be difficult, labor-intensive, or inefficient. To tackle this challenge, we focus on the offline imitation learning (IL) setting and aim to recover a reward function from expert data and unlabeled data. To that end, we propose a simple yet effective search-based offline IL method, tagged SEABO. SEABO assigns a larger reward to a transition that lies close to its nearest neighbor in the expert demonstration, and a smaller reward otherwise, all in an unsupervised manner. Experimental results on a variety of D4RL datasets indicate that, given only a single expert trajectory, SEABO achieves performance competitive with offline RL algorithms trained on ground-truth rewards, and outperforms prior reward-learning and offline IL methods across many tasks. Moreover, we demonstrate that SEABO also works well when the expert demonstrations contain only observations. Our code is publicly available at https://github.com/dmksjfl/SEABO.
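The core idea can be sketched in a few lines: build a nearest-neighbor index over the expert transitions, query each unlabeled transition against it, and squash the resulting distance into a reward. The snippet below is a minimal illustration of this search-based labeling, assuming a KD-tree (scipy's cKDTree) over concatenated (s, s') pairs and an exponential squashing function with illustrative coefficients `alpha` and `beta`; it is a sketch of the idea under these assumptions, not the paper's exact implementation.

```python
# Minimal sketch of search-based reward labeling (SEABO-style).
# Assumptions: the query key is the concatenated (s, s') pair, and the
# reward is an exponential function of the nearest-neighbor distance;
# alpha and beta are hypothetical hyperparameters.
import numpy as np
from scipy.spatial import cKDTree

def label_rewards(expert_obs, expert_next_obs, obs, next_obs,
                  alpha=1.0, beta=0.5):
    """Assign each unlabeled transition a reward that decays with its
    distance to the nearest expert transition (fully unsupervised)."""
    # Build the search index over expert (s, s') pairs; actions could be
    # appended to the key when the demonstrations contain them.
    expert_keys = np.concatenate([expert_obs, expert_next_obs], axis=1)
    tree = cKDTree(expert_keys)

    # Query the nearest expert neighbor for every unlabeled transition.
    query_keys = np.concatenate([obs, next_obs], axis=1)
    dists, _ = tree.query(query_keys, k=1)

    # Closer to the expert -> larger reward; farther -> smaller reward.
    return alpha * np.exp(-beta * dists)
```

The labeled dataset can then be handed to any off-the-shelf offline RL algorithm; because only observations are needed to form the query key in this sketch, it also covers the observation-only setting mentioned above.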