Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by assuming access to an offline dataset of environment interactions labeled with rewards. In contrast, preference-based RL does not assume access to the reward function and instead learns it from preferences, but typically requires online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm that leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and preference-based RL literature, our algorithm adopts a pessimistic approach for out-of-distribution data and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees on the sample complexity of our approach, which depend on how well the offline data covers the optimal policy. Finally, we demonstrate the empirical performance of Sim-OPRL in several environments.
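To make the high-level loop concrete, the following is a minimal toy sketch, not the paper's implementation: a hypothetical tabular MDP, a transition model and visitation counts estimated from an offline dataset, a pessimistic planner for deployment, and an optimistic planner whose simulated rollouts are compared against pessimistic ones to query preferences and fit a Bradley-Terry reward model. All names and constants (n_states, horizon, pessimism coefficient, query budget, etc.) are illustrative assumptions.

```python
# Toy sketch of an offline preference-based RL loop with simulated rollouts.
# Assumptions: small tabular MDP, Bradley-Terry preference annotator simulated
# from a hidden true reward, count-based pessimism/optimism bonuses.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 5, 2, 10
true_reward = rng.uniform(size=(n_states, n_actions))   # hidden; only used to simulate preferences
true_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Offline dataset: transitions from a random behaviour policy (no reward labels).
dataset = []
for _ in range(500):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.choice(n_states, p=true_P[s, a])
    dataset.append((s, a, s_next))

# Learn a transition model and state-action visitation counts from the data.
counts = np.zeros((n_states, n_actions, n_states))
for s, a, s_next in dataset:
    counts[s, a, s_next] += 1
n_sa = counts.sum(axis=-1)
P_hat = (counts + 1e-3) / (counts + 1e-3).sum(axis=-1, keepdims=True)

def plan(reward, bonus):
    """Finite-horizon value iteration in the learned model with an additive bonus
    (negative bonus = pessimism for poorly covered state-action pairs)."""
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = reward + bonus + P_hat @ V
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def rollout(policy, s0=0):
    """Simulate a trajectory in the learned model (not the real environment)."""
    traj, s = [], s0
    for t in range(horizon):
        a = policy[t, s]
        traj.append((s, a))
        s = rng.choice(n_states, p=P_hat[s, a])
    return traj

def traj_return(traj, reward):
    return sum(reward[s, a] for s, a in traj)

# Preference-based reward learning from simulated rollouts.
pessimism = -0.5 / np.sqrt(np.maximum(n_sa, 1))   # penalise out-of-distribution (s, a)
optimism = -pessimism                             # bonus to surface informative, uncertain rollouts
reward_hat = np.zeros((n_states, n_actions))

for _ in range(30):                               # preference-query budget
    traj_a = rollout(plan(reward_hat, optimism))
    traj_b = rollout(plan(reward_hat, pessimism))
    # Simulated annotator: Bradley-Terry preference under the hidden true reward.
    p_a = 1 / (1 + np.exp(traj_return(traj_b, true_reward) - traj_return(traj_a, true_reward)))
    label = float(rng.random() < p_a)
    # One gradient step on the Bradley-Terry log-likelihood of the observed preference.
    diff = np.zeros_like(reward_hat)
    for s, a in traj_a:
        diff[s, a] += 1
    for s, a in traj_b:
        diff[s, a] -= 1
    p_hat = 1 / (1 + np.exp(-(reward_hat * diff).sum()))
    reward_hat += 0.5 * (label - p_hat) * diff

final_policy = plan(reward_hat, pessimism)        # deploy the pessimistic policy
print("Return of final policy under the hidden reward:",
      traj_return(rollout(final_policy), true_reward))
```

The sketch only illustrates the interplay of pessimistic planning for deployment and optimistic rollouts for preference acquisition; the paper's actual algorithm, models, and guarantees are given in the sections that follow.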