Offline reinforcement learning (RL) learns effective policies from a static target dataset. Despite the strong performance of state-of-the-art offline RL algorithms, that performance depends on the size of the target dataset and degrades when only limited target samples are available, as is often the case in real-world applications. To address this issue, domain adaptation can help by leveraging auxiliary samples from related source datasets (such as simulators). However, how to optimally trade off the limited target dataset against the large-but-biased source dataset while ensuring provable theoretical guarantees remains an open challenge. To the best of our knowledge, this paper proposes the first framework that theoretically analyzes the impact of the weights assigned to each dataset on the performance of offline RL. In particular, we establish performance bounds and show the existence of an optimal weight, which can be computed in closed form under simplifying assumptions. We also provide algorithmic guarantees in the form of convergence to a neighborhood of the optimum. Notably, these results depend on the quality of the source dataset and the number of samples in the target dataset. Our empirical results on the well-known Procgen and MuJoCo benchmarks substantiate our theoretical contributions.
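The trade-off described above can be illustrated with a toy analogy that is not the paper's actual setting: estimating a scalar mean from a small unbiased target sample and a large biased source sample. For this simplified mean-estimation problem, the mean-squared error of a convex combination of the two empirical means has a standard closed-form minimizer, which mirrors the abstract's claims that an optimal weight exists and depends on the source bias and the target sample count. All names and quantities below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Toy analogy (NOT the paper's setting): estimate a scalar mean mu from
#   - a small unbiased target sample (n_target points, variance sigma2)
#   - a large biased source sample (n_source points, bias b, variance sigma2)
# The weighted estimator  m(lam) = lam * mean(target) + (1 - lam) * mean(source)
# has  MSE(lam) = lam^2 * sigma2/n_t + (1 - lam)^2 * (b^2 + sigma2/n_s),
# which is minimized in closed form at
#   lam* = (b^2 + sigma2/n_s) / (sigma2/n_t + b^2 + sigma2/n_s).

def optimal_weight(sigma2, n_target, n_source, bias):
    """Closed-form weight minimizing the toy estimator's MSE."""
    var_t = sigma2 / n_target   # variance of the target empirical mean
    var_s = sigma2 / n_source   # variance of the source empirical mean
    return (bias**2 + var_s) / (var_t + bias**2 + var_s)

rng = np.random.default_rng(0)
sigma2, n_t, n_s, b = 1.0, 20, 2000, 0.3
lam = optimal_weight(sigma2, n_t, n_s, b)

target = rng.normal(0.0, np.sqrt(sigma2), n_t)        # unbiased but scarce
source = b + rng.normal(0.0, np.sqrt(sigma2), n_s)    # plentiful but biased

est = lam * target.mean() + (1 - lam) * source.mean()
print(f"lam* = {lam:.3f}, weighted estimate = {est:.3f} (true mean is 0)")
```

Consistent with the abstract, the weight shifts toward the target dataset as its sample count grows (the target variance term shrinks) and toward the source dataset as the source bias shrinks.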