Model-based reinforcement learning (RL), which learns an environment model from an offline dataset and generates additional out-of-distribution model data, has become an effective approach to the distribution-shift problem in offline RL. Owing to the gap between the learned and actual environments, conservatism must be incorporated into the algorithm to balance accurate offline data against imprecise model data. The conservatism of current algorithms mostly relies on model uncertainty estimation. However, uncertainty estimation is unreliable and leads to poor performance in certain scenarios, and previous methods ignore differences among model data, which results in excessive conservatism. Therefore, this paper proposes a milDly cOnservative Model-bAsed offlINe RL algorithm (DOMAIN) that addresses these issues without estimating model uncertainty. DOMAIN introduces an adaptive sampling distribution over model samples, which can adaptively adjust the penalty on model data. We theoretically demonstrate that the Q value learned by DOMAIN outside the data-supported region is a lower bound of the true Q value, that DOMAIN is less conservative than previous model-based offline RL algorithms, and that it carries a safe policy improvement guarantee. Extensive experimental results show that DOMAIN outperforms prior RL algorithms on the D4RL benchmark and achieves better performance than other RL algorithms on tasks that require generalization.