Offline reinforcement learning (RL) has attracted significant interest because it offers a safe and easily scalable training paradigm. Training under this paradigm, however, faces a central challenge: extrapolation error arising from out-of-distribution (OOD) data. Existing methods address this issue by penalizing OOD Q-values or by constraining the learned policy to stay close to the behavior policy, but they often suffer from overly conservative use of OOD data, imprecise characterization of OOD data, and substantial computational overhead. To address these limitations, this paper introduces an Uncertainty-Aware Rank-One Multi-Input Multi-Output (MIMO) Q-network framework that exploits the potential of OOD data while keeping the learning process efficient. Specifically, the framework quantifies data uncertainty and incorporates it into the training losses, so as to learn a policy that maximizes the lower confidence bound of the corresponding Q-function. Furthermore, a rank-one MIMO architecture is introduced to model the uncertainty-aware Q-function, offering the same uncertainty-quantification ability as an ensemble of networks at a cost close to that of a single network. The framework thus balances accuracy, speed, and memory efficiency, yielding improved overall performance. Extensive experiments on the D4RL benchmark show that the framework attains state-of-the-art performance while remaining computationally efficient. By incorporating uncertainty quantification, our framework offers a promising avenue for alleviating extrapolation error and improving the efficiency of offline RL.
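To make the training objective concrete, the following is a minimal formalization of the lower-confidence-bound criterion stated above; the notation ($\bar{Q}$ and $\sigma_Q$ for the mean and standard deviation of the Q-estimates, and the penalty weight $\beta$) is ours for illustration and is not taken from the abstract:
\[
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot \mid s)}\big[\bar{Q}(s,a) - \beta\,\sigma_Q(s,a)\big],
\]
so that actions with high epistemic uncertainty (typically OOD actions) are penalized in proportion to the disagreement among the Q-estimates, rather than being ruled out entirely.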
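As a rough illustration of why a rank-one MIMO Q-network can match an ensemble's uncertainty estimates at near single-network cost, here is a minimal PyTorch sketch using rank-one (BatchEnsemble-style) factors over shared weights; all names (`RankOneLinear`, `RankOneMimoQNet`, `num_members`) and design details are our own assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class RankOneLinear(nn.Module):
    """Linear layer shared by M ensemble members.

    Member m perturbs the shared weight W elementwise with the rank-one
    factor s_m r_m^T, so each extra member costs O(in + out) parameters
    instead of a full O(in * out) weight copy.
    """
    def __init__(self, in_dim, out_dim, num_members):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)
        # Rank-one factors, initialized near 1 so members start close to the shared weights.
        self.r = nn.Parameter(torch.ones(num_members, in_dim) + 0.1 * torch.randn(num_members, in_dim))
        self.s = nn.Parameter(torch.ones(num_members, out_dim) + 0.1 * torch.randn(num_members, out_dim))

    def forward(self, x):
        # x: (M, batch, in_dim), one slice per ensemble member.
        # (W ∘ s_m r_m^T) x == s_m ∘ (W (r_m ∘ x)), so per-member weights are never materialized.
        h = self.shared(x * self.r.unsqueeze(1))
        return h * self.s.unsqueeze(1)

class RankOneMimoQNet(nn.Module):
    """M Q-heads sharing one backbone; disagreement across heads estimates epistemic uncertainty."""
    def __init__(self, state_dim, action_dim, hidden=256, num_members=5):
        super().__init__()
        self.num_members = num_members
        self.l1 = RankOneLinear(state_dim + action_dim, hidden, num_members)
        self.l2 = RankOneLinear(hidden, hidden, num_members)
        self.head = RankOneLinear(hidden, 1, num_members)

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)               # (batch, state_dim + action_dim)
        x = x.unsqueeze(0).expand(self.num_members, -1, -1)  # replicate input for the M members
        h = torch.relu(self.l1(x))
        h = torch.relu(self.l2(h))
        q = self.head(h).squeeze(-1)                         # (M, batch) per-member Q-estimates
        return q.mean(0), q.std(0)                           # mean and epistemic std per sample
```

Each member adds only the vectors $r_m$ and $s_m$ per layer, so memory and compute grow by roughly the cost of two elementwise products rather than by a full network copy per member, which is the sense in which the architecture approaches single-network cost.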