Offline model-based reinforcement learning (MBRL) enhances data efficiency by utilizing pre-collected datasets to learn models and policies, especially in scenarios where exploration is costly or infeasible. Nevertheless, the objective mismatch between model and policy learning often leads to inferior performance despite accurate model predictions. This paper first identifies the underlying confounders present in offline data as the primary source of this mismatch in MBRL. Subsequently, we introduce \textbf{B}ilin\textbf{E}ar \textbf{CAUS}al r\textbf{E}presentation~(BECAUSE), an algorithm that captures causal representations of both states and actions to reduce the influence of distribution shift, thereby mitigating the objective mismatch problem. Comprehensive evaluations on 18 tasks that vary in data quality and environment context demonstrate the superior performance of BECAUSE over existing offline RL algorithms. We further show the generalizability and robustness of BECAUSE under fewer samples or larger numbers of confounders. Additionally, we provide a theoretical analysis of BECAUSE, establishing its error bound and sample efficiency when integrating causal representation into offline MBRL.