Offline reinforcement learning is important in many settings where observational data is available but new policies cannot be deployed online due to safety, cost, or other concerns. Many recent advances in causal inference and machine learning target estimation of causal contrast functions such as the conditional average treatment effect (CATE), which is sufficient for optimizing decisions and can adapt to potentially smoother structure. We develop a dynamic generalization of the R-learner (Nie and Wager, 2021; Lewis and Syrgkanis, 2021) for estimating and optimizing the difference of $Q^\pi$-functions, $Q^\pi(s,1)-Q^\pi(s,0)$ (which can also be used to optimize over multiple-valued actions). We leverage orthogonal estimation to improve convergence rates in the presence of slower nuisance estimation rates and prove consistency of policy optimization under a margin condition. The method can use black-box nuisance estimators of the $Q$-function and behavior policy to target estimation of a more structured $Q$-function contrast.
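For orientation, recall the static R-learner objective (Nie and Wager, 2021) that this work generalizes to the dynamic setting; the display below is standard background rather than the paper's dynamic estimator, and the notation ($\hat m$, $\hat e$ for cross-fitted nuisance estimates, $\Lambda_n$ for a regularizer) is ours. With data $(X_i, A_i, Y_i)$, $m(x)=\mathbb{E}[Y \mid X=x]$, and propensity $e(x)=\mathbb{P}(A=1 \mid X=x)$, the R-learner estimates the CATE $\tau$ by solving
$$
\hat\tau \;=\; \arg\min_{\tau} \; \frac{1}{n}\sum_{i=1}^{n}\Big( \big(Y_i - \hat m(X_i)\big) - \big(A_i - \hat e(X_i)\big)\,\tau(X_i) \Big)^2 \;+\; \Lambda_n(\tau).
$$
The dynamic generalization developed in this paper targets the contrast $\tau^\pi(s)=Q^\pi(s,1)-Q^\pi(s,0)$ analogously, with the $Q$-function and behavior policy playing the role of the nuisance components $\hat m$ and $\hat e$.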