Safe policy improvement (SPI) offers theoretical control over policy updates, yet existing guarantees largely concern offline, tabular reinforcement learning (RL). We study SPI in the general online setting, where it is combined with world-model and representation learning. We develop a theoretical framework showing that restricting policy updates to a well-defined neighborhood of the current policy ensures monotonic improvement and convergence. The analysis links transition- and reward-prediction losses to representation quality, yielding online, "deep" analogues of classical SPI theorems from the offline RL literature. Building on these results, we introduce DeepSPI, a principled on-policy algorithm that couples local transition and reward losses with regularised policy updates. On the ALE-57 benchmark, DeepSPI matches or exceeds strong baselines, including PPO and DeepMDPs, while retaining its theoretical guarantees.
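For context, the neighborhood restriction described above mirrors classical policy-improvement lower bounds. A representative bound of this shape, quoted from the trust-region/SPI literature (e.g., Kakade and Langford, 2002; Achiam et al., 2017) rather than from the paper's own theorems, is:
\[
J(\pi') - J(\pi) \;\geq\; \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^{\pi},\,a\sim\pi'}\!\big[A^{\pi}(s,a)\big]
\;-\; \frac{2\gamma\,\epsilon^{\pi'}}{(1-\gamma)^{2}}\;\mathbb{E}_{s\sim d^{\pi}}\!\big[D_{\mathrm{TV}}\big(\pi'(\cdot\mid s)\,\big\|\,\pi(\cdot\mid s)\big)\big],
\qquad \epsilon^{\pi'} := \max_{s}\,\big|\mathbb{E}_{a\sim\pi'}\!\left[A^{\pi}(s,a)\right]\big|,
\]
so keeping \(\pi'\) in a small total-variation neighborhood of \(\pi\) shrinks the penalty term, and improvement of the surrogate objective carries over to improvement of the true return \(J\).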
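To make the coupling of world-model losses and regularised policy updates concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a clipped, KL-regularised policy term (the neighborhood constraint) plus latent transition- and reward-prediction losses. All names, loss weights, and the choice of a PPO-style clipped surrogate are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def deepspi_style_loss(logits_new, logits_old, actions, advantages,
                       z_next_pred, z_next_target, r_pred, r_target,
                       clip_eps=0.2, beta_kl=1.0, c_trans=1.0, c_rew=1.0):
    """Hypothetical DeepSPI-style objective: regularised policy update
    plus local transition and reward prediction losses (illustrative only)."""
    dist_new = torch.distributions.Categorical(logits=logits_new)
    dist_old = torch.distributions.Categorical(logits=logits_old.detach())
    ratio = torch.exp(dist_new.log_prob(actions) - dist_old.log_prob(actions))
    # Clipped surrogate: keeps the new policy in a neighborhood of the old one.
    surrogate = torch.minimum(
        ratio * advantages,
        torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages,
    )
    # Explicit KL regulariser: a second, differentiable neighborhood penalty.
    kl = torch.distributions.kl_divergence(dist_old, dist_new)
    # World-model terms: latent transition and reward prediction losses,
    # which the theory ties to representation quality.
    trans_loss = F.mse_loss(z_next_pred, z_next_target.detach())
    rew_loss = F.mse_loss(r_pred, r_target)
    return (-surrogate.mean() + beta_kl * kl.mean()
            + c_trans * trans_loss + c_rew * rew_loss)
```

Given batched rollout tensors, a single `loss.backward()` on this objective would drive both the policy (through the surrogate and KL terms) and the shared representation (through the prediction terms), which is the coupling the abstract describes.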