Robust reinforcement learning agents operating on high-dimensional observations must be able to identify relevant state features amidst many exogenous distractors. A representation that captures controllability identifies these state elements by determining what affects agent control. While methods such as inverse dynamics and mutual information capture controllability over a limited number of timesteps, capturing long-horizon elements remains a challenging problem. Myopic controllability can capture the moment right before an agent crashes into a wall, but not the control-relevance of the wall while the agent is still some distance away. To address this, we introduce action-bisimulation encoding, a method that extends single-step controllability with a recursive invariance constraint inspired by the bisimulation invariance pseudometric. In doing so, action-bisimulation learns a multi-step controllability metric that smoothly discounts control-relevant state features by their distance from the agent. We demonstrate that action-bisimulation pretraining on reward-free, uniformly random data improves sample efficiency in several environments, including the photorealistic 3D simulation domain Habitat. Additionally, we provide theoretical analysis and qualitative results illustrating the information captured by action-bisimulation.
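As a rough illustration of the recursive invariance constraint, one plausible formalization (our notation; the symbols $d_{\mathrm{ss}}$, $c$, and $P$ are assumptions, not taken from the abstract) follows the standard bisimulation-metric recursion, with the reward-difference term replaced by a single-step controllability distance:
\[
d(s_1, s_2) \;=\; (1 - c)\, d_{\mathrm{ss}}(s_1, s_2) \;+\; c\, \mathcal{W}_1\!\big(P(\cdot \mid s_1),\, P(\cdot \mid s_2);\, d\big),
\]
where $d_{\mathrm{ss}}$ measures how differently two states respond to a single action (e.g., via an inverse-dynamics encoder), $c \in [0, 1)$ sets how quickly long-horizon differences are discounted, and $\mathcal{W}_1$ is the 1-Wasserstein distance between next-state distributions under the metric $d$ itself. The recursion is what lets control-relevance propagate backward from distant features, such as a wall several steps away, with geometrically decaying weight.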