One of the remaining challenges in reinforcement learning is to develop agents that can generalise to novel scenarios they might encounter once deployed. This challenge is often framed in a multi-task setting, where agents train on a fixed set of tasks and must generalise to new ones. Recent work has shown that, in this setting, increased exploration during training can improve the generalisation performance of the agent. This makes sense when the states encountered during testing can actually be explored during training. In this paper, we provide intuition for why exploration can also benefit generalisation to states that cannot be explicitly encountered during training. Additionally, we propose Explore-Go, a novel method that exploits this intuition by increasing the number of states on which the agent trains. Explore-Go effectively increases the starting state distribution of the agent and, as a result, can be used in conjunction with most existing on-policy or off-policy reinforcement learning algorithms. We show empirically that our method can increase generalisation performance in an illustrative environment and on the Procgen benchmark.
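To make the mechanism concrete, the following is a minimal sketch of the core idea as a Gymnasium-style environment wrapper, not the paper's implementation: it uses a uniform-random policy as a stand-in for the pure exploration phase, discards the exploration experience, and the names `ExploreGoWrapper` and `max_explore_steps` are illustrative assumptions.

```python
# Minimal sketch of the Explore-Go idea: on every reset, take a random
# number of exploratory steps before handing control to the agent,
# effectively widening the distribution of states episodes start from.
# Assumptions: gymnasium API; uniform-random actions stand in for a
# dedicated pure-exploration policy.
import gymnasium as gym
import numpy as np


class ExploreGoWrapper(gym.Wrapper):
    def __init__(self, env, max_explore_steps=50):
        super().__init__(env)
        self.max_explore_steps = max_explore_steps  # illustrative hyperparameter

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        # Sample how long the pure-exploration phase lasts this episode.
        n_steps = np.random.randint(0, self.max_explore_steps + 1)
        for _ in range(n_steps):
            action = self.env.action_space.sample()  # stand-in exploration policy
            obs, _reward, terminated, truncated, info = self.env.step(action)
            if terminated or truncated:
                # Episode ended mid-exploration; restart from scratch.
                obs, info = self.env.reset(**kwargs)
        return obs, info
```

Because the wrapper only changes where episodes begin, e.g. `ExploreGoWrapper(gym.make("CartPole-v1"))`, the underlying learning algorithm is untouched, which is what allows the idea to combine with most on-policy or off-policy learners; in the full method, the exploration experience could additionally be reused for off-policy updates.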