Online reinforcement learning (RL) improves policies through direct interaction with the environment but suffers from poor sample efficiency. In contrast, offline RL learns policies from large pre-collected datasets but often yields suboptimal results due to limited data coverage. Recent efforts integrate offline and online RL to harness the advantages of both approaches. However, effectively combining them remains challenging because of catastrophic forgetting, lack of robustness to data quality, and limited sample efficiency in data utilization. To address these challenges, we introduce A3RL, which incorporates a novel confidence-aware Active Advantage-Aligned (A3) sampling strategy that dynamically prioritizes data from both online and offline sources according to the policy's evolving needs, optimizing policy improvement. Moreover, we provide theoretical insights into the effectiveness of our active sampling strategy and conduct extensive experiments and ablation studies, demonstrating that our method outperforms competing online RL techniques that leverage offline data.
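To make the sampling idea concrete, the sketch below illustrates one plausible instantiation of confidence-aware, advantage-aligned prioritization over the union of an online and an offline buffer. The names `a3_sample`, `advantage_fn`, `confidence_fn`, and `temperature` are hypothetical placeholders rather than the paper's actual interface, and the softmax-over-weighted-advantages rule is an assumed illustration, not the authors' exact priority.

```python
import numpy as np

def a3_sample(online_buffer, offline_buffer, advantage_fn, confidence_fn,
              batch_size=256, temperature=1.0, rng=None):
    """Minimal sketch: draw a training batch with confidence-weighted,
    advantage-aligned priorities over both data sources (assumed form).

    advantage_fn(t) -- hypothetical estimator of A^pi(s, a) under the
                       current policy for transition t
    confidence_fn(t) -- hypothetical reliability score in [0, 1] that
                        down-weights transitions with uncertain estimates
    """
    rng = rng or np.random.default_rng()
    pool = list(online_buffer) + list(offline_buffer)
    adv = np.array([advantage_fn(t) for t in pool])
    conf = np.array([confidence_fn(t) for t in pool])
    # Softmax over confidence-scaled advantages: high-advantage,
    # high-confidence transitions are sampled more often, so the batch
    # distribution tracks the policy's current improvement direction.
    logits = conf * adv / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(pool), size=batch_size, p=probs)
    return [pool[i] for i in idx]
```

Because the priorities are recomputed with the current advantage estimates at each sampling step, the same offline transition can gain or lose weight as the policy evolves, which is the intuition behind "aligned with the policy's evolving needs."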