We study offline off-dynamics reinforcement learning (RL), which aims to utilize data from an easily accessible source domain to enhance policy learning in a target domain with limited data. Our approach centers on return-conditioned supervised learning (RCSL), particularly the decision transformer (DT), which predicts actions conditioned on desired return guidance and the complete trajectory history. Previous works tackle the dynamics-shift problem by augmenting the reward in source-domain trajectories to match the optimal trajectory distribution in the target domain. However, this strategy is not directly applicable to RCSL, owing to (1) the unique form of the RCSL policy class, which explicitly depends on the return, and (2) the absence of a straightforward representation of the optimal trajectory distribution. We propose the Return Augmented Decision Transformer (RADT) method, which augments returns in the source domain by aligning their distribution with that of the target domain. We provide a theoretical analysis demonstrating that the RCSL policy learned with RADT achieves the same level of suboptimality as would be obtained without a dynamics shift. We introduce two practical implementations, RADT-DARA and RADT-MV. Extensive experiments on D4RL datasets reveal that our methods generally outperform dynamic-programming-based methods in off-dynamics RL scenarios.
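The core idea of aligning the source-domain return distribution with the target-domain one can be illustrated with a minimal sketch. The paper's exact RADT procedure is not reproduced here; the snippet below shows one plausible alignment mechanism, empirical quantile matching, where the function name `align_returns` and its inputs are hypothetical placeholders for per-trajectory returns computed from each domain's offline dataset.

```python
import numpy as np

def align_returns(source_returns, target_returns):
    """Map each source-domain return to the target-domain return at the same
    empirical quantile (a hypothetical sketch of distribution alignment; the
    actual RADT augmentation may differ)."""
    source_returns = np.asarray(source_returns, dtype=float)
    target_returns = np.asarray(target_returns, dtype=float)
    # Empirical CDF position of each source return within the source dataset.
    ranks = np.argsort(np.argsort(source_returns))
    quantiles = (ranks + 0.5) / len(source_returns)
    # Invert the target distribution's empirical CDF at those quantiles,
    # yielding aligned returns to relabel source trajectories for RCSL.
    return np.quantile(target_returns, quantiles)
```

A source trajectory would then be relabeled with its aligned return before being fed to the DT, so the return-conditioned policy sees return targets on the target domain's scale. Quantile matching preserves the ordering of source returns while reshaping their marginal distribution.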