Learning efficient representations for decision-making policies is a challenge in imitation learning (IL). Current IL methods require expert demonstrations, which are expensive to collect; as a result, the learned policies often have underdeveloped world models. Self-supervised learning (SSL) offers an alternative, allowing models to learn from diverse, unlabeled data, including failures. However, SSL methods often operate in raw input space, making them inefficient. In this work, we propose ACT-JEPA, a novel architecture that integrates IL and SSL to enhance policy representations. We train a policy to predict (1) action sequences and (2) abstract observation sequences. The first objective uses action chunking to improve action prediction and reduce compounding errors. The second objective extends the idea of chunking to observations by predicting abstract observation sequences. We use a Joint-Embedding Predictive Architecture (JEPA) to predict in abstract representation space, allowing the model to filter out irrelevant details, improve efficiency, and develop a robust world model. Our experiments show that ACT-JEPA improves representation quality by learning temporal environment dynamics. Moreover, the model's ability to predict abstract observation sequences yields representations that generalize effectively to action sequence prediction. ACT-JEPA performs on par with established baselines across a range of decision-making tasks.
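The two training objectives described above can be sketched as a combined loss: an imitation term that regresses a chunk of future actions, and a JEPA-style term that predicts future observations in latent space rather than raw input space. This is a minimal, hypothetical illustration with linear stand-ins for the networks; the names, dimensions, and simplifications (e.g. reusing one encoder instead of an EMA target encoder with stop-gradient) are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, LATENT_DIM, ACT_DIM, CHUNK = 8, 4, 2, 3

# Linear maps standing in for the context encoder and the two prediction heads.
W_enc = rng.normal(size=(OBS_DIM, LATENT_DIM)) * 0.1            # encoder
W_act = rng.normal(size=(LATENT_DIM, CHUNK * ACT_DIM)) * 0.1    # action-chunk head
W_obs = rng.normal(size=(LATENT_DIM, CHUNK * LATENT_DIM)) * 0.1 # latent-observation head

def encode(obs):
    """Map an observation into abstract representation space."""
    return obs @ W_enc

def act_jepa_loss(obs, future_obs, future_actions):
    """Combined IL + SSL objective on one context observation."""
    z = encode(obs)
    # (1) Imitation objective: predict a chunk of future actions (action chunking).
    pred_actions = (z @ W_act).reshape(CHUNK, ACT_DIM)
    il_loss = np.mean((pred_actions - future_actions) ** 2)
    # (2) SSL objective: predict abstract (latent) future observations,
    # comparing in representation space so irrelevant detail is filtered out.
    targets = encode(future_obs)                        # (CHUNK, LATENT_DIM)
    pred_latents = (z @ W_obs).reshape(CHUNK, LATENT_DIM)
    ssl_loss = np.mean((pred_latents - targets) ** 2)
    return il_loss + ssl_loss

obs = rng.normal(size=OBS_DIM)
future_obs = rng.normal(size=(CHUNK, OBS_DIM))
future_actions = rng.normal(size=(CHUNK, ACT_DIM))
loss = act_jepa_loss(obs, future_obs, future_actions)
```

Predicting the targets in latent space is what distinguishes the JEPA term from reconstruction-based SSL: the second loss never touches raw observations directly, only their encodings.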