Lifelong imitation learning for manipulation tasks poses significant challenges due to the distribution shifts that occur at each incremental learning step. Existing methods often focus on unsupervised skill discovery to construct an ever-growing skill library, or on distillation from multiple policies. The former can lead to scalability issues as diverse manipulation tasks are continually introduced; both may fail to ensure a consistent latent space throughout the learning process, leading to catastrophic forgetting of previously learned skills. In this paper, we introduce M2Distill, a multi-modal distillation-based method for lifelong imitation learning that focuses on preserving a consistent latent space across vision, language, and action distributions throughout the learning process. By regulating the shifts in latent representations across modalities from the previous step to the current one, and by reducing discrepancies between the Gaussian Mixture Model (GMM) policies of consecutive learning steps, we ensure that the learned policy retains its ability to perform previously learned tasks while seamlessly integrating new skills. Extensive evaluations on the LIBERO lifelong imitation learning benchmark suites, including LIBERO-OBJECT, LIBERO-GOAL, and LIBERO-SPATIAL, demonstrate that our method consistently outperforms prior state-of-the-art methods across all evaluated metrics.
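The two regularization terms described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes frozen previous-step encoders and the current encoders each emit a feature vector per modality, and it approximates the GMM-policy discrepancy with a simple component-wise squared difference over mixture weights, means, and log-stds (the paper's exact distillation losses and weighting coefficients `lam_feat` / `lam_policy` are hypothetical placeholders here).

```python
import numpy as np

def feature_distill_loss(old_feats, new_feats):
    """L2 drift between frozen previous-step latents and current latents,
    summed over the vision, language, and action modalities."""
    return sum(np.mean((old_feats[m] - new_feats[m]) ** 2) for m in old_feats)

def gmm_discrepancy(old_gmm, new_gmm):
    """Component-wise squared difference between two diagonal-covariance
    GMM policy heads (a crude stand-in for the policy-distillation term)."""
    return (np.sum((old_gmm["weights"] - new_gmm["weights"]) ** 2)
            + np.sum((old_gmm["means"] - new_gmm["means"]) ** 2)
            + np.sum((old_gmm["log_stds"] - new_gmm["log_stds"]) ** 2))

def m2distill_regularizer(old_feats, new_feats, old_gmm, new_gmm,
                          lam_feat=1.0, lam_policy=1.0):
    """Total distillation penalty added to the imitation loss at each step."""
    return (lam_feat * feature_distill_loss(old_feats, new_feats)
            + lam_policy * gmm_discrepancy(old_gmm, new_gmm))
```

When the current model has not drifted, both terms are zero, so the regularizer only penalizes deviation from the previous step's representations and policy rather than constraining the new task loss directly.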