Real-world environments require robots to continuously acquire new skills while retaining previously learned abilities, all without the need for clearly defined task boundaries. Storing all past data to prevent forgetting is impractical due to storage and privacy concerns. To address this, we propose a method that efficiently restores a robot's proficiency in previously learned tasks over its lifespan. Using an Episodic Memory (EM), our approach enables experience replay during training and retrieval during testing for local fine-tuning, allowing rapid adaptation to previously encountered problems without explicit task identifiers. Additionally, we introduce a selective weighting mechanism that emphasizes the most challenging segments of retrieved demonstrations, focusing local adaptation where it is most needed. This framework offers a scalable solution for lifelong learning in dynamic, task-unaware environments, combining retrieval-based adaptation with selective weighting to enhance robot performance in open-ended scenarios.
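As a rough illustration of the mechanism described above (not the authors' implementation), the sketch below shows a minimal Episodic Memory that supports replay sampling during training, nearest-neighbour retrieval at test time without task identifiers, and a selective weighting over per-segment losses. The class and function names (`EpisodicMemory`, `selective_weights`) and the eviction policy are hypothetical assumptions introduced here for clarity.

```python
import numpy as np

class EpisodicMemory:
    """Hypothetical sketch of an Episodic Memory (EM): stores state
    embeddings with their demonstration segments, supports replay
    sampling during training and nearest-neighbour retrieval at test
    time for local fine-tuning (no explicit task identifier needed)."""

    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.keys = []      # state embeddings used as retrieval keys
        self.values = []    # associated demonstration segments
        self.rng = np.random.default_rng(seed)

    def store(self, embedding, demo_segment):
        # Bounded storage: evict the oldest entry when full
        # (an assumed policy; the paper does not specify one here).
        if len(self.keys) >= self.capacity:
            self.keys.pop(0)
            self.values.pop(0)
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(demo_segment)

    def replay_batch(self, batch_size):
        """Sample stored experiences uniformly for replay during training."""
        n = min(batch_size, len(self.values))
        idx = self.rng.choice(len(self.values), size=n, replace=False)
        return [self.values[i] for i in idx]

    def retrieve(self, query_embedding, k=1):
        """Return the k nearest stored demonstrations to the query state."""
        q = np.asarray(query_embedding, dtype=float)
        dists = [np.linalg.norm(q - key) for key in self.keys]
        order = np.argsort(dists)[:k]
        return [self.values[i] for i in order]

def selective_weights(segment_losses, temperature=1.0):
    """Hypothetical selective weighting: a softmax over per-segment
    losses, so the most challenging segments of a retrieved
    demonstration receive the largest weight during local fine-tuning."""
    losses = np.asarray(segment_losses, dtype=float) / temperature
    w = np.exp(losses - losses.max())   # subtract max for stability
    return w / w.sum()
```

For example, after storing a few demonstrations, `retrieve(current_state_embedding, k=1)` would fetch the closest past demonstration for local fine-tuning, with `selective_weights` concentrating the adaptation loss on its hardest segments.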