Generalization to novel object configurations and instances across diverse tasks and environments is a critical challenge in robotics. Keypoint-based representations have proven effective as a succinct way to capture essential object features and to establish a reference frame for action prediction, enabling data-efficient learning of robot skills. However, the need for manual design and additional human labels limits their scalability. In this paper, we propose KALM, a framework that leverages large pre-trained vision-language models (VLMs) to automatically generate task-relevant, cross-instance consistent keypoints. KALM distills robust and consistent keypoints across views and objects by generating proposals with VLMs and verifying them against a small set of robot demonstration data. From the generated keypoints, we train keypoint-conditioned policy models that predict actions in keypoint-centric frames, enabling robots to generalize effectively across varying object poses, camera views, and object instances with similar functional shapes. Our method demonstrates strong performance in the real world, adapting to different tasks and environments from only a handful of demonstrations while requiring no additional labels. Website: https://kalm-il.github.io/