Object rearrangement, a fundamental challenge in robotics, demands versatile strategies to handle diverse objects, configurations, and functional needs. To achieve this, a robot must learn functional rearrangement priors in order to specify precise goals that meet functional requirements. Previous methods typically learn such priors from either laborious human annotations or manually designed heuristics, which limits scalability and generalization. In this work, we propose a novel approach that leverages large models to distill functional rearrangement priors. Specifically, our approach collects diverse arrangement examples using both LLMs and VLMs and then distills these examples into a diffusion model. At test time, the learned diffusion model is conditioned on the initial configuration and guides the positioning of objects to meet functional requirements. In this manner, we create a handshaking point that combines the strengths of conditional generative models and large models. Extensive experiments across multiple domains, including real-world scenarios, demonstrate the effectiveness of our approach in generating compatible goals for object rearrangement tasks, significantly outperforming baseline methods.
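The test-time procedure described above can be sketched as a conditional denoising loop over object positions. This is a minimal toy illustration, not the paper's implementation: `denoise_step` is a hypothetical stand-in for the learned diffusion network (which would be trained on LLM/VLM-generated arrangement examples), and the conditioning on the initial configuration is reduced to a simple placeholder.

```python
import numpy as np

def denoise_step(noisy_goal, init_config, t):
    # Hypothetical stand-in for the learned denoiser. The real model would
    # predict noise (or a score) conditioned on the initial configuration;
    # here we merely contract toward the initial layout's centroid as a
    # placeholder for that learned guidance.
    anchor = init_config.mean(axis=0)
    return anchor + 0.9 * (noisy_goal - anchor)

def sample_goal(init_config, steps=50, seed=0):
    """Toy reverse-diffusion sampling: start from Gaussian noise over
    object positions and iteratively denoise, conditioned on init_config."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=init_config.shape)  # pure noise at t = T
    for t in range(steps, 0, -1):
        x = denoise_step(x, init_config, t)
        if t > 1:
            # small noise re-injection, as in standard diffusion samplers
            x = x + rng.normal(scale=0.01, size=x.shape)
    return x

# Three objects with (x, y) positions on a unit tabletop (illustrative data).
init = np.array([[0.2, 0.3], [0.8, 0.7], [0.5, 0.1]])
goal = sample_goal(init)  # goal positions, one row per object
```

In the actual approach, the denoiser's output would encode the distilled functional priors, so the sampled goal is both functionally valid and compatible with the objects present in the initial scene.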