An open problem in mobile manipulation is how to represent objects and scenes in a unified manner, so that robots can use the representation both for navigating in the environment and for manipulating objects. The latter requires capturing intricate geometry while understanding fine-grained semantics, whereas the former involves capturing the complexity inherent in an expansive physical scale. In this work, we present GeFF (Generalizable Feature Fields), a scene-level generalizable neural feature field that acts as a unified representation for both navigation and manipulation and performs in real time. To do so, we treat generative novel view synthesis as a pre-training task, and then align the resulting rich scene priors with natural language via CLIP feature distillation. We demonstrate the effectiveness of this approach by deploying GeFF on a quadrupedal robot equipped with a manipulator. We evaluate GeFF's ability to generalize to open-set objects, as well as its running time, when performing open-vocabulary mobile manipulation in dynamic scenes.