Understanding functionalities in 3D scenes involves interpreting natural language descriptions to locate functional interactive objects, such as handles and buttons, in a 3D environment. Functionality understanding is highly challenging, as it requires both world knowledge to interpret language and spatial perception to identify fine-grained objects. For example, given a task like 'turn on the ceiling light', an embodied AI agent must infer that it needs to locate the light switch, even though the switch is not explicitly mentioned in the task description. To date, no dedicated methods have been developed for this problem. In this paper, we introduce Fun3DU, the first approach designed for functionality understanding in 3D scenes. Fun3DU uses a language model to parse the task description through Chain-of-Thought reasoning in order to identify the object of interest. The identified object is segmented across multiple views of the captured scene using a vision-and-language model. The segmentation results from each view are lifted to 3D and aggregated into the point cloud using geometric information. Fun3DU is training-free, relying entirely on pre-trained models. We evaluate Fun3DU on SceneFun3D, the most recent and only dataset benchmarking this task, which comprises over 3,000 task descriptions across 230 scenes. Our method significantly outperforms state-of-the-art open-vocabulary 3D segmentation approaches. Project page: https://jcorsetti.github.io/fun3du
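The lifting step described above, back-projecting per-view 2D masks into 3D and aggregating them on the scene point cloud, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: the function names, the vote-based aggregation, and the fixed matching radius are all assumptions made for this sketch.

```python
import numpy as np

def lift_mask_to_3d(mask, depth, K, cam_to_world):
    """Back-project pixels of a 2D segmentation mask into world-space 3D points.

    mask: (H, W) boolean segmentation mask from one view.
    depth: (H, W) depth map in meters, aligned with the mask.
    K: (3, 3) camera intrinsics matrix.
    cam_to_world: (4, 4) camera-to-world extrinsics.
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel coords with valid depth
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]  # pinhole back-projection
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous
    return (pts_cam @ cam_to_world.T)[:, :3]

def aggregate_votes(point_cloud, lifted_points_per_view, radius=0.02):
    """Count, for each scene point, how many views lifted a mask point nearby.

    point_cloud: (N, 3) scene points; returns (N,) integer vote counts,
    which can be thresholded to obtain the final 3D segmentation mask.
    """
    votes = np.zeros(len(point_cloud), dtype=int)
    for pts in lifted_points_per_view:
        if len(pts) == 0:
            continue
        # Brute-force nearest-neighbour test; a KD-tree scales better.
        d = np.linalg.norm(point_cloud[:, None, :] - pts[None, :, :], axis=2)
        votes += (d.min(axis=1) <= radius).astype(int)
    return votes
```

Aggregating votes across views makes the final 3D mask robust to single-view segmentation errors, since spurious detections in one view are outvoted by consistent evidence from the others.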