3D visual grounding (3DVG) involves localizing entities in a 3D scene referred to by natural language text. Such models are useful for embodied AI and scene retrieval applications, which involve searching for objects or patterns using natural language descriptions. While recent works have focused on LLM-based scaling of 3DVG datasets, these datasets do not capture the full range of potential prompts that can be expressed in English. To ensure that we are scaling up and testing against a useful and representative set of prompts, we propose a framework for linguistically analyzing 3DVG prompts and introduce Visual Grounding with Diverse Language in 3D (ViGiL3D), a diagnostic dataset for evaluating visual grounding methods against a diverse set of language patterns. We evaluate existing open-vocabulary 3DVG methods and demonstrate that they are not yet proficient in understanding and identifying the targets of more challenging, out-of-distribution prompts, limiting their readiness for real-world applications.