This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and language. ShapeLLM is built upon an improved 3D encoder, ReCon++, which extends ReCon with multi-view image distillation for enhanced geometry understanding. Using ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and evaluated on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding. Project page: https://qizekun.github.io/shapellm/