Recent advances in autonomous driving, augmented reality, robotics, and embodied intelligence have created a pressing demand for 3D perception algorithms. However, current 3D perception methods, especially specialized small models, generalize poorly in open scenarios. Multimodal large language models (MLLMs), on the other hand, excel in general capability but underperform on 3D tasks for three reasons: weak perception of local 3D spatial structure, imprecise text-based regression of geometric values, and an inability to handle variations in camera focal length. To address these challenges, we propose three solutions: Spatial-Enhanced Local Feature Mining for better spatial feature extraction, 3D Query Token-Derived Info Decoding for precise geometric regression, and Geometry Projection-Based 3D Reasoning for handling focal-length variations. We apply parameter-efficient fine-tuning to a pre-trained MLLM and develop LLMI3D, a powerful 3D perception MLLM. In addition, we construct the IG3D dataset, which provides fine-grained descriptions and question-answer annotations. Extensive experiments demonstrate that LLMI3D achieves state-of-the-art performance, outperforming other methods by a large margin.
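To illustrate why focal-length variation matters for monocular 3D perception, the following is a minimal sketch of the standard pinhole camera model (not the paper's method; all function names are illustrative assumptions): the same object occupying the same number of pixels implies different metric depths under different focal lengths, so a model that ignores camera intrinsics cannot generalize across cameras.

```python
# Illustrative sketch of the pinhole camera model (assumed names, not the
# paper's implementation): why focal length must be handled explicitly.

def project_to_image(x, y, z, fx, fy, cx, cy):
    """Project a 3D camera-space point (x, y, z) onto the image plane
    using the pinhole model with focal lengths (fx, fy) in pixels and
    principal point (cx, cy)."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

def depth_from_height(h_pixels, H_meters, fy):
    """Invert the projection for a vertical extent: an object of metric
    height H spanning h pixels lies at depth z = fy * H / h.  The same
    pixel height therefore maps to different depths when fy changes."""
    return fy * H_meters / h_pixels

# Same object (1.5 m tall, spanning 100 px) seen through two cameras:
z_short_focal = depth_from_height(100, 1.5, 1000)  # 15.0 m away
z_long_focal = depth_from_height(100, 1.5, 2000)   # 30.0 m away
```

This doubling of inferred depth when the focal length doubles is the ambiguity that a projection-aware 3D reasoning step must resolve.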