Recent developments in Multimodal Large Language Models (MLLMs) have significantly improved Vision-Language (VL) reasoning in 2D domains. However, extending these capabilities to 3D scene understanding remains a major challenge. Existing 3D Multimodal Large Language Models (3D-MLLMs) often depend on 3D data inputs, which limits scalability and generalization. To address this limitation, we propose Vid-LLM, a video-based 3D-MLLM that directly processes video inputs without requiring external 3D data, making it practical for real-world deployment. In our method, geometric priors are used directly to improve scene perception. To compactly integrate these geometric cues into the MLLM, we design a Cross-Task Adapter (CTA) module that aligns the 3D geometric priors with the vision-language representations. To ensure geometric consistency and integrity, we introduce a Metric Depth Model that recovers real-scale geometry from the reconstruction outputs. Finally, the model is fine-tuned with a two-stage distillation optimization strategy, achieving fast convergence and stable training. Extensive experiments across diverse benchmarks verify the effectiveness of our method on 3D Question Answering, 3D Dense Captioning, and 3D Visual Grounding tasks, demonstrating its superior multi-task capabilities.