Most existing 3D referring expression segmentation (3DRES) methods rely on dense, high-quality point clouds, whereas real-world agents such as robots and mobile phones operate with only a few sparse RGB views under strict latency constraints. We introduce Multi-view 3D Referring Expression Segmentation (MV-3DRES), in which the model must recover scene structure and segment the referred object directly from sparse multi-view images. Traditional two-stage pipelines, which first reconstruct a point cloud and then perform segmentation, often yield low-quality geometry, produce coarse or degraded target regions, and run slowly. We propose the Multimodal Visual Geometry Grounded Transformer (MVGGT), an efficient end-to-end framework that integrates language information into sparse-view geometric reasoning through a dual-branch design. Training in this setting exposes a critical optimization barrier, termed Foreground Gradient Dilution (FGD), where sparse 3D signals lead to weak supervision. To resolve this, we introduce Per-view No-target Suppression Optimization (PVSO), which provides stronger and more balanced gradients across views, enabling stable and efficient learning. To support consistent evaluation, we build MVRefer, a benchmark that defines standardized settings and metrics for MV-3DRES. Experiments show that MVGGT establishes the first strong baseline for this task, achieving both high accuracy and fast inference while outperforming existing alternatives. Code and models are publicly available at https://mvggt.github.io.
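The abstract does not spell out the PVSO formulation, so the following is only a minimal PyTorch sketch of the general idea it describes: computing the mask loss per view, balancing foreground pixels within views that contain the target, and explicitly suppressing predictions in views where the target is absent. The function name `pvso_style_loss` and the `no_target_weight` parameter are illustrative assumptions, not the paper's actual API or loss.

```python
# Illustrative sketch only: an assumed per-view balanced loss with explicit
# suppression on no-target views, meant to convey how per-view treatment can
# counteract foreground gradient dilution. Not the paper's implementation.
import torch
import torch.nn.functional as F


def pvso_style_loss(pred_logits, gt_masks, no_target_weight=1.0):
    """
    pred_logits: (V, H, W) per-view mask logits for the referred object.
    gt_masks:    (V, H, W) binary ground-truth masks (all zeros for views
                 in which the referred object is not visible).
    Returns a scalar loss averaged over views, so views containing the target
    are not drowned out by the many background pixels of the other views.
    """
    per_view_losses = []
    for v in range(pred_logits.shape[0]):
        logits, target = pred_logits[v], gt_masks[v]
        if target.sum() > 0:
            # Foreground view: weight positives so the few target pixels
            # still receive a meaningful share of the gradient.
            pos_weight = (target.numel() - target.sum()) / target.sum()
            loss_v = F.binary_cross_entropy_with_logits(
                logits, target, pos_weight=pos_weight)
        else:
            # No-target view: explicitly suppress any predicted foreground.
            loss_v = no_target_weight * F.binary_cross_entropy_with_logits(
                logits, torch.zeros_like(logits))
        per_view_losses.append(loss_v)
    return torch.stack(per_view_losses).mean()


# Toy usage: 4 views, the referred object visible in only one of them.
pred = torch.randn(4, 32, 32, requires_grad=True)
gt = torch.zeros(4, 32, 32)
gt[0, 10:20, 10:20] = 1.0
loss = pvso_style_loss(pred, gt)
loss.backward()
```

In this sketch, averaging per-view losses (rather than pooling all pixels across views) keeps a foreground-containing view's gradient from being diluted by the background pixels of views that never see the target, which is the failure mode the FGD description points to.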