Video object segmentation methods like SAM2 achieve strong performance through memory-based architectures but struggle under large viewpoint changes due to their reliance on appearance features. Traditional 3D instance segmentation methods address viewpoint consistency but require camera poses, depth maps, and expensive preprocessing. We introduce 3AM, a training-time enhancement that integrates 3D-aware features from MUSt3R into SAM2. Our lightweight Feature Merger fuses multi-level MUSt3R features that encode implicit geometric correspondence. Combined with SAM2's appearance features, the model achieves geometry-consistent recognition grounded in both spatial position and visual similarity. We further propose a field-of-view aware sampling strategy that ensures sampled frames observe spatially consistent object regions, enabling reliable 3D correspondence learning. Critically, our method requires only RGB input at inference, with no camera poses or preprocessing. On challenging datasets with wide-baseline motion (ScanNet++, Replica), 3AM substantially outperforms SAM2 and its extensions, achieving 90.6% IoU and 71.7% Positive IoU on ScanNet++'s Selected Subset, improving over state-of-the-art VOS methods by +15.9 and +30.4 points, respectively. Project page: https://jayisaking.github.io/3AM-Page/
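The abstract does not detail the Feature Merger's internals; a minimal NumPy sketch of the general late-fusion idea it describes (concatenating multi-level 3D-aware features with an appearance feature and projecting to a shared dimension) might look like the following. The function name, shapes, and the random projection weights are illustrative placeholders, not the paper's actual implementation:

```python
import numpy as np

def merge_features(geo_feats, app_feat, proj):
    """Hypothetical late-fusion sketch: concatenate multi-level
    geometry-aware features with an appearance feature along the
    channel axis, then apply a learned linear projection."""
    # geo_feats: list of (num_tokens, d_geo) arrays, one per feature level
    # app_feat:  (num_tokens, d_app) appearance features
    # proj:      (sum_of_input_dims, d_out) learned projection (random here)
    fused = np.concatenate(geo_feats + [app_feat], axis=-1)
    return fused @ proj

# Toy example: two geometric levels of 8 channels plus 16 appearance channels,
# projected down to 16 fused channels.
rng = np.random.default_rng(0)
geo = [rng.normal(size=(2, 8)), rng.normal(size=(2, 8))]
app = rng.normal(size=(2, 16))
W = rng.normal(size=(8 + 8 + 16, 16))
out = merge_features(geo, app, W)
```

In practice such a merger would be a small trainable module; the sketch only shows the data flow of combining spatial (geometric) and visual (appearance) cues into one representation.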
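The field-of-view aware sampling strategy is likewise only named, not specified. One plausible reading is a greedy filter that keeps training frames whose visible object regions overlap sufficiently with an anchor frame, so that sampled frames share 3D correspondences. The sketch below assumes per-frame binary visibility masks in a common reference grid; the overlap threshold and greedy policy are assumptions for illustration:

```python
import numpy as np

def mask_overlap(a, b):
    """IoU-style overlap between two boolean visibility masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def fov_aware_sample(frame_masks, num_frames, min_overlap=0.3):
    """Hypothetical greedy sampler: keep the first frame as anchor,
    then accept later frames only if their observed object region
    overlaps the anchor's by at least `min_overlap`."""
    anchor = frame_masks[0]
    selected = [0]
    for i in range(1, len(frame_masks)):
        if len(selected) == num_frames:
            break
        if mask_overlap(anchor, frame_masks[i]) >= min_overlap:
            selected.append(i)
    return selected

# Toy example: frames 0 and 1 observe overlapping object regions,
# frame 2 observes a disjoint region and is rejected.
m0 = np.zeros((4, 4), bool); m0[0:2, 0:2] = True
m1 = np.zeros((4, 4), bool); m1[0:2, 1:3] = True
m2 = np.zeros((4, 4), bool); m2[2:4, 2:4] = True
picked = fov_aware_sample([m0, m1, m2], num_frames=3)
```

The intent this mirrors is stated in the abstract: restricting training pairs to frames with spatially consistent object coverage, so the 3D-aware features can learn reliable correspondence.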