Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from \textit{segment anything} to \textit{any segmentation}. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects given interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretive capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding. Code is available at https://github.com/wanghao9610/X-SAM.