Boosted by Multi-modal Large Language Models (MLLMs), text-guided universal segmentation models for the image and video domains have recently made rapid progress. However, these methods are often developed separately for specific domains, overlooking the similarities in task settings and solutions across the two areas. In this paper, we define the union of referring segmentation and reasoning segmentation at both the image and video levels as Instructed Visual Segmentation (IVS). Correspondingly, we propose InstructSeg, an end-to-end segmentation pipeline equipped with MLLMs for IVS. Specifically, we employ an object-aware video perceiver to extract temporal and object information from reference frames, facilitating comprehensive video understanding. Additionally, we introduce vision-guided multi-granularity text fusion to better integrate global and detailed text information under fine-grained visual guidance. By leveraging multi-task and end-to-end training, InstructSeg achieves superior performance across diverse image and video segmentation tasks, surpassing both segmentation specialists and MLLM-based methods with a single model. Our code is available at https://github.com/congvvc/InstructSeg.
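To make the two components concrete, below is a minimal, hypothetical PyTorch sketch of what an object-aware video perceiver and a vision-guided multi-granularity text fusion module could look like. All class names, shapes, and hyperparameters here (ObjectAwareVideoPerceiver, VisionGuidedTextFusion, dim, num_latents, etc.) are our own illustrative assumptions, not the released implementation; consult the repository linked above for the actual code.

```python
# Illustrative sketch only: module names and hyperparameters are assumptions,
# not the paper's released implementation.
import torch
import torch.nn as nn


class ObjectAwareVideoPerceiver(nn.Module):
    """Compress reference-frame features into a small set of latent tokens
    carrying temporal and object-level information (hypothetical sketch)."""

    def __init__(self, dim=256, num_latents=16, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, ref_feats):
        # ref_feats: (B, T*N, dim) -- patch tokens from T reference frames.
        q = self.latents.unsqueeze(0).expand(ref_feats.size(0), -1, -1)
        x, _ = self.cross_attn(q, ref_feats, ref_feats)  # latents attend to frames
        return x + self.ffn(x)                           # (B, num_latents, dim)


class VisionGuidedTextFusion(nn.Module):
    """Fuse sentence-level (global) and word-level (detailed) text embeddings
    under fine-grained visual guidance (hypothetical sketch)."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.txt_to_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, sent_txt, word_txt, vis_feats):
        # sent_txt: (B, 1, dim) global sentence embedding;
        # word_txt: (B, L, dim) token-level embeddings;
        # vis_feats: (B, P, dim) fine-grained visual features.
        txt = torch.cat([sent_txt, word_txt], dim=1)             # multi-granularity tokens
        guided, _ = self.txt_to_vis(txt, vis_feats, vis_feats)   # text attends to vision
        return self.gate(torch.cat([txt, guided], dim=-1))       # gated fusion, (B, 1+L, dim)
```

In this sketch, the perceiver-style latent bottleneck keeps the number of video tokens passed to the MLLM fixed regardless of clip length, while the gated fusion lets visually grounded cues modulate both the global and the word-level text tokens before segmentation decoding.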