The Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation capabilities with visual prompts, while text prompts remain underexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or LLMs) are suitable for adapting SAM to referring expression segmentation, and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text); it comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 achieves state-of-the-art performance on RefCOCO/+/g for referring expression segmentation, demonstrating the superiority of prompting SAM with early vision-language fusion. In addition, with only 1.32B parameters, EVF-SAM achieves remarkably higher performance while reducing parameters by nearly 82% compared to previous SAM-based methods built on large multimodal models.
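To make the described pipeline concrete, the following is a minimal PyTorch sketch of the idea: an early-fusion vision-language encoder jointly processes image patches and text tokens (in the spirit of BEIT-3, where fusion happens inside the transformer, unlike CLIP's separate encoders), and the fused feature is projected into a prompt embedding for SAM's mask decoder. All module names, dimensions, and shapes here are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the EVF-SAM pipeline; not the authors' actual code.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Stand-in for an early-fusion vision-language model (e.g., BEIT-3):
    image patches and text tokens are concatenated and fused inside one
    transformer, rather than encoded separately and fused late (as in CLIP)."""
    def __init__(self, dim=768, vocab_size=30522):  # assumed sizes
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.text_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image, text_ids):
        img_tok = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, D)
        txt_tok = self.text_embed(text_ids)                           # (B, T, D)
        fused = self.fusion(torch.cat([img_tok, txt_tok], dim=1))     # early fusion
        return fused[:, 0]  # take one fused token as the referring feature

class EVFSAMPromptHead(nn.Module):
    """Illustrative wiring: fused multimodal feature -> linear projection
    into SAM's prompt embedding space. In the full model this prompt would
    be fed to SAM's mask decoder together with the SAM image embedding."""
    def __init__(self, dim=768, sam_prompt_dim=256):
        super().__init__()
        self.encoder = MultimodalEncoder(dim)
        self.project = nn.Linear(dim, sam_prompt_dim)

    def forward(self, image, text_ids):
        return self.project(self.encoder(image, text_ids))  # (B, 256)

# Usage: a tokenized referring expression plus the image yields one prompt
# embedding per sample, shaped like a SAM sparse prompt token.
model = EVFSAMPromptHead()
img = torch.randn(1, 3, 224, 224)
txt = torch.randint(0, 30522, (1, 12))   # assumed tokenizer output
print(model(img, txt).shape)             # torch.Size([1, 256])
```

The design point this sketch highlights is the one the abstract argues for: because fusion happens before the prompt is formed, the referring feature already reflects image-conditioned language understanding, rather than a text-only embedding bolted onto SAM after the fact.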