The Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation capabilities with visual prompts, while text prompts remain underexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or LLMs) are suitable for adapting SAM to referring expression segmentation, and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text); it comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 obtains state-of-the-art performance on RefCOCO/+/g for referring expression segmentation, demonstrating the superiority of prompting SAM with early vision-language fusion. Moreover, with only 1.32B parameters, EVF-SAM achieves remarkably higher performance while using nearly 82% fewer parameters than previous SAM-based methods built on large multimodal models.