Interactive segmentation aims to segment the mask of a target object according to the user's interactive prompts. There are two mainstream strategies: early fusion and late fusion. Current specialist models adopt the early fusion strategy, encoding the combination of image and prompts to localize the prompted object, but the repeated, costly computation on the image results in high latency. Late fusion models extract image embeddings once and merge them with the prompts in later interactions. This strategy avoids redundant image feature extraction and significantly improves efficiency; a recent milestone is the Segment Anything Model (SAM). However, it limits the model's ability to extract detailed information from the prompted target zone. To address this issue, we propose SAM-REF, a two-stage refinement framework that fully integrates images and prompts by injecting a lightweight refiner into the late-fusion interaction loop, combining the accuracy of early fusion with the efficiency of late fusion. Extensive experiments show that SAM-REF outperforms the current state-of-the-art method on most segmentation-quality metrics without compromising efficiency.
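To make the late-fusion pipeline and the refiner's role concrete, the following is a minimal, hypothetical PyTorch sketch. All module names (ImageEncoder, PromptEncoder, MaskDecoder, LightweightRefiner) and their internals are illustrative placeholders, not the actual SAM or SAM-REF architecture; the point is only the control flow: the heavy image encoder runs once, each click triggers a cheap decode, and a small refiner re-reads the raw image so prompt-local detail lost by late fusion can be recovered.

```python
# Hypothetical sketch of late fusion with a lightweight second-stage refiner.
# Module names and layer choices are placeholders, not the SAM-REF API.
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Heavy backbone: run ONCE per image (the late-fusion amortized step)."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Conv2d(3, dim, kernel_size=16, stride=16)
    def forward(self, image):
        return self.conv(image)  # (B, dim, H/16, W/16)

class PromptEncoder(nn.Module):
    """Cheap encoder: run on every user click."""
    def __init__(self, dim=256):
        super().__init__()
        self.embed = nn.Linear(2, dim)  # (x, y) click -> embedding
    def forward(self, clicks):
        return self.embed(clicks)  # (B, N, dim)

class MaskDecoder(nn.Module):
    """Merges the cached image embedding with fresh prompt embeddings."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.head = nn.Linear(dim, 1)
    def forward(self, img_emb, prompt_emb):
        b, d, h, w = img_emb.shape
        tokens = img_emb.flatten(2).transpose(1, 2)           # (B, HW, dim)
        fused, _ = self.attn(tokens, prompt_emb, prompt_emb)  # cross-attend
        return self.head(fused).transpose(1, 2).view(b, 1, h, w)

class LightweightRefiner(nn.Module):
    """Second stage: re-reads the raw image together with the coarse mask,
    recovering fine detail around the prompted target at low cost."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, image, coarse_mask):
        up = nn.functional.interpolate(coarse_mask, size=image.shape[-2:])
        return self.net(torch.cat([image, up], dim=1))  # refined mask logits

# Interactive session: encode the image once, then loop over user clicks.
encoder, prompts, decoder, refiner = (
    ImageEncoder(), PromptEncoder(), MaskDecoder(), LightweightRefiner())
image = torch.randn(1, 3, 256, 256)
img_emb = encoder(image)                  # heavy step, amortized over clicks
for click in [torch.rand(1, 1, 2), torch.rand(1, 1, 2)]:
    coarse = decoder(img_emb, prompts(click))  # fast late-fusion decode
    refined = refiner(image, coarse)           # cheap image-aware refinement
```

Under this sketch's assumptions, the per-click cost is dominated by the small decoder and refiner rather than the backbone, which is how a refiner-in-the-loop design can approach early-fusion accuracy while keeping late-fusion latency.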