Integrating diverse visual prompts such as clicks, scribbles, and boxes into interactive image segmentation greatly eases user interaction and improves its efficiency. However, existing studies primarily encode only the positions or pixel regions of prompts, ignoring the contextual areas around them; this yields insufficient prompt feedback and hampers rapid performance gains. To tackle this problem, this paper proposes a simple yet effective Probabilistic Visual Prompt Unified Transformer (PVPUFormer) for interactive image segmentation, which allows users to flexibly input diverse visual prompts and combines probabilistic prompt encoding with feature post-processing to extract sufficient, robust prompt features. Specifically, we first propose a Probabilistic Prompt-unified Encoder (PPuE) that generates a unified one-dimensional vector by exploiting both prompt and non-prompt contextual information, offering richer feedback cues. On this basis, we further present a Prompt-to-Pixel Contrastive (P$^2$C) loss that aligns prompt and pixel features, bridging the representation gap between them to provide consistent feature representations for mask prediction. Moreover, we design a Dual-cross Merging Attention (DMA) module that performs bidirectional interaction between image and prompt features. Comprehensive experiments on several challenging datasets demonstrate that the proposed components yield consistent improvements and state-of-the-art interactive segmentation performance. Our code is available at https://github.com/XuZhang1211/PVPUFormer.
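The abstract does not spell out the P$^2$C formulation, but a prompt-to-pixel contrastive objective is commonly instantiated as an InfoNCE-style loss that pulls a prompt embedding toward the pixel embeddings it refers to and pushes it away from the rest. The sketch below is a minimal illustration under that assumption; the function name `p2c_loss`, the temperature `tau`, and the use of cosine similarity are all illustrative choices, not the paper's definition.

```python
import numpy as np

def p2c_loss(prompt_vec, pixel_feats, fg_mask, tau=0.1):
    """Hypothetical InfoNCE-style prompt-to-pixel contrastive loss.

    prompt_vec  : (d,)  prompt embedding
    pixel_feats : (n, d) per-pixel embeddings
    fg_mask     : (n,)  boolean, True for pixels the prompt refers to
    """
    # Cosine similarity between the prompt and every pixel embedding.
    p = prompt_vec / np.linalg.norm(prompt_vec)
    f = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    sim = f @ p / tau                       # (n,) temperature-scaled similarities
    # Log-partition over all pixels; foreground similarities act as positives.
    log_denom = np.log(np.exp(sim - sim.max()).sum()) + sim.max()
    return float(-(sim[fg_mask] - log_denom).mean())
```

A prompt embedding that points toward the foreground pixels receives a lower loss than one pointing toward the background, which is the alignment behavior the P$^2$C loss is described as encouraging.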
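Similarly, the "dual-cross" idea in the DMA module suggests two cross-attention passes, one in each direction, whose outputs are merged back into the respective streams. The sketch below shows one plausible reading with single-head attention, no learned projections, and a residual merge; the structure, the function names, and the merge rule are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attn(queries, context):
    # Single-head scaled dot-product cross-attention (no learned projections).
    d = queries.shape[-1]
    weights = softmax(queries @ context.T / np.sqrt(d))
    return weights @ context

def dual_cross_merge(img_feats, prompt_feats):
    """Hypothetical bidirectional interaction: each stream attends to the
    other, and the result is merged back via a residual sum."""
    img_out = img_feats + cross_attn(img_feats, prompt_feats)       # image <- prompt
    prompt_out = prompt_feats + cross_attn(prompt_feats, img_feats) # prompt <- image
    return img_out, prompt_out
```

Both outputs keep their original shapes, so the module can be dropped between an image backbone and a mask decoder without reshaping either stream.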