Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works rely heavily on bulky Transformer-based fusion encoders/decoders and a variety of early-stage interaction techniques. Meanwhile, current masked visual language modeling (MVLM) fails to capture the nuanced referential relationship between image and text in referring tasks. In this paper, we propose OneRef, a minimalist referring framework built on a modality-shared one-tower transformer that unifies the visual and linguistic feature spaces. To model the referential relationship, we introduce a novel MVLM paradigm called Mask Referring Modeling (MRefM), which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules reconstruct not only modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that adapts to the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual-language feature space and MRefM's ability to model referential relations, our approach directly regresses the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research. Our code and models are available at https://github.com/linhuixiao/OneRef.
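The referring-aware dynamic image masking described above can be illustrated with a minimal sketch: patches whose centers fall inside the referred region are masked with a higher probability than background patches, so reconstruction is biased toward the referred content. This is an illustrative assumption, not the paper's actual implementation; the function name, the normalized-box convention, and the `in_ratio`/`out_ratio` values are hypothetical.

```python
import numpy as np

def referring_aware_mask(grid_h, grid_w, box,
                         in_ratio=0.75, out_ratio=0.25, rng=None):
    """Sketch of a referring-aware dynamic masking strategy.

    Patches overlapping the referred region, given as a normalized
    box (x0, y0, x1, y1) in [0, 1] coordinates, are masked with a
    higher probability (in_ratio) than background patches (out_ratio),
    instead of using a single fixed ratio over the whole image.
    Returns a boolean (grid_h, grid_w) array; True = patch is masked.
    """
    rng = rng or np.random.default_rng()
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    # Patch centers in normalized [0, 1] coordinates.
    cx = (xs + 0.5) / grid_w
    cy = (ys + 0.5) / grid_h
    x0, y0, x1, y1 = box
    inside = (cx >= x0) & (cx <= x1) & (cy >= y0) & (cy <= y1)
    # Per-patch masking probability depends on the referred region.
    prob = np.where(inside, in_ratio, out_ratio)
    return rng.random((grid_h, grid_w)) < prob
```

For example, with a 14×14 patch grid and a box covering the left half of the image, roughly `in_ratio` of the left-half patches and `out_ratio` of the right-half patches end up masked on average.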