To mitigate the threat of misinformation, multimodal manipulation localization has garnered growing attention. However, current methods rely on costly and time-consuming fine-grained annotations, such as patch- or token-level labels. This paper proposes a novel framework named Coupling Implicit and Explicit Cues (CIEC), which achieves multimodal weakly-supervised manipulation localization for image-text pairs using only coarse-grained image- and sentence-level annotations. The framework comprises two branches: image-based and text-based weakly-supervised localization. For the former, we devise the Textual-guidance Refine Patch Selection (TRPS) module, which integrates forgery cues from both visual and textual perspectives to lock onto suspicious regions with the aid of spatial priors; background silencing and spatial contrast constraints then suppress interference from irrelevant areas. For the latter, we devise the Visual-deviation Calibrated Token Grounding (VCTG) module, which focuses on meaningful content words and leverages relative visual bias to assist token localization; asymmetric sparse and semantic consistency constraints then mitigate label noise and ensure reliability. Extensive experiments demonstrate the effectiveness of CIEC, which yields results comparable to those of fully supervised methods on several evaluation metrics.