Existing forgery detection methods are often limited to uni-modal or bi-modal settings and fail to handle the interleaved text, images, and videos prevalent in real-world misinformation. To bridge this gap, this paper aims to develop a unified framework for omnibus vision-language forgery detection and grounding. In this unified setting, the interplay between diverse modalities and the dual requirements of simultaneous detection and localization pose a critical ``difficulty bias'' problem: the simpler veracity classification task tends to dominate the gradients during multi-task optimization, leading to suboptimal performance on fine-grained grounding. To address this challenge, we propose \textbf{OmniVL-Guard}, a balanced reinforcement learning framework for omnibus vision-language forgery detection and grounding. Specifically, OmniVL-Guard comprises two core designs: Self-Evolving CoT Generation and Adaptive Reward Scaling Policy Optimization (ARSPO). Self-Evolving CoT Generation synthesizes high-quality reasoning paths, effectively overcoming the cold-start challenge. Building upon this, ARSPO dynamically modulates reward scales and task weights to ensure balanced joint optimization. Extensive experiments demonstrate that OmniVL-Guard significantly outperforms state-of-the-art methods and exhibits robust zero-shot generalization in out-of-domain scenarios.
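To make the balancing idea concrete, below is a minimal sketch, in Python, of adaptive reward scaling across the two tasks: per-task rewards are tracked with an exponential moving average, and each task's reward is rescaled by the ratio of the mean statistic to its own, so the already-easy classification task is downscaled while the lagging grounding task is upweighted. All names here (AdaptiveRewardScaler, ema_decay, etc.) are hypothetical illustrations; this is not the paper's exact ARSPO formulation, which additionally modulates task weights within the policy-optimization objective.

\begin{verbatim}
# Hypothetical sketch of adaptive reward scaling for two tasks; the
# real ARSPO update rule is defined in the paper, not reproduced here.
from dataclasses import dataclass, field

@dataclass
class AdaptiveRewardScaler:
    ema_decay: float = 0.99                  # smoothing for per-task reward stats
    eps: float = 1e-8
    ema: dict = field(default_factory=dict)  # running mean reward per task

    def update(self, task: str, reward: float) -> float:
        """Track a task's running reward and return the rescaled reward."""
        prev = self.ema.get(task, reward)
        self.ema[task] = self.ema_decay * prev + (1 - self.ema_decay) * reward
        # Tasks whose rewards are already high (easy tasks) get weight < 1,
        # so their gradients no longer dominate the joint objective.
        mean_ema = sum(self.ema.values()) / len(self.ema)
        weight = mean_ema / (self.ema[task] + self.eps)
        return weight * reward

scaler = AdaptiveRewardScaler()
# Easy veracity task saturates near 1.0; hard grounding task lags near 0.2.
for r_det, r_gnd in [(0.9, 0.2), (0.95, 0.25), (1.0, 0.3)]:
    scaled_det = scaler.update("detection", r_det)
    scaled_gnd = scaler.update("grounding", r_gnd)
    print(f"detection: {scaled_det:.3f}  grounding: {scaled_gnd:.3f}")
\end{verbatim}

Under this toy scheme, the grounding reward is amplified relative to the classification reward whenever its running average falls below the cross-task mean, which is one simple way to counteract the difficulty bias described above.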