Visual grounding (VG) tasks involve explicit cross-modal alignment: the image regions that semantically correspond to given language phrases must be located. Existing approaches complete such visual-text reasoning in a single step. Their performance hinges on large numbers of anchors and multi-modal fusion modules over-designed from human priors, leading to complicated frameworks that can be difficult to train and prone to overfitting specific scenarios. Worse, such once-for-all reasoning mechanisms are incapable of refining boxes continuously to enhance query-region matching. In contrast, in this paper, we formulate an iterative reasoning process via denoising diffusion modeling. Specifically, we propose LG-DVG, a language-guided diffusion framework for visual grounding that trains the model to progressively reason about queried object boxes by denoising a set of noisy boxes under language guidance. To achieve this, LG-DVG gradually perturbs query-aligned ground-truth boxes into noisy ones and reverses this process step by step, conditioned on query semantics. Extensive experiments on five widely used datasets validate the superior performance of solving visual grounding, a cross-modal alignment task, in a generative manner. The source code is available at https://github.com/iQua/vgbase/tree/main/examples/DiffusionVG.
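The forward perturbation described above can be sketched as a standard DDPM-style corruption of box coordinates. The snippet below is a minimal illustration, not the paper's implementation: the cosine noise schedule, the `(cx, cy, w, h)` box encoding, and the `perturb_boxes` helper are assumptions chosen to mirror common diffusion-detection practice.

```python
import numpy as np

def cosine_alpha_bar(t: float, T: float) -> float:
    """Cumulative signal-retention coefficient under a cosine schedule
    (a common DDPM choice); equals 1 at t=0 and ~0 at t=T."""
    return np.cos((t / T) * np.pi / 2.0) ** 2

def perturb_boxes(gt_boxes: np.ndarray, t: int, T: int, rng: np.random.Generator):
    """One forward-diffusion draw q(b_t | b_0): corrupt ground-truth
    boxes (cx, cy, w, h in [0, 1]) toward Gaussian noise.

    Returns the noisy boxes and the noise sample, which a denoiser
    conditioned on the query semantics would learn to reverse.
    """
    b0 = gt_boxes * 2.0 - 1.0                     # rescale to [-1, 1]
    a_bar = cosine_alpha_bar(t, T)
    noise = rng.standard_normal(b0.shape)
    bt = np.sqrt(a_bar) * b0 + np.sqrt(1.0 - a_bar) * noise
    return bt, noise

# Usage: at t=0 the boxes are untouched; at t=T they are pure noise.
rng = np.random.default_rng(0)
gt = np.array([[0.5, 0.5, 0.2, 0.3]])            # one query-aligned box
bt, _ = perturb_boxes(gt, t=250, T=1000, rng=rng)
```

Training then reverses this corruption step by step, so at inference the model can start from random boxes and iteratively refine them toward the queried region.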