Unlike Object Detection, the Visual Grounding task requires detecting an object described by complex free-form language. To jointly model such complex semantic and visual representations, recent state-of-the-art studies adopt transformer-based models to fuse features from both modalities, further introducing various modules that modulate visual features to align with the language expression and eliminate irrelevant redundant information. However, their loss functions, still adopting common Object Detection losses, solely govern the bounding-box regression output and thus fail to fully optimize for the above objectives. To tackle this problem, in this paper we first analyze the attention mechanisms of transformer-based models. Building upon this analysis, we propose a novel framework named Attention-Driven Constraint Balancing (AttBalance) to optimize the behavior of visual features within language-relevant regions. Extensive experimental results show that our method brings impressive improvements: we achieve consistent gains over five different models evaluated on four different benchmarks. Moreover, we attain new state-of-the-art performance by integrating our method into QRNet.