Large Vision-Language Models (LVLMs) have advanced rapidly by aligning visual patches with the text embedding space, but a fixed visual-token budget forces images to be resized to a uniform pretraining resolution, often erasing fine-grained details and causing hallucinations through over-reliance on language priors. Recent attention-guided enhancement methods (e.g., cropping or region-focused attention allocation) alleviate this, yet they commonly hinge on a static "magic layer" chosen empirically on simple recognition benchmarks and thus may not transfer to complex reasoning tasks. In contrast to this static assumption, we take a dynamic perspective on visual grounding. Through a layer-wise sensitivity analysis, we demonstrate that visual grounding is a dynamic process: while simple object recognition relies on middle layers, complex visual search and reasoning require visual information to be reactivated at deeper layers. Based on this observation, we introduce Visual Activation by Query (VAQ), a metric that identifies the layer whose attention map is most relevant to query-specific visual grounding by measuring how sensitive each layer's attention is to the input query. Building on VAQ, we further propose LASER (Layer-adaptive Attention-guided Selective visual and decoding Enhancement for Reasoning), a training-free inference procedure that adaptively selects task-appropriate layers for visual localization and question answering. Experiments on diverse VQA benchmarks show that LASER significantly improves accuracy on tasks of varying complexity.
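The core of VAQ — scoring each layer by how much its attention over image tokens shifts in response to the query — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `vaq_layer`, the input format (per-layer attention averaged over image tokens), and the use of an L1 change as the sensitivity score are all assumptions.

```python
import numpy as np

def vaq_layer(attn_with_query: np.ndarray, attn_without_query: np.ndarray):
    """Pick the layer whose image attention is most sensitive to the query.

    attn_with_query / attn_without_query: arrays of shape
    [num_layers, num_image_tokens], each row a layer's attention
    distribution over image tokens (with vs. without the text query).
    Both the input format and the L1 score below are illustrative
    assumptions, not the paper's exact definition.
    """
    # Sensitivity score per layer: total (L1) change in the attention
    # distribution over image tokens when the query is present.
    scores = np.abs(attn_with_query - attn_without_query).sum(axis=1)
    # The VAQ-selected layer is the one most reshaped by the query.
    return int(np.argmax(scores)), scores
```

In this sketch, a layer whose attention map barely changes when the query is added scores near zero, while a layer that redistributes attention toward query-relevant regions scores highly and would be selected for attention-guided enhancement.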