Large Vision-Language Models (LVLMs) excel in cross-modal tasks but suffer performance declines in long-context reasoning due to overreliance on textual information and reduced visual dependency. In this study, we empirically analyze LVLMs in long-context reasoning, revealing that increased context length leads to a higher dependence on language at the expense of visual dependency. To address this issue, we propose a novel training-free context pruning method that selectively removes less critical textual information. Our approach enhances visual dependency and reduces textual noise, thereby improving LVLM performance in long-context reasoning. We validate our method by constructing a long-context dataset, demonstrating its effectiveness across various LVLMs. Moreover, further analysis confirms the robustness of different token pruning strategies and preliminarily explores scaling laws between pruning rates and context length.
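The core idea of training-free context pruning can be sketched as follows. This is a minimal illustration, not the paper's actual method: the IDF-style importance score, the `prune_context` helper, and the `keep_ratio` parameter are all assumptions introduced here for exposition; in practice an LVLM-derived signal (e.g. attention weights) would score token criticality.

```python
import math
from collections import Counter

def prune_context(tokens, keep_ratio=0.5):
    """Keep the `keep_ratio` most 'important' text tokens, preserving order.

    Hypothetical sketch: rarer tokens are treated as more informative
    (an IDF-like proxy), standing in for a model-derived criticality score.
    """
    counts = Counter(tokens)
    # Score each distinct token: rarer tokens get higher scores.
    scores = {t: math.log(len(tokens) / counts[t]) for t in counts}
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Rank positions by score (descending), breaking ties by position.
    ranked = sorted(range(len(tokens)), key=lambda i: (-scores[tokens[i]], i))
    # Restore original order among the kept positions.
    keep = sorted(ranked[:n_keep])
    return [tokens[i] for i in keep]

context = "the cat sat on the mat while the dog watched the red ball".split()
pruned = prune_context(context, keep_ratio=0.5)
# High-frequency filler ("the") is dropped; content words survive.
```

Under this scheme, raising the pruning rate (lowering `keep_ratio`) as context length grows is precisely the pruning-rate/context-length trade-off the abstract's scaling analysis concerns.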