Recent multimodal large language models (MLLMs) still struggle with long-document understanding due to two fundamental challenges: information interference from abundant irrelevant content, and the quadratic computational cost of Transformer-based architectures. Existing approaches fall primarily into two categories: token compression, which sacrifices fine-grained detail, and external retrievers, which increase system complexity and preclude end-to-end optimization. To address these issues, we conduct an in-depth analysis and observe that MLLMs exhibit a human-like coarse-to-fine reasoning pattern: early Transformer layers attend broadly across the document, while deeper layers focus on relevant evidence pages. Motivated by this insight, we posit that the inherent evidence-localization capability of MLLMs can be explicitly leveraged to perform retrieval during the reasoning process, enabling efficient long-document understanding. To this end, we propose URaG, a simple yet effective framework that Unifies Retrieval and Generation within a single MLLM. URaG introduces a lightweight cross-modal retrieval module that converts the early Transformer layers into an efficient evidence selector, identifying and preserving the most relevant pages while discarding irrelevant content. This design allows the deeper layers to concentrate computation on pertinent information, improving both accuracy and efficiency. Extensive experiments demonstrate that URaG achieves state-of-the-art performance while reducing computational overhead by 44-56%. The code is available at https://github.com/shi-yx/URaG.
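For intuition, the following is a minimal sketch of the coarse-to-fine retrieval idea described above, not the authors' implementation: page relevance is estimated from the attention mass of one early Transformer layer, the top-k pages are retained, and the deeper layers would then process only the surviving tokens. The function names (`score_pages`, `select_pages`), the head/query averaging, and the toy shapes are all illustrative assumptions.

```python
# Hypothetical sketch of early-layer evidence selection (not URaG's actual code):
# score each page by the attention mass it receives in one early layer,
# keep the top-k pages, and let deeper layers see only those tokens.

import torch

def score_pages(attn: torch.Tensor, page_spans: list[tuple[int, int]]) -> torch.Tensor:
    """Aggregate query-to-document attention mass per page.

    attn: (num_heads, num_query_tokens, num_doc_tokens) attention weights
          taken from one early Transformer layer.
    page_spans: [(start, end), ...] token-index ranges, one per page.
    """
    # Average over heads and query positions -> one relevance score per document token.
    token_scores = attn.mean(dim=(0, 1))  # shape: (num_doc_tokens,)
    # Sum token scores within each page span -> one score per page.
    return torch.stack([token_scores[s:e].sum() for s, e in page_spans])

def select_pages(page_scores: torch.Tensor, k: int) -> torch.Tensor:
    """Return indices of the k highest-scoring pages (the retained evidence)."""
    return torch.topk(page_scores, k=min(k, page_scores.numel())).indices

# Toy usage: 3 pages of 4 tokens each, 2 attention heads, 5 query tokens.
attn = torch.softmax(torch.randn(2, 5, 12), dim=-1)
spans = [(0, 4), (4, 8), (8, 12)]
keep = select_pages(score_pages(attn, spans), k=1)
print("retained page:", keep.tolist())
```

Because the selection reuses attention already computed in the forward pass, a design of this kind adds little overhead while letting the deeper layers skip the discarded pages entirely, which is the source of the reported efficiency gain.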