Recently, the ability of multimodal large language models (MLLMs) to process high-resolution images has attracted growing interest. A common approach dynamically crops the original high-resolution image into smaller sub-images, which are then fed into a vision encoder pre-trained on lower-resolution images. However, this cropping often truncates objects and connected regions in the original image, causing semantic breaks. To address this limitation, we introduce HyViLM, which processes images of any resolution while retaining the overall context during encoding. Specifically, we: (i) design a new visual encoder, called the Hybrid Encoder, that not only encodes individual sub-images but also interacts with detailed global visual features, significantly improving the model's ability to encode high-resolution images; and (ii) propose an optimal feature-fusion strategy for the dynamic cropping approach that effectively leverages information from different layers of the vision encoder. Under the same settings, HyViLM outperforms state-of-the-art MLLMs on nine out of ten tasks, including a 9.6% improvement on the TextVQA task and a 6.9% improvement on the DocVQA task.