Recently, there has been growing interest in the capability of multimodal large language models (MLLMs) to process high-resolution images. A common current approach dynamically crops the original high-resolution image into smaller sub-images, which are then fed into a vision encoder pre-trained on lower-resolution images. However, this cropping often truncates objects and connected regions in the original image, causing semantic breaks. To address this limitation, we introduce HyViLM, designed to process images of any resolution while retaining the overall context during encoding. Specifically, we: (i) design a new visual encoder, the Hybrid Encoder, that not only encodes individual sub-images but also interacts with detailed global visual features, significantly improving the model's ability to encode high-resolution images; and (ii) propose an optimal feature fusion strategy for the dynamic cropping approach, effectively leveraging information from different layers of the vision encoder. Under the same setting, HyViLM outperforms state-of-the-art MLLMs in nine out of ten tasks, including a 9.6% improvement on TextVQA and a 6.9% improvement on DocVQA.
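The dynamic cropping step described above can be illustrated with a minimal sketch: the image is tiled into encoder-resolution sub-images, and a downscaled global thumbnail is kept alongside them to preserve overall context. This is an illustrative assumption of how such pipelines typically work (the grid heuristic, patch size of 336, and nearest-neighbor resizing are our choices, not details from the paper):

```python
import numpy as np

def _resize(img: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Nearest-neighbor resize, chosen here only to keep the sketch
    dependency-free; real pipelines use bilinear/bicubic resampling."""
    h, w = img.shape[:2]
    ridx = np.arange(new_h) * h // new_h
    cidx = np.arange(new_w) * w // new_w
    return img[ridx][:, cidx]

def dynamic_crop(image: np.ndarray, patch: int = 336):
    """Split an arbitrary-resolution image into patch-sized sub-images
    plus a low-resolution global thumbnail for overall context.
    Hypothetical sketch; HyViLM's actual cropping policy may differ."""
    h, w = image.shape[:2]
    # Pick a grid whose cells are closest to the encoder's native size.
    rows = max(1, round(h / patch))
    cols = max(1, round(w / patch))
    resized = _resize(image, rows * patch, cols * patch)
    subs = [resized[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            for r in range(rows) for c in range(cols)]
    thumbnail = _resize(image, patch, patch)  # global-context view
    return subs, thumbnail
```

For example, a 700x1000 image with a 336-pixel patch yields a 2x3 grid of six sub-images plus one 336x336 thumbnail; each sub-image is encoded individually while the thumbnail supplies the global features the Hybrid Encoder interacts with.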