Recently, scaling images to high resolution has received much attention in multimodal large language models (MLLMs). Most existing practices adopt a sliding-window-style cropping strategy to accommodate the increased resolution. Such a cropping strategy, however, can easily cut off objects and connected regions, which introduces semantic discontinuity and therefore impedes MLLMs from recognizing small or irregularly shaped objects or text, leading to a phenomenon we call the semantic sawtooth effect. This effect is particularly evident in lightweight MLLMs. To address this issue, we introduce a Complementary Image Pyramid (CIP), a simple, effective, and plug-and-play solution designed to mitigate semantic discontinuity during high-resolution image processing. In particular, CIP dynamically constructs an image pyramid to provide complementary semantic information for cropping-based MLLMs, enabling them to acquire rich semantics at all levels. Furthermore, we introduce a Scale Compression Mechanism (SCM) to reduce the additional computational overhead by compressing redundant visual tokens. Our experiments demonstrate that CIP consistently enhances performance across diverse architectures (e.g., MiniCPM-V-2, InternVL2, and LLaVA-OneVision), various model capacities (1B$\rightarrow$8B), and different usage configurations (training-free and fine-tuning). Leveraging the proposed CIP and SCM, we introduce a lightweight MLLM, Mini-Monkey, which achieves remarkable performance in both general multimodal understanding and document understanding. On OCRBench, the 2B version of Mini-Monkey even surpasses the 8B model InternVL2-8B by 12 points. Additionally, training Mini-Monkey is cheap, requiring only eight RTX 3090 GPUs. The code is available at https://github.com/Yuliang-Liu/Monkey.
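To make the complementary-pyramid idea concrete, the following is a minimal sketch (not the paper's actual algorithm; function names, the tile size, and the fixed scale set are illustrative assumptions). It shows how tiling the same image at multiple scales yields coarser levels whose whole tiles cover the boundaries that the fine-level grid cuts through:

```python
# Hedged sketch of a complementary image pyramid (CIP) in the spirit of the
# abstract: alongside the usual grid of high-resolution tiles, coarser views
# of the whole image are also tiled, so an object severed by a tile boundary
# at one level remains intact inside a single tile at another level.
# The tiling scheme and scale set here are illustrative assumptions.
from math import ceil

def grid_tiles(width, height, tile=448):
    """Axis-aligned tile boxes covering an image (the cropping baseline)."""
    cols, rows = ceil(width / tile), ceil(height / tile)
    boxes = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile, r * tile
            boxes.append((x0, y0, min(x0 + tile, width), min(y0 + tile, height)))
    return boxes

def complementary_pyramid(width, height, tile=448, scales=(1.0, 0.5)):
    """Tile the image at each scale; coarser levels produce fewer tiles with
    boundaries at different positions, complementing the fine-level grid."""
    pyramid = {}
    for s in scales:
        w, h = max(1, int(width * s)), max(1, int(height * s))
        pyramid[s] = grid_tiles(w, h, tile)
    return pyramid

pyr = complementary_pyramid(1344, 896)
# At full resolution a 1344x896 image yields a 3x2 grid (6 tiles);
# at half resolution (672x448) it yields a 2x1 grid (2 tiles).
```

In practice, all tiles across levels are encoded into visual tokens; the SCM described above would then compress the redundant tokens contributed by the extra pyramid levels before they reach the language model.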