Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities, yet their proficiency in understanding and reasoning over multiple images remains largely unexplored. While existing benchmarks have begun to evaluate multi-image models, a comprehensive analysis of their core weaknesses and the causes of those weaknesses is still lacking. In this work, we introduce MIMIC (Multi-Image Model Insights and Challenges), a new benchmark designed to rigorously evaluate the multi-image capabilities of LVLMs. Using MIMIC, we conduct a series of diagnostic experiments that reveal pervasive issues: LVLMs often fail to aggregate information across images and struggle to track or attend to multiple concepts simultaneously. To address these failures, we propose two complementary remedies. On the data side, we present a procedural data-generation strategy that composes single-image annotations into rich, targeted multi-image training examples. On the optimization side, we analyze layer-wise attention patterns and derive an attention-masking scheme tailored to multi-image inputs. Experiments show that these remedies substantially improve cross-image aggregation and also enhance performance on existing multi-image benchmarks, outperforming the prior state of the art across tasks. Data and code will be made available at https://github.com/anurag-198/MIMIC.
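The abstract does not specify the exact form of the attention-masking scheme, so the following is only a minimal illustrative sketch of one plausible variant: an additive mask over the token sequence that blocks attention between tokens belonging to different images while leaving text tokens unrestricted. The function name, the span representation, and the masking rule are all assumptions for illustration, not the paper's actual method.

```python
import torch

def multi_image_attention_mask(seq_len, image_spans, device="cpu"):
    """Hypothetical sketch: build an additive attention mask (0 = keep,
    -inf = block) in which tokens of one image do not attend to tokens of
    another image, while text tokens attend everywhere.

    image_spans: list of (start, end) index pairs, one per image.
    Returns a (seq_len, seq_len) tensor to be added to attention logits.
    """
    mask = torch.zeros(seq_len, seq_len, device=device)
    for i, (s_i, e_i) in enumerate(image_spans):
        for j, (s_j, e_j) in enumerate(image_spans):
            if i != j:
                # Block cross-image attention between image i and image j.
                mask[s_i:e_i, s_j:e_j] = float("-inf")
    return mask

# Example: a 20-token sequence with two images at token ranges [2, 8) and [10, 16).
mask = multi_image_attention_mask(20, [(2, 8), (10, 16)])
print(mask[3, 12])   # -inf: image-1 tokens cannot attend to image-2 tokens
print(mask[0, 12])   # 0.0:  text tokens still attend to every image
```

In practice such a mask would be added to the pre-softmax attention scores of selected layers; which layers to mask, and whether image tokens should remain visible to each other in later layers, would follow from the layer-wise attention analysis described above.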