Research on Multi-modal Large Language Models (MLLMs) for multi-image cross-modal instructions has received increasing attention and made significant progress, particularly in scenarios involving closely resembling images (e.g., change captioning). Existing MLLMs typically follow a two-step pipeline: first, visual tokens are extracted independently for each input image; then, these visual tokens from different images are aligned with the Large Language Model (LLM) in its textual feature space. However, extracting visual tokens independently for each image may cause different semantics to be prioritized across images in the first step, so the linking information among images is not preserved for subsequent LLM analysis. This issue becomes more serious when the images vary significantly (e.g., visual storytelling). To address this challenge, we introduce Semantic Alignment for Multi-modal large language models (SAM). By introducing bidirectional semantic guidance between different images into the visual-token extraction process, SAM better preserves linking information for coherent analysis and aligns the semantics of different images before they are fed into the LLM. As a test bed, we propose MmLINK, a large-scale dataset of 69K samples. Unlike most existing datasets for MLLM fine-tuning, MmLINK comprises multi-modal instructions with significantly diverse images. Extensive experiments on the group captioning and storytelling tasks demonstrate the effectiveness of SAM, which surpasses state-of-the-art methods by a large margin (+37% CIDEr on group captioning and +22% CIDEr on storytelling). Project page: https://mccartney01.github.io/SAM.
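To make the idea of bidirectional semantic guidance concrete, the following is a minimal, dependency-free sketch in which each image's visual tokens attend over the other image's tokens and mix in the attended context before being handed to the LLM. The function names (`cross_guide`, `bidirectional_semantic_alignment`), the single-head dot-product attention, and the mixing weight `alpha` are all illustrative assumptions for exposition; the actual SAM architecture described in the paper may differ.

```python
import math

def _softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross_guide(tokens_a, tokens_b, alpha=0.5):
    """One direction of guidance (illustrative, not the paper's exact module):
    refine each visual token of image A by attending over image B's tokens,
    then mixing the attended context into the original token."""
    refined = []
    for q in tokens_a:
        weights = _softmax([_dot(q, k) for k in tokens_b])
        context = [sum(w * k[i] for w, k in zip(weights, tokens_b))
                   for i in range(len(q))]
        refined.append([(1 - alpha) * qi + alpha * ci
                        for qi, ci in zip(q, context)])
    return refined

def bidirectional_semantic_alignment(tokens_a, tokens_b):
    """Bidirectional guidance: each image's visual tokens are refined using
    the other image's tokens, so shared (linking) semantics are emphasized
    in both token sets before they reach the LLM."""
    return cross_guide(tokens_a, tokens_b), cross_guide(tokens_b, tokens_a)
```

With two toy 2-d token sets, each refined token is pulled toward the semantics present in the other image while keeping its own content, which is the intuition behind aligning images before LLM analysis.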