The rapid progression of multimodal large language models (MLLMs) has led to superior performance on various multimodal benchmarks. However, data contamination during training poses challenges for fair performance evaluation and comparison. While numerous methods exist for detecting dataset contamination in large language models (LLMs), they are less effective for MLLMs, which involve multiple modalities and multiple training phases. In this study, we introduce MM-Detect, a multimodal data contamination detection framework designed for MLLMs. Our experimental results indicate that MM-Detect is sensitive to varying degrees of contamination and can highlight significant performance improvements caused by leakage of multimodal benchmark training sets. Furthermore, we explore whether contamination can originate from the pre-training phase of the LLMs used by MLLMs as well as from the fine-tuning phase of MLLMs, offering new insights into the stages at which contamination may be introduced.