Recent advances in Multi-modal Large Language Models (MLLMs), such as the LLaVA series, are driven by tuning on massive machine-generated instruction-following data. However, such automatic instruction-collection pipelines inadvertently introduce significant variability in data quality. This paper introduces a novel instruction curation algorithm, derived from two unique perspectives, human and LLM preference alignment, to compress this vast corpus of machine-generated multimodal instructions into a compact, high-quality form: (i) For human preference alignment, we collect a machine-generated multimodal instruction dataset and establish a comprehensive set of both subjective and objective criteria to guide critical data-quality assessment by human experts. A reward model is then trained on the annotated dataset to internalize the nuanced human understanding of instruction alignment. (ii) For LLM preference alignment, given the instructions selected by the reward model, we propose leveraging the inner LLM of the MLLM to align the writing style of visual instructions with that of the inner LLM itself, yielding LLM-aligned instruction refinement. Extensive experiments demonstrate that model performance can be maintained or even improved while compressing the synthetic multimodal instructions by up to 90%. Impressively, by aggressively reducing the total training sample size from 158k to 14k (9$\times$ smaller), our model consistently outperforms its counterpart trained on the full dataset across various MLLM benchmarks. Our project is available at https://github.com/DCDmllm/Align2LLaVA.
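The first curation stage, selecting instructions by reward-model score, can be sketched as follows. This is an illustrative outline only: the `reward_fn` stands in for the paper's human-preference reward model, and `curate_instructions` and the toy scorer are hypothetical names introduced here, not the authors' implementation.

```python
# Minimal sketch of reward-model-based instruction filtering (an assumption
# about the pipeline's shape, not the paper's actual code): score every
# machine-generated instruction and keep only the top-scoring fraction.
from typing import Callable, List, Tuple


def curate_instructions(
    samples: List[str],
    reward_fn: Callable[[str], float],
    keep_ratio: float = 0.1,  # e.g. compressing 158k samples to ~10% (~14k)
) -> List[str]:
    """Keep the top `keep_ratio` fraction of samples by reward score."""
    scored: List[Tuple[float, str]] = sorted(
        ((reward_fn(s), s) for s in samples), reverse=True
    )
    k = max(1, int(len(scored) * keep_ratio))
    return [s for _, s in scored[:k]]


if __name__ == "__main__":
    # Placeholder reward: favors longer instructions that pose a question.
    # The real reward model would encode the human-annotated quality criteria.
    toy_reward = lambda s: len(s) + (10.0 if "?" in s else 0.0)
    pool = [f"instruction {i}" + ("?" if i % 3 == 0 else "") for i in range(100)]
    kept = curate_instructions(pool, toy_reward, keep_ratio=0.1)
    print(len(kept))  # 10
```

In practice the second, LLM-alignment stage would then rewrite each surviving instruction so its style matches the MLLM's inner LLM; that step is omitted here because it requires the model itself.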