Human preference alignment can greatly enhance Multimodal Large Language Models (MLLMs), but collecting high-quality preference data is costly. A promising solution is the self-evolution strategy, in which models are iteratively trained on data they themselves generate. However, current techniques still rely on human- or GPT-annotated data and sometimes require additional models or ground-truth answers. To address these issues, we propose a novel multimodal self-evolution framework that enables the model to autonomously generate high-quality questions and answers using only unannotated images. First, we implement an image-driven self-questioning mechanism that allows the model to create and evaluate questions based on image content, regenerating any that are irrelevant or unanswerable. This lays a solid foundation for answer generation. Second, we introduce an answer self-enhancement technique in which the model first captions the image to improve answer quality. We also use corrupted images to generate rejected answers, forming distinct preference pairs for optimization. Finally, we incorporate an image-content alignment loss alongside the Direct Preference Optimization (DPO) loss to reduce hallucinations, ensuring the model stays grounded in the image content. Experiments show that our framework performs competitively with methods that use external information, offering a more efficient and scalable approach to improving MLLMs.
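To make the final component concrete, here is a minimal PyTorch sketch of how such a combined objective could look. It is an illustration under stated assumptions, not the authors' implementation: `dpo_loss` follows the standard DPO formulation over sequence-level log-probabilities, while `image_alignment_loss` (a contrastive clean-vs-corrupted grounding term), `gamma`, and `lambda_align` are hypothetical names and forms introduced here for clarity.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over sequence-level log-probabilities."""
    # Log-ratios of the policy against a frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Prefer the chosen answer over the rejected (corrupted-image) answer.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()


def image_alignment_loss(logp_chosen_clean: torch.Tensor,
                         logp_chosen_corrupted: torch.Tensor,
                         gamma: float = 1.0) -> torch.Tensor:
    """Hypothetical image-content alignment term: push the chosen answer
    to be more likely given the clean image than the corrupted one, so
    the model must ground its output in the visual content."""
    return -F.logsigmoid(gamma * (logp_chosen_clean - logp_chosen_corrupted)).mean()


def total_loss(policy_chosen_logps: torch.Tensor,
               policy_rejected_logps: torch.Tensor,
               ref_chosen_logps: torch.Tensor,
               ref_rejected_logps: torch.Tensor,
               logp_chosen_corrupted: torch.Tensor,
               lambda_align: float = 0.5) -> torch.Tensor:
    # Combined objective: preference optimization plus a grounding penalty.
    dpo = dpo_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps)
    align = image_alignment_loss(policy_chosen_logps, logp_chosen_corrupted)
    return dpo + lambda_align * align


if __name__ == "__main__":
    # Illustrative sequence-level log-probs for a batch of two pairs.
    lp = lambda *v: torch.tensor(v)
    loss = total_loss(lp(-10.0, -12.0), lp(-15.0, -14.0),
                      lp(-11.0, -12.5), lp(-13.0, -13.5),
                      lp(-14.0, -16.0))
    print(loss.item())
```

The weighting `lambda_align` trades off preference fitting against visual grounding; its value, like the contrastive form of the alignment term, is an assumption for this sketch.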