Multimodal large language models (MLLMs) provide a powerful mechanism for understanding visual information by building on large language models. However, MLLMs are notorious for hallucinations, especially when generating lengthy, detailed descriptions of images. Our analysis reveals that these hallucinations stem from the inherent summarization mechanism of large language models, which leads to excessive dependence on linguistic tokens while neglecting visual information. In this paper, we propose NoiseBoost, a broadly applicable and simple method for alleviating hallucinations in MLLMs through the integration of noise feature perturbations. Noise perturbation acts as a regularizer, facilitating a balanced distribution of attention weights between visual and linguistic tokens. Despite its simplicity, NoiseBoost consistently enhances the performance of MLLMs across common training strategies, including supervised fine-tuning and reinforcement learning. Furthermore, NoiseBoost is the first to enable semi-supervised learning for MLLMs, unleashing the power of unlabeled data. Comprehensive experiments demonstrate that NoiseBoost improves dense caption accuracy by 8.1% under human evaluation and, by mining unlabeled data, achieves comparable results with only 50% of the labeled data. Code and models are available at https://kaiwu5.github.io/noiseboost.
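The noise feature perturbation described above can be illustrated with a minimal sketch: Gaussian noise is added to the visual token features during training so the model cannot rely solely on linguistic tokens. The noise scale `alpha` and the exact form of the perturbation are illustrative assumptions here, not the paper's implementation details.

```python
import numpy as np

def perturb_visual_features(visual_feats: np.ndarray,
                            alpha: float = 0.1,
                            rng=None) -> np.ndarray:
    """Add scaled Gaussian noise to visual token features (training only).

    `alpha` and the magnitude-based scaling are hypothetical choices for
    illustration, not NoiseBoost's exact hyperparameters.
    """
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.standard_normal(visual_feats.shape).astype(visual_feats.dtype)
    # Scale the noise by the mean feature magnitude so the perturbation
    # strength adapts to the scale of the visual embeddings.
    return visual_feats + alpha * noise * np.abs(visual_feats).mean()

# Example: a batch of 4 visual tokens with 8-dimensional features.
feats = np.ones((4, 8), dtype=np.float32)
perturbed = perturb_visual_features(feats, alpha=0.1,
                                    rng=np.random.default_rng(0))
```

At inference time the perturbation would be disabled, as is standard for noise-based regularizers such as dropout.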