The development of large language models (LLMs) has significantly advanced the emergence of large multimodal models (LMMs). While LMMs have achieved tremendous success by promoting the synergy between multimodal comprehension and creation, they often struggle when confronted with out-of-distribution data. This is primarily because they rely on image encoders trained to encode images into task-relevant features, which may lead them to discard task-irrelevant visual details. The strong image-modeling capability of diffusion models naturally raises the question: can diffusion models serve as the eyes of large language models for image perception? In this paper, we propose DEEM, a simple yet effective approach that uses the generative feedback of diffusion models to align the semantic distribution of the image encoder. This addresses the drawback of previous methods that relied solely on image encoders such as ViT, thereby enhancing the model's robustness to out-of-distribution samples and reducing visual hallucinations. Importantly, this is achieved without additional training modules and with fewer training parameters. We extensively evaluate DEEM on our newly constructed RobustVQA benchmark and on POPE, a well-known benchmark for object hallucination. Compared with state-of-the-art interleaved content generation models, DEEM exhibits greater robustness and a superior ability to alleviate model hallucinations while using fewer trainable parameters, less pre-training data (10%), and a smaller base model size.
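To make the core idea concrete, below is a minimal sketch of what "generative feedback of diffusion models" can look like in training code: a conditional denoiser is trained with a DDPM-style noise-prediction loss on image features produced by the encoder, and because the features are not detached, the diffusion loss backpropagates into the encoder and aligns its semantic distribution. All module and function names here (`ToyImageEncoder`, `ToyConditionalDenoiser`, `deem_feedback_loss`), the toy architectures, and the linear beta schedule are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of diffusion-feedback alignment (hypothetical, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageEncoder(nn.Module):
    """Stand-in for a ViT-like encoder mapping images to semantic features."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

class ToyConditionalDenoiser(nn.Module):
    """Predicts the noise added to an image, conditioned on encoder features."""
    def __init__(self, dim=128):
        super().__init__()
        self.cond_proj = nn.Linear(dim, 32)
        self.time_proj = nn.Linear(1, 32)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 32 + 32, 64, 3, padding=1), nn.GELU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.GELU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )
    def forward(self, x_t, t, cond):
        b, _, h, w = x_t.shape
        # Broadcast feature and timestep embeddings over spatial dims.
        c = self.cond_proj(cond).view(b, -1, 1, 1).expand(-1, -1, h, w)
        tt = self.time_proj(t.float().view(b, 1)).view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x_t, c, tt], dim=1))

def deem_feedback_loss(encoder, denoiser, images, num_steps=1000):
    """DDPM noise-prediction loss; gradients flow back into the encoder,
    so the diffusion model acts as generative feedback on its features."""
    b = images.size(0)
    t = torch.randint(0, num_steps, (b,), device=images.device)
    # Linear beta schedule -> cumulative signal/noise coefficients (assumption).
    betas = torch.linspace(1e-4, 0.02, num_steps, device=images.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(images)
    x_t = alpha_bar.sqrt() * images + (1 - alpha_bar).sqrt() * noise
    cond = encoder(images)  # NOT detached: feedback reaches the encoder
    pred = denoiser(x_t, t, cond)
    return F.mse_loss(pred, noise)

# Usage: one alignment step on random data.
enc, den = ToyImageEncoder(), ToyConditionalDenoiser()
opt = torch.optim.AdamW(list(enc.parameters()) + list(den.parameters()), lr=1e-4)
loss = deem_feedback_loss(enc, den, torch.randn(4, 3, 32, 32))
loss.backward()
opt.step()
```

In such a setup, the denoising objective can only be minimized if the conditioning features retain enough low-level visual information to reconstruct the image, which is the mechanism by which the feedback discourages the encoder from discarding details, without adding any extra trainable modules at inference time.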