Recent research increasingly focuses on training vision-language models (VLMs) with long, detailed image captions. However, small-scale VLMs often struggle to balance the richness of these captions against the risk of hallucinating content during fine-tuning. In this paper, we explore how well VLMs adapt to such captions. To quantify caption quality, we propose Decomposed NLI (DNLI), an evaluation framework that breaks generated captions down into individual propositions and assesses each in isolation. This fine-grained analysis reveals a critical balance between capturing descriptive details and avoiding hallucinations. Our findings show that simply reducing caption complexity or employing standard data curation techniques does not effectively resolve this issue. To tackle this challenge, we introduce Knowledge Adapted (KnowAda) fine-tuning, a data-centric approach that automatically adapts training data to the model's existing knowledge and visual understanding. KnowAda minimizes hallucinations while preserving high descriptiveness. We validate this approach across several small-scale VLMs (up to 7B parameters) and dense caption datasets, demonstrating that KnowAda effectively balances hallucination reduction and descriptiveness. Our results show that KnowAda outperforms various baselines in both automatic metrics and human evaluations. We will release our code and models.