Large Vision-Language Models (LVLMs) have achieved substantial progress in cross-modal tasks. However, due to language bias, LVLMs are susceptible to object hallucination, which can be broadly divided into category, attribute, and relation hallucination, significantly impeding trustworthy AI applications. Editing the internal activations of LVLMs has shown promising effectiveness in mitigating hallucinations at minimal cost. However, previous editing approaches neglect the effective guidance offered by factual textual semantics and thus struggle to explicitly mitigate language bias. To address these issues, we propose Adaptive Factual-guided Visual-Textual Editing for hallucination mitigation (AFTER), which comprises Factual-Augmented Activation Steering (FAS) and Query-Adaptive Offset Optimization (QAO), to adaptively guide the original biased activations towards factual semantics. Specifically, FAS provides factual and general guidance for activation editing, thereby explicitly modeling precise visual-textual associations. Subsequently, QAO introduces a query-aware offset estimator that derives query-specific edits from the general steering vector, enhancing the diversity and granularity of editing. Extensive experiments on standard hallucination benchmarks across three widely adopted LVLMs validate the efficacy of the proposed AFTER, notably achieving up to a 16.3% reduction in hallucination over the baseline on the AMBER benchmark. Our code and data will be released for reproducibility.
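To make the editing scheme concrete, below is a minimal PyTorch sketch of the kind of activation edit the abstract describes: a precomputed general factual steering vector (standing in for FAS) combined with a query-conditioned offset head (standing in for QAO), added to a layer's hidden states. All class names, shapes, and the exact combination rule here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class QueryAdaptiveSteering(nn.Module):
    """Hedged sketch of factual-guided, query-adaptive activation editing.

    `v_general` stands in for a general factual steering vector (FAS);
    `offset_head` stands in for the query-aware offset estimator (QAO),
    which derives a query-specific edit from the general vector. The
    precise form of the combination is an assumption for illustration.
    """

    def __init__(self, hidden_dim: int, v_general: torch.Tensor, alpha: float = 0.1):
        super().__init__()
        self.register_buffer("v_general", v_general)           # general factual direction
        self.offset_head = nn.Linear(hidden_dim, hidden_dim)   # query-aware offset estimator
        self.alpha = alpha                                     # editing strength

    def forward(self, hidden: torch.Tensor, query_repr: torch.Tensor) -> torch.Tensor:
        # Query-specific offset, modulated by the general steering vector.
        offset = self.offset_head(query_repr) * self.v_general
        # Shift the biased activations towards factual semantics.
        return hidden + self.alpha * (self.v_general + offset)


# Toy usage: edit one layer's hidden states for a single query.
d = 4096
v = torch.randn(d)                          # stand-in for a learned factual steering vector
editor = QueryAdaptiveSteering(d, v)
hidden = torch.randn(1, 16, d)              # (batch, seq, hidden) activations at one layer
query = hidden.mean(dim=1, keepdim=True)    # crude query summary, purely illustrative
edited = editor(hidden, query)
print(edited.shape)                         # torch.Size([1, 16, 4096])
```

In this sketch the steering vector alone would apply the same correction to every query; the offset head is what makes the edit query-specific, matching the abstract's claim that QAO improves the diversity and granularity of editing over a single general vector.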