When large language models (LLMs) are used for knowledge-intensive tasks such as open-domain question answering, external context can bridge the gap between external knowledge and the LLM's parametric knowledge. Recent work has applied contrastive decoding to amplify contextual knowledge over the LLM's parametric knowledge. While these approaches can yield truthful responses when relevant context is provided, they are vulnerable to noisy contexts. We extend the scope of previous studies to encompass noisy contexts and propose adaptive contrastive decoding (ACD), which leverages contextual influence adaptively. ACD improves over baselines on open-domain question answering tasks and is notably more robust, remaining undistracted by noisy contexts in retrieval-augmented generation.
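To make the mechanism concrete, the sketch below illustrates the generic contrastive-decoding combination the abstract refers to: next-token logits computed with the retrieved context are contrasted against logits computed from the query alone. The `adaptive_alpha` heuristic is a hypothetical illustration of adapting the contrastive weight to how much the context shifts the output distribution; it is not the paper's actual ACD formula, which the abstract does not specify.

```python
import torch
import torch.nn.functional as F


def contrastive_logits(logits_ctx: torch.Tensor,
                       logits_plain: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Generic contrastive-decoding combination:
        (1 + alpha) * logits(y | context, query) - alpha * logits(y | query)
    Boosts tokens the context supports and suppresses tokens driven
    purely by parametric knowledge."""
    return (1.0 + alpha) * logits_ctx - alpha * logits_plain


def adaptive_alpha(logits_ctx: torch.Tensor,
                   logits_plain: torch.Tensor,
                   max_alpha: float = 1.0) -> float:
    """Hypothetical adaptive weight (illustration only, not the paper's ACD):
    trust the context more when it shifts the output distribution away from
    the parametric one, measured by Jensen-Shannon divergence."""
    p = F.softmax(logits_ctx, dim=-1)
    q = F.softmax(logits_plain, dim=-1)
    m = 0.5 * (p + q)
    # JSD = 0.5 * KL(p || m) + 0.5 * KL(q || m); F.kl_div takes log-probs first.
    jsd = 0.5 * (F.kl_div(m.log(), p, reduction="sum")
                 + F.kl_div(m.log(), q, reduction="sum"))
    return max_alpha * torch.tanh(jsd).item()


if __name__ == "__main__":
    # Toy demo with random logits standing in for a model's two forward passes.
    vocab = 32000
    logits_ctx = torch.randn(vocab)    # next-token logits given query + retrieved context
    logits_plain = torch.randn(vocab)  # next-token logits given the query alone
    alpha = adaptive_alpha(logits_ctx, logits_plain)
    next_token = torch.argmax(contrastive_logits(logits_ctx, logits_plain, alpha))
    print(f"alpha={alpha:.3f}, next_token={next_token.item()}")
```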