Grounding responses in external knowledge represents an effective strategy for mitigating hallucinations in Large Language Models (LLMs). However, current LLMs struggle to seamlessly integrate knowledge while simultaneously maintaining faithfulness (or fidelity) and expressiveness, capabilities that humans naturally possess. This limitation results in outputs that either lack support from external knowledge, thereby compromising faithfulness, or appear overly verbose and unnatural, thus sacrificing expressiveness. In this work, to break the trade-off between faithfulness and expressiveness, we propose Collaborative Decoding (CoDe), a novel approach that dynamically integrates output probabilities generated with and without external knowledge. This integration is guided by distribution divergence and model confidence, enabling the selective activation of relevant and reliable expressions from the model's internal parameters. Furthermore, we introduce a knowledge-aware reranking mechanism that prevents over-reliance on prior parametric knowledge while ensuring proper utilization of provided external information. Through comprehensive experiments, our plug-and-play CoDe framework demonstrates superior performance in enhancing faithfulness without compromising expressiveness across diverse LLMs and evaluation metrics, validating both its effectiveness and generalizability.
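The core idea of blending the output distributions produced with and without external knowledge can be illustrated with a minimal sketch. The gating rule below (Jensen-Shannon divergence plus parametric confidence driving a mixing weight `lam`) is a hypothetical illustration of the mechanism the abstract describes, not the paper's actual formulation; all function names and the specific weighting scheme are assumptions for demonstration.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def collaborative_decode(p_with, p_without):
    """Blend next-token distributions generated with and without
    external knowledge. The gate `lam` is an assumed heuristic:
    larger divergence favors the knowledge-grounded distribution,
    while higher parametric confidence lets the model's internal
    (expressive) distribution contribute more."""
    p_with = np.asarray(p_with, dtype=float)
    p_without = np.asarray(p_without, dtype=float)
    div = js_divergence(p_with, p_without)   # distribution divergence
    conf = float(np.max(p_without))          # model confidence (parametric)
    lam = div / (div + conf + 1e-12)         # hypothetical gating rule
    mixed = lam * p_with + (1.0 - lam) * p_without
    return mixed / mixed.sum()               # renormalize to a distribution
```

When the two distributions agree, the divergence is zero and the sketch falls back entirely on the parametric distribution, preserving the model's natural expressions; when they disagree sharply, the knowledge-grounded distribution dominates, which mirrors the faithfulness-versus-expressiveness balance described above.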