Despite their remarkable capabilities, Large Language Models (LLMs) are prone to generating responses that contradict verifiable facts, i.e., unfaithful hallucinated content. Existing efforts generally focus on optimizing model parameters or editing semantic representations, which can compromise the internal factual knowledge of the target LLMs. Moreover, hallucinations typically exhibit multifaceted patterns across downstream tasks, limiting a model's holistic performance. In this paper, we propose a Comparator-driven Decoding-Time (CDT) framework to alleviate response hallucination. First, we construct hallucinatory and truthful comparators from multi-task fine-tuning samples. We further present an instruction prototype-guided mixture-of-experts strategy that strengthens each comparator's ability to capture the distinct hallucination or truthfulness patterns elicited by different task instructions. CDT then constrains next-token predictions to factuality-robust distributions by contrasting the logit differences between the target LLM and these comparators. Systematic experiments on multiple downstream tasks show that our framework significantly improves both model performance and response factuality.
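The abstract does not spell out how the comparator logits are combined at each decoding step. As a minimal sketch, assuming the contrast takes the common contrastive-decoding form of steering the target distribution toward the truthful comparator and away from the hallucinatory one, the per-step adjustment might look like the following; `alpha` is a hypothetical contrast-strength hyperparameter, and all function and variable names here are illustrative rather than the paper's own.

```python
import torch
import torch.nn.functional as F

def cdt_next_token_logits(
    logits_target: torch.Tensor,    # target LLM logits, shape (vocab_size,)
    logits_truthful: torch.Tensor,  # truthful comparator logits, same shape
    logits_hallu: torch.Tensor,     # hallucinatory comparator logits, same shape
    alpha: float = 1.0,             # hypothetical contrast strength (assumption)
) -> torch.Tensor:
    """Shift the target logits toward the truthful comparator and away
    from the hallucinatory one by adding their logit difference."""
    return logits_target + alpha * (logits_truthful - logits_hallu)

# Usage example with stand-in logits over a toy vocabulary.
torch.manual_seed(0)
vocab_size = 8
lt = torch.randn(vocab_size)    # stand-in target LLM logits
ltr = torch.randn(vocab_size)   # stand-in truthful-comparator logits
lh = torch.randn(vocab_size)    # stand-in hallucinatory-comparator logits

probs = F.softmax(cdt_next_token_logits(lt, ltr, lh), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```

In this form the base model's weights are never touched: the comparators only reshape the next-token distribution at decoding time, which is consistent with the abstract's claim that the framework avoids compromising the target LLM's internal factual knowledge.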