While large language models (LLMs) have grown increasingly powerful, they have also drawn attention to their hallucinated outputs, which deviate from factually correct statements. In this paper, we focus on one important scenario, false premises, where LLMs are distracted by misaligned claims even though the model possesses the factual knowledge required to answer the original questions accurately. Inspired by the observation that the entropy of a false-premise prompt is closely related to how likely it is to elicit hallucinations, we propose a new prompting algorithm, named DecoPrompt, to mitigate hallucination. DecoPrompt leverages LLMs to "decode" the false-premise prompts without actually eliciting hallucinated output from the LLMs. We perform experiments on two datasets, demonstrating that DecoPrompt effectively reduces hallucinations in the outputs of different LLMs. Moreover, DecoPrompt exhibits cross-model transferability, which facilitates its application to scenarios such as LLMs of large sizes or LLMs whose logits are unavailable.
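To make the entropy signal concrete, the following is a minimal sketch of scoring a prompt under a causal LM. It assumes "entropy" is measured as the average token-level negative log-likelihood of the prompt (a log-perplexity-style quantity); the paper's exact definition may differ, and the model name and example prompts are purely illustrative.

```python
# Sketch: estimate a prompt's entropy as its average negative log-likelihood
# under a causal LM. Assumption: this NLL-based proxy stands in for the
# entropy measure described in the abstract; it is not the official
# DecoPrompt implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM with accessible logits works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def prompt_entropy(prompt: str) -> float:
    """Average negative log-likelihood (nats per token) of `prompt`."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Passing labels=ids makes the model return the mean cross-entropy
    # over the shifted next-token predictions for the prompt itself.
    loss = model(ids, labels=ids).loss
    return loss.item()

# Illustrative comparison: a false-premise question vs. a factual rephrasing.
false_premise = "Why did Einstein fail his mathematics classes in school?"
factual = "Did Einstein fail his mathematics classes in school?"
print(prompt_entropy(false_premise), prompt_entropy(factual))
```

Under the abstract's observation, prompts with higher entropy scores of this kind would be the ones more likely to elicit hallucinations, which is the signal DecoPrompt exploits when "decoding" false-premise prompts.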