Large language models (LLMs) have revolutionized the field of natural language processing with their impressive reasoning and question-answering capabilities. However, these models sometimes generate credible-sounding but incorrect information, a phenomenon known as LLM hallucination. Reliable uncertainty estimation in LLMs is essential for fostering trust in their generated responses and serves as a critical tool for detecting and preventing erroneous or hallucinated outputs. To achieve reliable and well-calibrated uncertainty quantification in open-ended, free-form natural language generation, we propose an uncertainty-aware fine-tuning approach for LLMs. This approach enhances the model's ability to provide reliable uncertainty estimates without compromising accuracy, thereby guiding it to produce more trustworthy responses. We introduce a novel uncertainty-aware causal language modeling loss function grounded in the principles of decision theory. Through rigorous evaluation on multiple free-form question-answering datasets and models, we demonstrate that our uncertainty-aware fine-tuning approach yields better-calibrated uncertainty estimates in natural language generation tasks than fine-tuning with the standard causal language modeling loss. Furthermore, the experimental results show that the proposed method significantly improves the model's ability to detect hallucinations and identify out-of-domain prompts.
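The abstract does not specify the form of the uncertainty-aware causal language modeling loss. As a purely illustrative sketch of what such an objective could look like, the snippet below combines standard next-token cross-entropy with a Brier-score calibration penalty, a proper scoring rule consistent with a decision-theoretic view; the function name `uncertainty_aware_clm_loss` and the `calib_weight` hyperparameter are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch only: NOT the authors' loss, which is not given here.
# Combines standard causal LM cross-entropy with a hypothetical Brier-style
# calibration term that rewards well-calibrated token probabilities.
import torch
import torch.nn.functional as F


def uncertainty_aware_clm_loss(logits, labels, calib_weight=0.1):
    """Cross-entropy plus a Brier-score calibration penalty (illustrative).

    logits: (batch, seq_len, vocab) raw model outputs
    labels: (batch, seq_len) target token ids, with -100 marking ignored positions
    """
    vocab = logits.size(-1)
    # Shift so position t predicts token t+1, as in standard causal LM training.
    shift_logits = logits[:, :-1, :].reshape(-1, vocab)
    shift_labels = labels[:, 1:].reshape(-1)
    mask = shift_labels != -100

    # Standard causal language modeling objective.
    ce = F.cross_entropy(shift_logits[mask], shift_labels[mask])

    # Hypothetical calibration penalty: the Brier score between the predicted
    # distribution and the one-hot target is minimized by calibrated
    # probabilities, not merely by a correct argmax.
    probs = shift_logits[mask].softmax(dim=-1)
    one_hot = F.one_hot(shift_labels[mask], num_classes=vocab).float()
    brier = ((probs - one_hot) ** 2).sum(dim=-1).mean()

    return ce + calib_weight * brier
```

In this sketch, `calib_weight` trades off raw predictive accuracy against calibration pressure; any such penalty could in principle be swapped for another proper scoring rule without changing the overall structure of the objective.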