We explore uncertainty quantification in large language models (LLMs), with the goal of identifying when the uncertainty in responses to a given query is large. We simultaneously consider both epistemic and aleatoric uncertainties, where the former comes from a lack of knowledge about the ground truth (such as about facts or the language), and the latter comes from irreducible randomness (such as multiple possible answers). In particular, we derive an information-theoretic metric that allows one to reliably detect when only epistemic uncertainty is large, in which case the output of the model is unreliable. This condition can be computed based solely on the output of the model, obtained by a special iterative prompting procedure that conditions on previous responses. Such quantification, for instance, allows one to detect hallucinations (cases where epistemic uncertainty is high) in both single- and multi-answer responses. This is in contrast to many standard uncertainty quantification strategies (such as thresholding the log-likelihood of a response), where hallucinations in the multi-answer case cannot be detected. We conduct a series of experiments that demonstrate the advantage of our formulation. Further, our investigations shed some light on how the probabilities assigned to a given output by an LLM can be amplified by iterative prompting, which might be of independent interest.
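To make the epistemic/aleatoric distinction concrete, the sketch below uses the standard mutual-information decomposition of predictive entropy over repeated queries. This is an illustrative assumption, not the paper's exact metric; the function `decompose_uncertainty` and its toy answer distributions are hypothetical stand-ins for distributions one might obtain by re-querying a model while conditioning the prompt on previously sampled responses.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def decompose_uncertainty(dists):
    """Split total predictive uncertainty into aleatoric and epistemic parts.

    dists: list of probability distributions over candidate answers, e.g.
    one distribution per iterative-prompting round. (Hypothetical input
    format chosen for illustration.)
    """
    k = len(dists[0])
    mean = [sum(d[i] for d in dists) / len(dists) for i in range(k)]
    total = entropy(mean)                              # entropy of averaged prediction
    aleatoric = sum(entropy(d) for d in dists) / len(dists)  # mean per-round entropy
    epistemic = total - aleatoric                      # mutual-information term
    return total, aleatoric, epistemic

# Multi-answer query: every round agrees on the same spread over two valid
# answers -> high aleatoric uncertainty, near-zero epistemic (reliable).
t1, a1, e1 = decompose_uncertainty([[0.5, 0.5, 0.0], [0.5, 0.5, 0.0]])

# Confabulating model: rounds disagree sharply about the answer ->
# the epistemic term dominates, flagging an unreliable output.
t2, a2, e2 = decompose_uncertainty([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05]])
```

In this toy decomposition, only the second case would be flagged as a hallucination: its responses are individually confident but mutually inconsistent, so the epistemic term is large, whereas the multi-answer case has high total uncertainty that is entirely aleatoric.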