We explore uncertainty quantification in large language models (LLMs), with the goal of identifying when the uncertainty in responses to a given query is large. We simultaneously consider both epistemic and aleatoric uncertainties, where the former comes from a lack of knowledge about the ground truth (such as about facts or the language), and the latter comes from irreducible randomness (such as multiple possible answers). In particular, we derive an information-theoretic metric that allows one to reliably detect when only epistemic uncertainty is large, in which case the output of the model is unreliable. This condition can be computed based solely on the output of the model, obtained through a special iterative prompting procedure based on the previous responses. Such quantification, for instance, makes it possible to detect hallucinations (cases when epistemic uncertainty is high) in both single- and multi-answer responses. This is in contrast to many standard uncertainty quantification strategies (such as thresholding the log-likelihood of a response), where hallucinations in the multi-answer case cannot be detected. We conduct a series of experiments that demonstrate the advantage of our formulation. Further, our investigations shed some light on how the probabilities assigned to a given output by an LLM can be amplified by iterative prompting, which might be of independent interest.
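For concreteness, here is a minimal sketch of how an iterative-prompting score of this kind could be computed. It assumes a hypothetical `logprob(prompt, response)` interface returning log P(response | prompt) and a small candidate-answer set; the score is the mutual information of a pseudo-joint distribution whose second factor is obtained by re-prompting the model with a previous response appended. This is an illustration of the general idea under these assumptions, not the paper's exact estimator.

```python
import math
from itertools import product

def mi_epistemic_score(logprob, query, candidates):
    """Hedged sketch: a mutual-information-style score between two
    consecutive responses under iterative prompting.

    logprob(prompt, response) -> log P(response | prompt)  (hypothetical API)
    candidates: a small set of candidate answer strings for the query.
    """
    # Log-probabilities of each candidate for the plain query.
    logp1 = {y: logprob(query, y) for y in candidates}

    # Pseudo-joint: P(y1 | query) * P(y2 | query with y1 appended), where
    # the second factor comes from re-prompting with the first response.
    joint = {}
    for y1, y2 in product(candidates, repeat=2):
        follow_up = f"{query}\nOne possible answer is: {y1}\n{query}"
        joint[(y1, y2)] = math.exp(logp1[y1] + logprob(follow_up, y2))

    # Normalize, since the candidate set may not cover all responses.
    z = sum(joint.values())
    joint = {k: v / z for k, v in joint.items()}

    # Marginals of the pseudo-joint.
    p1 = {y1: sum(joint[(y1, y2)] for y2 in candidates) for y1 in candidates}
    p2 = {y2: sum(joint[(y1, y2)] for y1 in candidates) for y2 in candidates}

    # Mutual information of the pseudo-joint: a large value means iterative
    # prompting strongly amplifies some answers, i.e., consecutive responses
    # are far from independent -- the signature of epistemic (rather than
    # aleatoric) uncertainty described in the abstract.
    mi = 0.0
    for (y1, y2), p in joint.items():
        if p > 0:
            mi += p * math.log(p / (p1[y1] * p2[y2]))
    return mi
```

Thresholding this score then flags queries whose responses are unreliable: a purely aleatoric multi-answer query yields a near-product pseudo-joint (score near zero), whereas amplification under iterative prompting drives the score up.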