We posit that large language models (LLMs) should be capable of expressing their intrinsic uncertainty in natural language. For example, if the LLM is equally likely to output two contradictory answers to the same question, then its generated response should reflect this uncertainty by hedging its answer (e.g., "I'm not sure, but I think..."). We formalize faithful response uncertainty based on the gap between the model's intrinsic confidence in the assertions it makes and the decisiveness with which they are conveyed. This example-level metric reliably indicates whether the model reflects its uncertainty, as it penalizes both excessive and insufficient hedging. We evaluate a variety of aligned LLMs on their ability to faithfully communicate uncertainty across several knowledge-intensive question answering tasks. Our results provide strong evidence that modern LLMs are poor at faithfully conveying their uncertainty, and that better alignment is necessary to improve their trustworthiness.
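To make the gap-based definition concrete, the following is a minimal sketch of one example-level form such a metric could take; the notation is illustrative rather than the exact formulation, assuming a per-assertion decisiveness score d(a) and intrinsic confidence c(a), both in [0, 1]:

\[
\mathrm{Faithfulness}(x, y) \;=\; 1 \;-\; \frac{1}{|A(y)|} \sum_{a \in A(y)} \bigl|\, d(a) - c(a) \,\bigr|,
\]

where A(y) denotes the set of assertions in the response y to question x. Using the absolute gap penalizes deviations in both directions: excessive hedging (d(a) well below c(a)) and insufficient hedging (d(a) well above c(a)) both enlarge the gap and lower faithfulness.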