Accurately gauging the confidence of Large Language Model (LLM) predictions is pivotal for their reliable application. However, LLMs are often inherently miscalibrated and elude conventional calibration techniques due to their proprietary nature and massive scale. In this work, we explore the potential of deriving confidence from the distribution of multiple randomly sampled model generations, via three measures of consistency. We perform an extensive evaluation across various open- and closed-source models on nine reasoning datasets. Results show that consistency-based calibration methods outperform existing post-hoc approaches. We also find that factors such as intermediate explanations, model scaling, and larger sample sizes enhance calibration, whereas instruction tuning makes calibration more difficult. Moreover, confidence scores obtained from consistency have the potential to enhance model performance. Finally, we offer practical guidance on choosing suitable consistency metrics for calibration, tailored to the characteristics of different LLMs.
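To make the core idea concrete, the sketch below illustrates one natural consistency-based confidence estimate: sample several generations for the same question, extract a final answer from each, and score confidence as the fraction of samples agreeing with the majority answer. This is a minimal, hypothetical illustration of agreement-style consistency, not necessarily the exact formulation of the three measures studied in the paper; the function name and example data are assumptions.

```python
from collections import Counter

def agreement_confidence(sampled_answers):
    """Estimate confidence as the fraction of sampled generations that
    agree with the most frequent (majority-vote) answer.

    sampled_answers: final answers extracted from N independent,
    temperature-sampled generations for the same question.
    """
    counts = Counter(sampled_answers)
    majority_answer, majority_count = counts.most_common(1)[0]
    confidence = majority_count / len(sampled_answers)
    return majority_answer, confidence

# Example: 10 sampled answers to one reasoning question
samples = ["42", "42", "41", "42", "42", "42", "43", "42", "42", "42"]
answer, conf = agreement_confidence(samples)
print(answer, conf)  # -> "42", 0.8
```

In this view, high agreement across random samples signals high confidence, and the resulting score can be evaluated for calibration (e.g., whether 80%-confidence predictions are correct about 80% of the time).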