We study 14 large language models (LLMs) fine-tuned for chat and find that their maximum softmax probabilities (MSPs) are consistently miscalibrated on multiple-choice Q&A. However, those MSPs might still encode useful uncertainty information. Specifically, we hypothesized that wrong answers would be associated with smaller MSPs compared to correct answers. Via rigorous statistical testing, we show that this hypothesis holds for models that perform well on the underlying Q&A task. We also find a strong direct correlation between Q&A accuracy and MSP correctness prediction, while finding no correlation between Q&A accuracy and calibration error. This suggests that within the current fine-tuning paradigm, we can expect correctness prediction but not calibration to improve as LLM capabilities progress. To demonstrate the utility of correctness prediction, we show that when models have the option to abstain, performance can be improved by selectively abstaining based on the MSP of the initial model response, using only a small amount of labeled data to choose the MSP threshold.
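To make the abstention procedure concrete, the following is a minimal sketch, not the paper's released code: it computes the MSP from answer-choice logits, tunes a threshold on a small labeled split, and abstains below that threshold. The scoring rule (+1 for a correct answer, -1 for a wrong answer, 0 for an abstention) and all names are illustrative assumptions.

```python
# Hedged sketch of MSP-based selective abstention; scoring rule and
# function names are illustrative, not the paper's implementation.
import numpy as np

def max_softmax_prob(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) per example from answer-choice logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize exp
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def choose_threshold(msp: np.ndarray, correct: np.ndarray) -> float:
    """Pick the MSP threshold maximizing reward on a small labeled set,
    assuming +1 per correct answer, -1 per wrong answer, 0 per abstention."""
    candidates = np.concatenate([np.unique(msp), [np.inf]])  # inf = abstain on all
    best_t, best_reward = 0.0, -np.inf
    for t in candidates:
        answered = msp >= t
        reward = np.where(correct[answered], 1, -1).sum()  # abstentions add 0
        if reward > best_reward:
            best_t, best_reward = t, reward
    return best_t

def answer_or_abstain(msp: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask: True where the model answers, False where it abstains."""
    return msp >= threshold

# Toy usage with placeholder data (200 questions, 4 answer choices)
rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 4))
correct = rng.random(200) < 0.6               # placeholder correctness labels
msp = max_softmax_prob(logits)
t = choose_threshold(msp[:50], correct[:50])  # small labeled split for tuning
mask = answer_or_abstain(msp[50:], t)         # apply to the remaining questions
```

The design choice here mirrors the abstract's claim: no recalibration of the MSP is needed, only a single threshold fit on a handful of labeled examples, since the MSP already ranks wrong answers below correct ones for capable models.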