Large language models (LLMs) often make accurate next-token predictions, but their confidence in these predictions can be poorly calibrated: high-confidence predictions are frequently wrong, and low-confidence predictions may be correct. This miscalibration is exacerbated by preference-based alignment methods, which break the link between predictive probability and correctness. We introduce the Calibration-Aware Token-level Training Objective (CATTO), which aligns predicted confidence with empirical prediction correctness and can be combined with the original preference-optimization objectives. Empirically, CATTO reduces Expected Calibration Error (ECE) by 2.22%-7.61% in-distribution and 1.46%-10.44% out-of-distribution compared to direct preference optimization (DPO), and by 0.22%-1.24% in-distribution and 1.23%-5.07% out-of-distribution compared to the strongest DPO baseline. This improvement in calibration does not come at the cost of task accuracy: CATTO maintains or slightly improves multiple-choice question-answering accuracy on five datasets. We also introduce Confidence@k, a test-time scaling mechanism that leverages calibrated token probabilities for Bayes-optimal selection of output tokens.
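For reference, the ECE metric used above is typically estimated by binning predictions by confidence and taking a sample-weighted average of the gap between accuracy and mean confidence within each bin. A minimal sketch follows; the bin count and equal-width binning scheme are illustrative assumptions, not necessarily the exact settings used in the experiments.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width binned ECE: sum over bins of
    (bin fraction) * |bin accuracy - bin mean confidence|.
    n_bins=10 is an illustrative default."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Include the left edge only for the first bin so 0.0 is not dropped.
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            mask |= confidences == 0.0
        if not mask.any():
            continue
        acc = correct[mask].mean()
        conf = confidences[mask].mean()
        ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated batch (e.g. 90% confidence, 9 of 10 correct) yields an ECE of 0, while systematic overconfidence inflates it toward the confidence-accuracy gap.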