Honesty alignment, the ability of large language models (LLMs) to recognize their knowledge boundaries and express calibrated confidence, is essential for trustworthy deployment. Existing methods rely either on training-free confidence estimation (e.g., token probabilities, self-consistency) or on training-based calibration with correctness annotations. While effective, achieving universal honesty alignment through training-based calibration requires costly, large-scale labeling. To enable annotation-efficient training, we introduce Elicitation-Then-Calibration (EliCal), a two-stage framework that first elicits internal confidence using inexpensive self-consistency supervision and then calibrates this confidence with a small set of correctness annotations. To support a large-scale study, we release HonestyBench, a benchmark covering ten free-form QA datasets with 560k training and 70k evaluation instances annotated with correctness and self-consistency signals. Experiments show that EliCal achieves near-optimal alignment with only 1k correctness annotations (0.18% of full supervision) and better alignment than the calibration-only baseline on unseen MMLU tasks, offering a scalable solution toward universal honesty alignment in LLMs.
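To make the two-stage idea concrete, the sketch below illustrates one way an elicit-then-calibrate pipeline could be wired up. It is a minimal, hypothetical example: the feature representations, the linear elicitor, the Platt-style calibrator, and all variable names are placeholder assumptions for illustration, not the paper's actual EliCal implementation.

```python
# Minimal sketch of an elicit-then-calibrate pipeline (illustrative assumptions only).
# Stage 1 (elicitation): fit a confidence predictor against cheap self-consistency
# targets (agreement rate among sampled answers), which need no correctness labels.
# Stage 2 (calibration): recalibrate the elicited scores on a small correctness-annotated set.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-question features (stand-ins for model-derived representations).
n_unlabeled, n_labeled, dim = 5000, 1000, 32
X_unlabeled = rng.normal(size=(n_unlabeled, dim))
X_labeled = rng.normal(size=(n_labeled, dim))

# Self-consistency targets: fraction of sampled answers agreeing with the majority answer.
sc_targets = rng.uniform(0.0, 1.0, size=n_unlabeled)

# Correctness annotations for the small labeled set (expensive, hence few).
correct_labels = rng.integers(0, 2, size=n_labeled)

# Stage 1: elicit internal confidence from self-consistency supervision.
elicitor = LinearRegression().fit(X_unlabeled, sc_targets)

# Stage 2: calibrate the elicited confidence with the small correctness-annotated set
# (a Platt-style mapping on top of the stage-1 scores).
elicited_conf = elicitor.predict(X_labeled).reshape(-1, 1)
calibrator = LogisticRegression().fit(elicited_conf, correct_labels)

def calibrated_confidence(x):
    """Map a feature vector to a calibrated probability of correctness."""
    raw = elicitor.predict(x.reshape(1, -1)).reshape(-1, 1)
    return calibrator.predict_proba(raw)[0, 1]

print(calibrated_confidence(rng.normal(size=dim)))
```

The point of the split is that stage 1 can consume abundant unlabeled supervision, so stage 2 needs only a small annotated set to correct the residual miscalibration.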