Large language models are increasingly relied upon as sources of information, but their propensity for generating false or misleading statements with high confidence poses risks to users and society. In this paper, we confront the critical problem of epistemic miscalibration: the gap that arises when a model's linguistic assertiveness fails to reflect its true internal certainty. We introduce a new human-labeled dataset and a novel method for measuring the linguistic assertiveness of Large Language Models (LLMs) that cuts error rates by over 50% relative to previous benchmarks. Validated across multiple datasets, our method reveals a stark misalignment between how confidently models present information and how accurate that information actually is. Further human evaluations confirm the severity of this miscalibration. This evidence underscores the urgent risk that the overstated certainty of LLMs may mislead users on a massive scale. Our framework provides a crucial step toward diagnosing this miscalibration, offering a path to correcting it and to more trustworthy AI across domains.