The propensity of Large Language Models (LLMs) to generate hallucinations and non-factual content undermines their reliability in high-stakes domains, where rigorous control of Type I errors (the conditional probability of incorrectly classifying hallucinations as truthful content) is essential. Despite its importance, formal verification of LLM factuality with such guarantees remains largely unexplored. In this paper, we introduce FactTest, a novel framework that statistically assesses whether an LLM can confidently provide correct answers to given questions with high-probability correctness guarantees. We formulate factuality testing as a hypothesis testing problem in order to enforce an upper bound on the Type I error at user-specified significance levels. Notably, we prove that our framework also ensures strong Type II error control under mild conditions and can be extended to remain effective under covariate shift. Our approach is distribution-free and works with any number of human-annotated samples. It is model-agnostic and applies to any black-box or white-box language model. Extensive experiments on question-answering (QA) and multiple-choice benchmarks demonstrate that \approach effectively detects hallucinations and improves the model's ability to abstain from answering unknown questions, yielding an accuracy improvement of over 40%.
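For concreteness, one natural reading of the hypothesis-testing formulation is sketched below; the notation is ours and is not taken from the abstract. Given a question $q$, the null hypothesis is that the model does not know the correct answer, and the test decides whether to answer or abstain while bounding the probability of answering under the null:
\[
\mathcal{H}_0:\ \text{the model's answer to } q \text{ would be non-factual},
\qquad
\mathcal{H}_1:\ \text{the model can answer } q \text{ correctly},
\]
\[
\Pr\bigl(\text{answer } q \mid \mathcal{H}_0\bigr) \;\le\; \alpha,
\]
where $\alpha$ is the user-specified significance level. Under this reading, the Type II error is the probability of abstaining when $\mathcal{H}_1$ holds, i.e., of withholding an answer the model could in fact provide correctly.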