As Large Language Models (LLMs) are increasingly deployed in real-world settings, correctness alone is insufficient. Reliable deployment requires maintaining truthful beliefs under contextual perturbations. Existing evaluations largely rely on point-wise confidence measures such as Self-Consistency, which can mask brittle beliefs. We show that even facts answered with perfect self-consistency can rapidly collapse under mild contextual interference. To address this gap, we propose Neighbor-Consistency Belief (NCB), a structural measure of belief robustness that evaluates response coherence across a conceptual neighborhood. To validate the effectiveness of NCB, we introduce a new cognitive stress-testing protocol that probes output stability under contextual interference. Experiments across multiple LLMs show that performance on high-NCB data is more resistant to interference. Finally, we present Structure-Aware Training (SAT), which optimizes for a context-invariant belief structure and reduces long-tail knowledge brittleness by approximately 30%. Code will be available at https://github.com/zjunlp/belief.