Large language models (LLMs) can correctly answer "When was Einstein born?" yet fail to provide the same date when writing about Einstein's life, revealing a fundamental inconsistency in how models access factual knowledge across task complexities. While models display impressive accuracy on factual question-answering benchmarks, the reliability gap between simple and complex queries remains poorly understood, eroding their trustworthiness. In this work, we introduce Short-Long Form Alignment for Factual Question Answering (SLAQ), a controlled evaluation framework that compares LLMs' answers to the same factual questions asked (a) in isolation (short) vs. (b) integrated into complex queries (long). Evaluating 16 LLMs across 600 queries, we find a systematic misalignment between answers to corresponding short and long queries. We further uncover position-dependent accuracy loss and momentum effects, where consecutive correct or incorrect answers create self-reinforcing patterns. Through mechanistic analysis, we find that aligned facts activate overlapping model internals, and that metrics based on mechanistic similarity can predict short-long answer alignment with up to 78% accuracy. Our work establishes factual consistency over query complexity as an important aspect of LLMs' trustworthiness and challenges current evaluation practices, which implicitly assume that good performance on simple factual queries implies reliability in more complex knowledge-seeking tasks as well.