We introduce SealQA, a new challenge benchmark for evaluating SEarch-Augmented Language models on fact-seeking questions where web search yields conflicting, noisy, or unhelpful results. SealQA comes in three flavors: (1) Seal-0 (main) and (2) Seal-Hard, which assess factual accuracy and reasoning capabilities, with Seal-0 focusing on the most challenging questions, where chat models (e.g., GPT-4.1) typically achieve near-zero accuracy; and (3) LongSeal, which extends SealQA to test long-context, multi-document reasoning in "needle-in-a-haystack" settings. Our evaluation reveals critical limitations in current models: even frontier LLMs perform poorly across all SealQA flavors. On Seal-0, frontier agentic models equipped with tools, such as o3 and o4-mini, achieve only 17.1% and 6.3% accuracy, respectively, even at their highest reasoning effort. We find that advanced reasoning models such as DeepSeek-R1-671B and o3-mini are highly vulnerable to noisy search results. Notably, increasing test-time compute does not yield reliable gains across o3-mini, o4-mini, and o3; performance often plateaus, or even declines, early. Additionally, while recent models are less affected by the "lost-in-the-middle" issue, they still fail to reliably identify relevant documents in LongSeal when faced with numerous distractors. To facilitate future work, we release SealQA at huggingface.co/datasets/vtllms/sealqa.