This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which incorporates longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2. The project is available at https://longbench2.github.io.
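Since all reported numbers (53.7% human, 50.1% and 57.7% model accuracy) are plain multiple-choice accuracy, a minimal sketch of that metric follows. The item schema (choice letters A-D, an "answer" field) and the trivial always-A baseline are illustrative assumptions, not the paper's actual data format or evaluation harness.

```python
# Minimal sketch of multiple-choice accuracy as used in LongBench v2-style
# evaluation. Field names ("choices", "answer") are assumed for illustration.
from typing import Callable

def accuracy(items: list[dict], model: Callable[[dict], str]) -> float:
    """Fraction of items where the model's chosen letter matches the gold answer."""
    correct = sum(1 for item in items if model(item) == item["answer"])
    return correct / len(items)

if __name__ == "__main__":
    # Two toy items standing in for the 503 real questions (hypothetical fields).
    items = [
        {"question": "...", "choices": {"A": "...", "B": "...", "C": "...", "D": "..."}, "answer": "B"},
        {"question": "...", "choices": {"A": "...", "B": "...", "C": "...", "D": "..."}, "answer": "A"},
    ]
    always_a = lambda item: "A"  # a trivial baseline that always picks choice A
    print(f"accuracy: {accuracy(items, always_a):.1%}")  # 50.0% on this toy set
```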