Large language models (LLMs) have demonstrated remarkable progress in understanding long-context inputs. However, benchmarks for evaluating the long-context reasoning abilities of LLMs have not kept pace. Existing benchmarks often focus on a narrow range of tasks or on tasks that do not demand complex reasoning. To address this gap and enable a more comprehensive evaluation of the long-context reasoning capabilities of current LLMs, we propose a new synthetic benchmark, LongReason, which is constructed by synthesizing long-context reasoning questions from a varied set of short-context reasoning questions through context expansion. LongReason consists of 794 multiple-choice reasoning questions with diverse reasoning patterns across three task categories: reading comprehension, logical inference, and mathematical word problems. We evaluate 21 LLMs on LongReason and find that most models suffer significant performance drops as context length increases. Our further analysis shows that even state-of-the-art LLMs still have significant room for improvement in reasoning robustly across different tasks. We have open-sourced LongReason at https://huggingface.co/datasets/lz1bytedance/LongReason to support the comprehensive evaluation of LLMs' long-context reasoning capabilities.