Language models (LMs) have demonstrated an improved capacity to handle long-context information, yet existing long-context benchmarks primarily measure LMs' retrieval abilities over extended inputs, e.g., pinpointing a short phrase in long-form text. As a result, they may fall short when evaluating models' global context understanding capacity, such as synthesizing and reasoning over content across the input to generate a response. In this paper, we study long-context language model (LCLM) evaluation through many-shot in-context learning (ICL). Concretely, we identify the skills each ICL task requires and examine models' long-context capabilities on them. We first ask: What types of ICL tasks benefit from additional demonstrations, and are these tasks effective at evaluating LCLMs? We find that classification and summarization tasks show notable performance improvements with additional demonstrations, while translation and reasoning tasks do not exhibit clear trends; this suggests that the classification tasks predominantly test models' retrieval skills. Next, we ask: To what extent does each task require retrieval skills versus global context understanding from LCLMs? We develop metrics to categorize ICL tasks into two groups: (i) retrieval tasks, which require strong retrieval ability to pinpoint relevant examples, and (ii) global context understanding tasks, which necessitate a deeper comprehension of the full input. We find that not all datasets can effectively evaluate these long-context capabilities. To address this gap, we introduce MANYICLBENCH, a new many-shot ICL benchmark designed to characterize LCLMs' retrieval and global context understanding capabilities separately. Benchmarking 11 open-weight LCLMs on MANYICLBENCH, we find that while state-of-the-art models perform well on retrieval tasks up to 64k tokens, many show significant performance drops on global context understanding tasks at just 16k tokens.
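To make the many-shot ICL evaluation setup concrete, the following is a minimal sketch (not the paper's code) of how demonstrations can be packed into a prompt up to a target context budget, e.g., 16k versus 64k tokens. The prompt template, field names ("input", "label"), tokenizer choice, and budget values are illustrative assumptions, not taken from MANYICLBENCH.

```python
from transformers import AutoTokenizer

def build_many_shot_prompt(demos, query, tokenizer, max_tokens):
    """Concatenate as many (input, label) demonstrations as fit within max_tokens."""
    prompt = ""
    for demo in demos:
        candidate = prompt + f"Input: {demo['input']}\nLabel: {demo['label']}\n\n"
        # Stop adding demonstrations once the token budget would be exceeded.
        full = candidate + f"Input: {query}\nLabel:"
        if len(tokenizer(full)["input_ids"]) > max_tokens:
            break
        prompt = candidate
    return prompt + f"Input: {query}\nLabel:"

# Illustrative usage: the same query evaluated under growing context budgets.
# GPT-2's tokenizer is used here only because it is freely downloadable;
# in practice the target LCLM's own tokenizer would be used.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
demos = [
    {"input": "great movie", "label": "positive"},
    {"input": "waste of time", "label": "negative"},
]
for budget in (16_000, 64_000):
    prompt = build_many_shot_prompt(demos, "an instant classic", tokenizer, budget)
```

Varying the token budget in this way yields the increasing numbers of demonstrations under which model performance is compared across tasks.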