Recent advancements in large language models (LLMs) capable of processing extremely long texts highlight the need for a dedicated evaluation benchmark to assess their long-context capabilities. However, existing methods, like the needle-in-a-haystack test, do not effectively assess whether these models fully utilize contextual information, raising concerns about the reliability of current evaluation techniques. To thoroughly examine the effectiveness of existing benchmarks, we introduce a new metric called information coverage (IC), which quantifies the proportion of the input context necessary for answering queries. Our findings indicate that current benchmarks exhibit low IC; although the input context may be extensive, the actual usable context is often limited. To address this, we present ETHIC, a novel benchmark designed to assess LLMs' ability to leverage the entire context. Our benchmark comprises 2,648 test instances spanning four long-context tasks with high IC scores in the domains of books, debates, medicine, and law. Our evaluations reveal significant performance drops in contemporary LLMs, highlighting a critical challenge in managing long contexts. Our benchmark is available at https://github.com/dmis-lab/ETHIC.
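The abstract defines information coverage (IC) as the proportion of the input context necessary for answering a query. The sketch below illustrates one plausible way to compute such a ratio, assuming the query-relevant evidence is represented as character-offset spans in the context; the function name, span representation, and merging logic are illustrative assumptions, not the paper's exact formulation.

```python
def information_coverage(required_spans, context_length):
    """Hypothetical IC computation: the fraction of the input context
    covered by the spans a query's answer depends on.

    required_spans: list of (start, end) character offsets into the
        context (an assumed representation of query-relevant evidence).
    context_length: total length of the input context in characters.
    """
    covered = 0
    last_end = 0
    # Sort spans and merge overlaps so shared characters count once.
    for start, end in sorted(required_spans):
        start = max(start, last_end)
        if end > start:
            covered += end - start
            last_end = end
    return covered / context_length


# Under this sketch, a benchmark instance whose answer depends on only
# a small slice of a long document would score a low IC, while a task
# requiring evidence spread across the whole document scores near 1.0.
print(information_coverage([(0, 10), (5, 20), (30, 40)], 100))
```

A low IC under this kind of measure would indicate that most of the nominally long input is irrelevant to the query, which is the failure mode of existing benchmarks the abstract describes.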