In evaluating the long-context capabilities of large language models (LLMs), benchmarks such as "Needle-in-a-Haystack" (NIAH), Ruler, and Needlebench are commonly used. While these benchmarks measure how well models understand long-context input sequences, they do not effectively gauge the quality of long-form text generation, a critical requirement for applications such as design proposals and creative writing. To address this gap, we introduce a new long-form text evaluation benchmark, LongGenBench, which tests whether models can incorporate specified events within generated long text sequences. In this benchmark, we prompt long-context LMs to produce long-form text that must include particular events or constraints, and we evaluate how well they incorporate these elements. We evaluated ten long-context LMs across four distinct scenarios, three types of prompt instructions, and two generation-length settings (16K and 32K tokens). Although these models perform well on NIAH benchmarks, none achieved satisfactory performance on LongGenBench, raising concerns about their ability to generate coherent long-form text that follows instructions. Moreover, all models exhibited a significant drop in performance as the length of the generated text increased.
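To make the evaluation idea concrete, the sketch below shows one way to prompt a model with required events and then check which of them appear in its long-form output. The prompt format, function names, and the simple substring matching are illustrative assumptions for this sketch, not LongGenBench's actual prompts or scoring pipeline.

```python
# Minimal sketch of constraint-inclusion checking, assuming simple substring matching;
# this is not LongGenBench's real scoring code.
from typing import Dict, List


def build_prompt(task_description: str, required_events: List[str]) -> str:
    """Compose a generation prompt that asks the model to cover specific events."""
    constraints = "\n".join(f"- {event}" for event in required_events)
    return (
        f"{task_description}\n\n"
        f"Your response must explicitly include the following events:\n{constraints}"
    )


def score_generation(generated_text: str, required_events: List[str]) -> Dict[str, float]:
    """Report what fraction of the required events the generated text mentions."""
    text = generated_text.lower()
    hits = [event for event in required_events if event.lower() in text]
    return {
        "coverage": len(hits) / len(required_events) if required_events else 0.0,
        "approx_length_words": float(len(generated_text.split())),
    }


# Example usage (hypothetical task and events):
# prompt = build_prompt("Write a year-long project diary.", ["kickoff meeting", "budget review"])
# result = score_generation(model_output, ["kickoff meeting", "budget review"])
```

In practice a checker like this would need fuzzier matching (paraphrases, entity variants) and length accounting in tokens rather than words; the sketch only illustrates the prompt-then-verify structure of the benchmark.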