Developing Large Language Models (LLMs) with robust long-context capabilities has been a recent research focus, resulting in the emergence of long-context LLMs proficient in Chinese. However, the evaluation of these models remains underdeveloped due to a lack of benchmarks. To address this gap, we present CLongEval, a comprehensive Chinese benchmark for evaluating long-context LLMs. CLongEval is characterized by three key features: (1) Sufficient data volume, comprising 7 distinct tasks and 7,267 examples; (2) Broad applicability, accommodating models with context window sizes from 1K to 100K; (3) High quality, with over 2,000 manually annotated question-answer pairs in addition to the automatically constructed labels. With CLongEval, we undertake a comprehensive assessment of 6 open-source long-context LLMs and 2 leading commercial counterparts that feature both long-context abilities and proficiency in Chinese. We also provide an in-depth analysis based on the empirical results, aiming to shed light on the critical capabilities that present challenges in long-context settings. The dataset, evaluation scripts, and model outputs will be released.