Large Language Models (LLMs) have recently achieved remarkable performance in long-context understanding. However, current long-context LLM benchmarks are limited by rigid context lengths, labor-intensive annotation, and the pressing problem of label leakage during LLM training. We therefore propose \textsc{AcademicEval}, a live benchmark for evaluating LLMs on long-context generation tasks. \textsc{AcademicEval} builds on arXiv papers to introduce several academic writing tasks with long-context inputs, \textit{i.e.}, \textsc{Title}, \textsc{Abstract}, \textsc{Introduction}, and \textsc{Related Work}, which cover a wide range of abstraction levels and require no manual labeling. Moreover, \textsc{AcademicEval} integrates high-quality, expert-curated few-shot demonstrations from a collected co-author graph to enable flexible context lengths. Notably, \textsc{AcademicEval} features an efficient live evaluation protocol that ensures no label leakage. We conduct a holistic evaluation on \textsc{AcademicEval}, and the results show that LLMs perform poorly on tasks with hierarchical abstraction levels and tend to struggle with long few-shot demonstrations, highlighting the challenging nature of our benchmark. Through experimental analysis, we also offer insights into enhancing LLMs' long-context modeling capabilities. Code is available at \url{https://github.com/ulab-uiuc/AcademicEval}.
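As a minimal illustration (not the paper's actual pipeline), the sketch below shows how a live \textsc{Abstract}-generation instance could be assembled from recent arXiv submissions: the abstract is held out as the ground-truth label, and a training-cutoff filter guards against label leakage. The use of the third-party \texttt{arxiv} Python package, the category query, and the cutoff date are all assumptions made for illustration.

```python
# Illustrative sketch only: NOT the AcademicEval pipeline. Assumes the
# third-party `arxiv` package (pip install arxiv). Each recent paper is turned
# into an ABSTRACT-style task instance whose label is the held-out abstract.
import datetime
import arxiv

# Hypothetical LLM training cutoff; papers submitted after it cannot have
# leaked into pretraining data.
CUTOFF = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)


def collect_live_instances(max_results: int = 20):
    """Fetch recent cs.CL submissions and build abstract-generation instances."""
    client = arxiv.Client()
    search = arxiv.Search(
        query="cat:cs.CL",
        max_results=max_results,
        sort_by=arxiv.SortCriterion.SubmittedDate,
    )
    instances = []
    for paper in client.results(search):
        # Keep only papers newer than the assumed training cutoff.
        if paper.published <= CUTOFF:
            continue
        instances.append({
            "paper_id": paper.entry_id,
            "input": f"Title: {paper.title}\n\nWrite the abstract for this paper.",
            "label": paper.summary,  # ground-truth abstract, hidden from the model
        })
    return instances


if __name__ == "__main__":
    for inst in collect_live_instances(5):
        print(inst["paper_id"], "->", inst["label"][:80], "...")
```

In a fuller setup, the input would also include the paper body and few-shot demonstrations drawn from co-authors' papers, which is how flexible context lengths are obtained; the sketch omits these for brevity.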