Large Language Models (LLMs) have been extensively studied on many long-context tasks. However, due to high annotation costs, high-quality long-context summarization datasets for training or evaluation remain scarce, limiting further research. In this work, we introduce CNNSum, a new multi-scale Chinese long-context novel summarization benchmark comprising four subsets with lengths covering 16k\textasciitilde128k and 695 samples in total, all annotated through a human-driven process. We evaluate commercial and open-source models on CNNSum and conduct a detailed analysis. Based on these observations, we further explore fine-tuning with short-context summary data. Our study finds: (1) GPT-4o underperforms because it generates excessive subjective commentary. (2) Long-context summarization currently relies mainly on memory ability; small LLMs with stable, longer context lengths are the most cost-effective, and training on long data concatenated from short-context summaries yields significant improvement. (3) Prompt templates may cause a large performance gap, which can be mitigated through fine-tuning. (4) Fine-tuned Chat or Instruction versions may harm the Base model, and further fine-tuning cannot bridge the performance gap. (5) While models with RoPE base scaling exhibit strong extrapolation potential, their performance may vary significantly when combined with other interpolation methods and requires careful selection. (6) CNNSum provides more reliable and insightful evaluation results than other benchmarks. We release CNNSum to advance research in this field.