Large Language Models (LLMs) have been extensively studied on many long-context tasks. However, due to high annotation costs, high-quality long-context summarization datasets for training or evaluation are scarce, limiting further research. In this work, we introduce CNNSum, a new multi-scale Chinese long-context novel summarization benchmark comprising four subsets with lengths ranging from 16k to 128k, 695 samples in total, all annotated by humans. We evaluate commercial and open-source models on CNNSum and conduct a detailed analysis. Based on these observations, we further explore fine-tuning with short-context summarization data. Our study finds: (1) GPT-4o underperforms because it produces excessive subjective commentary. (2) Long-context summarization currently relies mainly on memory ability; small LLMs with stable longer context lengths are the most cost-effective, and training on long data concatenated from short-context summaries yields a significant improvement. (3) Prompt templates may cause a large performance gap, which can be mitigated through fine-tuning. (4) Fine-tuning Chat or Instruction versions may harm performance relative to the Base model, and further fine-tuning cannot bridge the gap. (5) While models with RoPE base scaling exhibit strong extrapolation potential, their performance may vary significantly when combined with other interpolation methods, which must be selected carefully. (6) CNNSum provides more reliable and insightful evaluation results than other benchmarks. We release CNNSum to advance research in this field (https://github.com/CxsGhost/CNNSum).