Large Language Models (LLMs) have been extensively studied on various long-context tasks. However, the scarcity of high-quality long-context summarization datasets has hindered further advancement in this area. To address this, we introduce CNNSum, a multi-scale long-context summarization benchmark based on Chinese novels with human-driven annotations, comprising four subsets totaling 695 samples with lengths ranging from 16k to 128k. We evaluate numerous LLMs and conduct detailed case analyses. Furthermore, we perform extensive fine-tuning experiments to explore and improve long-context summarization. Our study shows: (1) Advanced LLMs such as GPT-4o may still generate subjective commentary, leading to vague summaries. (2) Currently, long-context summarization relies mainly on the memory capacity afforded by longer context lengths; the advantages of larger LLMs are difficult to exploit, so small LLMs are the most cost-effective. (3) Different prompt templates paired with different model versions can produce large performance gaps; further fine-tuning mitigates these gaps, and Base-version models perform better. (4) LLMs with a scaled RoPE base exhibit strong extrapolation potential, and fine-tuning on short-context data can significantly improve long-context summarization performance; however, applying additional interpolation methods on top of this requires careful selection. (5) CNNSum provides more reliable and insightful evaluation results than other benchmarks. We release CNNSum to advance future research in this field: https://github.com/CxsGhost/CNNSum