Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4-generated summaries of 100 recently published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models. While LLaMA 2 falls behind other models, Mixtral achieves performance on par with GPT-3.5-Turbo. Incremental updating yields a lower BooookScore but a higher level of detail than hierarchical merging, a trade-off sometimes preferred by annotators.
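A minimal formalization of the metric as it is described above (the notation is ours, not taken from the paper): letting S denote the set of sentences in a generated summary,

\[
\mathrm{BooookScore}(S) \;=\; \frac{1}{|S|} \sum_{s \in S} \mathbf{1}\bigl[\, s \text{ contains none of the identified coherence error types} \,\bigr].
\]

A summary whose sentences are all judged free of the eight error types thus receives a score of 1, and each flagged sentence lowers the score proportionally.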
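To make the two prompting workflows concrete, the sketch below illustrates one plausible structure for each, assuming a hypothetical llm callable that sends a prompt to a model and returns text; the prompt wording and function names are illustrative and are not the paper's actual prompts or implementation.

def hierarchical_merge(chunks, llm):
    """Summarize each chunk, then repeatedly merge pairs of summaries into higher-level ones."""
    summaries = [llm(f"Summarize this passage:\n{c}") for c in chunks]
    while len(summaries) > 1:
        merged = []
        for i in range(0, len(summaries), 2):
            pair = summaries[i:i + 2]
            if len(pair) == 2:
                merged.append(llm("Merge these two summaries into one coherent summary:\n"
                                  f"1) {pair[0]}\n2) {pair[1]}"))
            else:
                merged.append(pair[0])  # odd summary carries over to the next level
        summaries = merged
    return summaries[0]

def incremental_update(chunks, llm):
    """Maintain a running summary, updating and compressing it as each new chunk arrives."""
    summary = ""
    for chunk in chunks:
        summary = llm("Update and compress the running summary so that it also covers the new passage.\n"
                      f"Running summary: {summary}\nNew passage: {chunk}")
    return summary

Hierarchical merging processes chunks independently before combining them, while incremental updating threads a single evolving summary through the whole book, which is consistent with the detail-versus-coherence trade-off reported above.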