Large language models (LLMs) can generate fluent summaries across domains using prompting techniques, reducing the need to train dedicated models for summarization applications. However, crafting effective prompts that guide LLMs to generate summaries with the appropriate level of detail and writing style remains a challenge. In this paper, we explore the use of salient information extracted from the source document to enhance summarization prompts. We show that adding keyphrases to prompts can improve ROUGE F1 and recall, making the generated summaries more similar to the references and more complete. The number of keyphrases controls the precision-recall trade-off. Furthermore, our analysis reveals that incorporating phrase-level salient information is superior to word- or sentence-level information. However, the impact on hallucination is not universally positive across LLMs. To conduct this analysis, we introduce Keyphrase Signal Extractor (CriSPO), a lightweight model that can be finetuned to extract salient keyphrases. By using CriSPO, we achieve consistent ROUGE improvements across datasets and across open-weight and proprietary LLMs without any LLM customization. Our findings provide insights into leveraging salient information when building prompt-based summarization systems.
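The prompt-augmentation idea above can be sketched minimally as follows. This is an illustrative template, not the paper's exact prompt wording, and it assumes the salient keyphrases have already been extracted (e.g. by a model such as CriSPO) and are passed in ranked by salience; the parameter `k` corresponds to the number of keyphrases that controls the precision-recall trade-off.

```python
def build_keyphrase_prompt(document: str, keyphrases: list[str], k: int = 5) -> str:
    """Build a summarization prompt augmented with the top-k salient keyphrases.

    `keyphrases` is assumed to be ranked by salience; a larger `k` pushes the
    summary toward higher recall (more complete), a smaller `k` toward higher
    precision. The template text itself is a hypothetical example.
    """
    selected = keyphrases[:k]
    phrase_list = ", ".join(selected)
    return (
        "Summarize the following document. "
        f"Make sure the summary covers these key phrases: {phrase_list}.\n\n"
        f"Document:\n{document}\n\nSummary:"
    )


# Example usage with placeholder inputs:
prompt = build_keyphrase_prompt(
    "The city council approved the new transit budget on Tuesday...",
    ["transit budget", "city council", "Tuesday vote", "fare increase"],
    k=2,
)
```

The resulting string would then be sent to any LLM without fine-tuning, which is how the approach achieves gains across both open-weight and proprietary models.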