Low-resource extractive text summarization is a vital but heavily underexplored area of research. Prior work either focuses on abstractive text summarization or directly prompts a large language model (LLM) such as GPT-3 to generate summaries. In this work, we propose MixSumm for low-resource extractive text summarization. Specifically, MixSumm prompts an open-source LLM, LLaMA-3-70b, to generate documents that mix information from multiple topics, rather than documents drawn from a single topic, and then trains a summarization model on the generated dataset. We measure the quality of generated summaries with ROUGE scores and L-Eval, a reference-free LLaMA-3-based evaluation method. We conduct extensive experiments on a challenging text summarization benchmark comprising the TweetSumm, WikiHow, and ArXiv/PubMed datasets and show that our LLM-based data augmentation framework outperforms recent prompt-based approaches for low-resource extractive summarization. Our results also demonstrate effective knowledge distillation from LLaMA-3-70b to a small BERT-based extractive summarizer.
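The mixup-style generation step could be sketched as follows. This is a minimal illustration only: the function name, prompt wording, and parameters are hypothetical assumptions, not the paper's actual prompts or pipeline.

```python
# Hypothetical sketch of MixSumm-style mixup prompting.
# The prompt text and helper below are illustrative assumptions,
# not the authors' implementation.

def build_mixup_prompt(topic_a_examples, topic_b_examples, num_sentences=10):
    """Build a prompt asking an LLM to generate a document that mixes
    information from two topics, per the mixup idea in the abstract."""
    a = "\n".join(f"- {s}" for s in topic_a_examples)
    b = "\n".join(f"- {s}" for s in topic_b_examples)
    return (
        "Below are example sentences from two topics.\n"
        f"Topic A:\n{a}\n"
        f"Topic B:\n{b}\n"
        f"Write a coherent document of about {num_sentences} sentences "
        "that mixes information from both topics."
    )

prompt = build_mixup_prompt(
    ["The new phone has a 6.1-inch display."],
    ["The hiking trail closes in winter."],
)
print(prompt)
```

In practice, the returned prompt would be sent to LLaMA-3-70b, and the generated documents (with LLM-labeled summary sentences) would form the augmented training set for the small BERT-based extractive summarizer.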