Neural abstractive summarization models generate summaries in an end-to-end manner, and little is known about how source information is actually converted into summaries. In this paper, we define the input sentences that contain the essential information of a generated summary as $\textit{source sentences}$ and study how abstractive summaries are composed by analyzing them. To this end, we annotate source sentences for reference summaries and for system summaries generated by PEGASUS on document-summary pairs sampled from the CNN/DailyMail and XSum datasets. We also formulate the task of automatic source sentence detection and compare multiple methods to establish a strong baseline for it. Experimental results show that a perplexity-based method performs well in highly abstractive settings, while similarity-based methods perform robustly in relatively extractive settings. Our code and data are available at https://github.com/suhara/sourcesum.
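To illustrate the similarity-based family of detection methods mentioned above, a minimal sketch might rank each document sentence by its lexical overlap with the summary and return the top-scoring sentences as predicted source sentences. This is only a hedged illustration: the Jaccard scorer, function names, and top-k selection below are our own assumptions, not the paper's actual implementation (which compares multiple, likely stronger, similarity measures).

```python
def token_overlap(sentence: str, summary: str) -> float:
    """Jaccard similarity over lowercased whitespace tokens (illustrative scorer)."""
    ts, tm = set(sentence.lower().split()), set(summary.lower().split())
    return len(ts & tm) / max(len(ts | tm), 1)

def detect_source_sentences(doc_sentences: list[str], summary: str, top_k: int = 2) -> list[int]:
    """Return indices of the top_k document sentences most similar to the summary."""
    scored = sorted(
        ((token_overlap(s, summary), i) for i, s in enumerate(doc_sentences)),
        reverse=True,
    )
    return sorted(i for _, i in scored[:top_k])

doc = [
    "The cat sat on the mat.",
    "Stocks rose sharply today.",
    "A cat was seen on a mat downtown.",
]
print(detect_source_sentences(doc, "A cat sat on a mat."))  # → [0, 2]
```

A perplexity-based method would instead score each document sentence by how much its removal (or inclusion) changes a language model's likelihood of the summary, which the paper finds more effective in highly abstractive settings.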