The vast and growing body of research papers in any area of study necessitates automated summarisation systems that can present the key research conducted and its corresponding findings. Scientific paper summarisation is a challenging task for several reasons, including the token length limits of modern transformer models and the corresponding memory and compute requirements for long text. A significant amount of work has been conducted in this area, with approaches that modify the attention mechanisms of existing transformer models and others that utilise discourse information to capture long-range dependencies in research papers. In this paper, we propose a hybrid methodology for research paper summarisation that combines extractive and abstractive approaches. We use the extractive approach to capture the key findings of the research and pair it with the paper's introduction, which captures the motivation for the research. We use two unsupervised models for the extraction stage and two transformer language models for the abstractive stage, resulting in four combinations for our hybrid approach. Model performance is evaluated on three metrics, and we present our findings in this paper. We find that, with certain hyperparameter combinations, automated summarisation systems can exceed the abstractiveness of human-written summaries. Finally, we outline future research directions for extending this methodology to the summarisation of general long documents.
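To make the pipeline concrete, the following is a minimal sketch of the hybrid approach, assuming a TextRank-style unsupervised extractor and a BART summarisation model via the Hugging Face `transformers` pipeline; the model choice, the naive sentence splitter, and the `hybrid_summarise` helper are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a hybrid extractive-abstractive pipeline.
# Assumptions: TextRank-style extraction and BART as the abstractive model;
# the paper's actual extractors and language models may differ.
import re
import numpy as np
from transformers import pipeline  # pip install transformers

def split_sentences(text):
    """Naive sentence splitter; a production system would use spaCy or NLTK."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def textrank_extract(sentences, top_k=5, damping=0.85, iters=50):
    """Rank sentences by TextRank over a word-overlap similarity graph."""
    n = len(sentences)
    tokens = [set(s.lower().split()) for s in sentences]
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and tokens[i] and tokens[j]:
                sim[i, j] = len(tokens[i] & tokens[j]) / (
                    np.log(len(tokens[i]) + 1) + np.log(len(tokens[j]) + 1) + 1e-9)
    # Row-normalise into a transition matrix, then run power iteration (PageRank).
    row_sums = sim.sum(axis=1, keepdims=True)
    trans = np.divide(sim, row_sums,
                      out=np.full_like(sim, 1.0 / n), where=row_sums > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * trans.T @ scores
    top = sorted(np.argsort(scores)[-top_k:])  # preserve document order
    return [sentences[i] for i in top]

def hybrid_summarise(introduction, body, summariser, top_k=5):
    """Pair the introduction (motivation) with extracted key findings,
    then compress the combined text abstractively."""
    key_findings = textrank_extract(split_sentences(body), top_k=top_k)
    combined = introduction + " " + " ".join(key_findings)
    return summariser(combined, max_length=200, min_length=60,
                      truncation=True)[0]["summary_text"]

# Usage (BART is one plausible choice of abstractive model):
# summariser = pipeline("summarization", model="facebook/bart-large-cnn")
# print(hybrid_summarise(intro_text, body_text, summariser))
```

Extracting first and then abstracting keeps the transformer's input within its token limit while still drawing on content from the full paper, which is the motivation for the hybrid design.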
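On measuring abstractiveness: a common proxy is the proportion of summary n-grams that do not appear in the source document. The abstract does not name the metric used, so this choice is an assumption; a minimal sketch follows.

```python
# Novel n-gram ratio as a proxy for abstractiveness: the fraction of summary
# n-grams absent from the source text. This metric choice is an assumption;
# the paper's exact abstractiveness measure is not stated in the abstract.
def novel_ngram_ratio(source, summary, n=2):
    def ngrams(text, n):
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    source_ngrams = ngrams(source, n)
    summary_ngrams = ngrams(summary, n)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams - source_ngrams) / len(summary_ngrams)

# A higher ratio indicates more rephrasing and less verbatim copying:
# novel_ngram_ratio(paper_text, generated_summary, n=2)
```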