Summarization of scientific text offers significant benefits to both the research community and society at large. Because scientific text is distinctive in nature and the input to multi-document summarization is substantially long, the task demands effective embedding generation and text truncation that does not lose important information. To tackle these issues, we propose SKT5SciSumm, a hybrid framework for multi-document scientific summarization (MDSS). We leverage the Sentence-Transformer version of Scientific Paper Embeddings using Citation-Informed Transformers (SPECTER) to encode and represent textual sentences, enabling efficient extractive summarization via k-means clustering. We then employ the T5 family of models to generate abstractive summaries from the extracted sentences. SKT5SciSumm achieves state-of-the-art performance on the Multi-XScience dataset. Through extensive experiments and evaluation, we show that our model attains remarkable results with less complicated components, highlighting its potential to advance multi-document summarization for scientific text.
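The extractive stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the Sentence-Transformers SPECTER embeddings, and the function name and cluster count are our own choices. The idea is to cluster sentence embeddings with k-means and keep, for each cluster, the sentence closest to its centroid; the selected sentences would then be passed to a T5 model for abstractive summarization.

```python
# Sketch of SKT5SciSumm's extractive stage: k-means over sentence
# embeddings, keeping the sentence nearest each centroid.
# Random vectors stand in for SPECTER embeddings here; in the real
# pipeline they would come from a Sentence-Transformers SPECTER model.
import numpy as np
from sklearn.cluster import KMeans


def extract_sentences(embeddings: np.ndarray, sentences: list, k: int) -> list:
    """Pick up to k representative sentences, one per k-means cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    picked = []
    for centroid in km.cluster_centers_:
        # Sentence whose embedding is closest to this cluster centroid.
        idx = int(np.argmin(np.linalg.norm(embeddings - centroid, axis=1)))
        picked.append(sentences[idx])
    # Deduplicate and restore original document order.
    return sorted(set(picked), key=sentences.index)


rng = np.random.default_rng(0)
sents = [f"sentence {i}" for i in range(10)]
embs = rng.normal(size=(10, 16))  # stand-in for 768-d SPECTER vectors
summary_input = extract_sentences(embs, sents, k=3)
print(summary_input)  # at most 3 sentences, in document order
```

The selected sentences form a much shorter input, which sidesteps the length limits of the abstractive model while retaining cluster-level coverage of the source documents.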