High-quality scientific extreme summaries (TLDRs) facilitate effective science communication. How well do large language models (LLMs) perform in generating them? How do LLM-generated summaries differ from those written by human experts? Answering these questions, however, is hindered by the lack of a comprehensive, high-quality scientific TLDR dataset, which limits both the development and evaluation of LLMs' summarization ability. To address this gap, we propose a novel dataset, BiomedTLDR, containing a large sample of researcher-authored summaries of scientific papers, built by leveraging the common practice of including authors' comments alongside bibliography items. We then test popular open-weight LLMs on generating TLDRs from abstracts. Our analysis reveals that, although some of them successfully produce human-like summaries, LLMs generally adhere more closely to the original text's lexical choices and rhetorical structures, and hence tend to be more extractive than abstractive compared with human experts. Our code and datasets are available at https://github.com/netknowledge/LLM_summarization (Lyu and Ke, 2025).