We propose Sci2Pol-Bench and Sci2Pol-Corpus, the first benchmark and training dataset for evaluating and fine-tuning large language models (LLMs) on policy brief generation from scientific papers. We build Sci2Pol-Bench on a five-stage taxonomy that mirrors the human writing process: (i) Autocompletion, (ii) Understanding, (iii) Summarization, (iv) Generation, and (v) Verification. It comprises 18 tasks in multiple-choice and open-ended formats. For the Generation stage in particular, we show that BERTScore and ROUGE fail to capture the quality of brief writing, and we introduce a new LLM-based evaluation metric aligned with expert judgement. Using this benchmark, we evaluate 13 leading open-source and commercial LLMs and uncover key limitations. To improve LLM performance on brief writing, we curate Sci2Pol-Corpus for fine-tuning. We first link each cited scientific paper to its corresponding policy document, drawn from 5.6 million policy records, which produces 140,000 candidate pairs. We then employ an LLM-as-a-judge to filter for high-quality examples, followed by in-context polishing using three expert-written samples as references. This process yields a final set of 639 new pairs. Finally, we fine-tune three models on Sci2Pol-Corpus: LLaMA-3.1-8B, Gemma-12B, and Gemma-27B. Fine-tuning leads to consistent performance improvements across Sci2Pol-Bench. Notably, the fine-tuned Gemma-27B surpasses the much larger GPT-4o and DeepSeek-V3 (671B). These results demonstrate the effectiveness of our corpus in bridging the gap between science and policy.