Automatic extraction of information from publications is key to making scientific knowledge machine-readable at a large scale. The extracted information can, for example, facilitate academic search, decision making, and knowledge graph construction. An important type of information not covered by existing approaches is hyperparameters. In this paper, we formalize and tackle hyperparameter information extraction (HyperPIE) as an entity recognition and relation extraction task. We create a labeled data set covering publications from a variety of computer science disciplines. Using this data set, we train and evaluate fine-tuned BERT-based models as well as five large language models: GPT-3.5, GALACTICA, Falcon, Vicuna, and WizardLM. For fine-tuned models, we develop a relation extraction approach that achieves an improvement of 29% F1 over a state-of-the-art baseline. For large language models, we develop an approach leveraging YAML output for structured data extraction, which achieves an average improvement of 5.5% F1 in entity recognition over using JSON. Using our best performing model, we extract hyperparameter information from a large number of unannotated papers and analyze patterns across disciplines. All our data and source code are publicly available at https://github.com/IllDepence/hyperpie.