I present Astro-HEP-BERT, a transformer-based language model specifically designed for generating contextualized word embeddings (CWEs) to study the meanings of concepts in astrophysics and high-energy physics. Built on a general pretrained BERT model, Astro-HEP-BERT underwent further training over three epochs on the Astro-HEP Corpus, a dataset I curated from 21.84 million paragraphs extracted from more than 600,000 scholarly articles on arXiv, all belonging to at least one of these two scientific domains. The project demonstrates both the effectiveness and the feasibility of adapting a bidirectional transformer for applications in the history, philosophy, and sociology of science (HPSS). The entire training process relied on freely available code, pretrained weights, and text inputs, and was completed on a single MacBook Pro laptop (M2, 96 GB). Preliminary evaluations indicate that Astro-HEP-BERT's CWEs perform comparably to those of domain-adapted BERT models trained from scratch on larger datasets in domain-specific word sense disambiguation, word sense induction, and related semantic-change analyses. This suggests that further pretraining general language models on specific scientific domains can be a cost-effective and efficient strategy for HPSS researchers, enabling high performance without the expense of training from scratch.
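To make the core operation concrete, the sketch below shows one common way to extract a CWE for a target term from a sentence using the Hugging Face Transformers library. The checkpoint identifier, example sentence, target word, and subword-averaging convention are illustrative assumptions, not specifics taken from this abstract.

```python
# Minimal sketch: extracting a contextualized word embedding (CWE)
# for one target word. Assumes torch and transformers are installed;
# the model id below is a placeholder for the actual checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "arnosimons/astro-hep-bert"  # hypothetical checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

sentence = "The dark matter halo dominates the galaxy's rotation curve."
target = "halo"

# Run the sentence through the encoder and keep the final hidden states.
encoding = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**encoding).last_hidden_state[0]  # (seq_len, hidden_dim)

# Locate the subword tokens belonging to the target word via character
# offsets; special tokens like [CLS]/[SEP] have empty (0, 0) offsets
# and are filtered out by the s < e condition.
offsets = tokenizer(sentence, return_offsets_mapping=True)["offset_mapping"]
start = sentence.index(target)
end = start + len(target)
positions = [
    i for i, (s, e) in enumerate(offsets)
    if s >= start and e <= end and s < e
]

# One widespread convention (an assumption here, not necessarily the
# paper's setup): average the subword vectors to obtain a single CWE.
cwe = hidden_states[positions].mean(dim=0)
print(cwe.shape)  # e.g. torch.Size([768]) for a BERT-base encoder
```

Vectors obtained this way can then be compared across contexts (for instance, via cosine similarity or clustering) to support the word sense disambiguation, induction, and semantic-change analyses mentioned above.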