We explore the optimal training of protein language models, an area of significant interest in biological research where guidance on best practices is limited. Most models are trained with extensive compute resources until performance gains plateau, focusing primarily on increasing model size rather than on the compute-efficient frontier that balances performance against compute budget. Our investigation is grounded in a massive dataset of 939 million protein sequences. We trained over 300 models, ranging from 3.5 million to 10.7 billion parameters, on 5 to 200 billion unique tokens to investigate the relationships among model size, number of training tokens, and training objective. First, we observed diminishing returns for the Causal Language Model (CLM) and overfitting for the Masked Language Model (MLM) when repeating the commonly used UniRef database. To address this, we included metagenomic protein sequences in the training set to increase diversity and avoid the plateau and overfitting effects. Second, we obtained scaling laws for CLM and MLM on Transformers, tailored to the specific characteristics of protein sequence data. Third, we observed a transfer-scaling phenomenon from CLM to MLM, demonstrating the effectiveness of transfer through scaling behaviors based on estimated Effectively Transferred Tokens. Finally, to validate our scaling laws, we compared large-scale versions of ESM-2 and PROGEN2 on downstream tasks, encompassing protein generation as well as structure- and function-related evaluations, all within equal or smaller pre-training compute budgets.
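For context, scaling-law analyses of this kind typically fit a saturating power law in model size and data size; the parameterization below is a common Chinchilla-style form and is stated here as an assumption, since the abstract does not specify the fitted functional form:
\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\]
where $L$ is the pre-training loss, $N$ the number of model parameters, $D$ the number of training tokens, $E$ the irreducible loss, and $A$, $B$, $\alpha$, $\beta$ are fitted constants. Similarly, the Effectively Transferred Tokens $D_T$ mentioned above can be read, following the transfer-scaling literature, as the additional data a from-scratch model would need to match the transferred model, i.e., $L_{\text{scratch}}(N, D + D_T) = L_{\text{transfer}}(N, D)$; this reading is likewise an assumption about the estimator used.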